WorldWideScience

Sample records for composite likelihood estimation

  1. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the derived base frequencies in the two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
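    The core computational idea described above, treating the product of per-region marginal likelihoods as a stand-in for the full likelihood, can be sketched in a few lines. The toy Poisson model per region, the mutation scaling of 1e-4, and the single parameter theta below are illustrative assumptions, not the paper's demographic model or its joint-frequency-spectrum likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy stand-in for a per-region marginal likelihood: each region contributes
# a Poisson likelihood for its count of derived variants, with an expectation
# that depends on a single demographic parameter theta (hypothetical model).
rng = np.random.default_rng(1)
true_theta = 2.0
region_lengths = rng.integers(5_000, 20_000, size=200)      # bp per region
counts = rng.poisson(true_theta * region_lengths * 1e-4)    # simulated data

def region_loglik(theta, length, count):
    """Marginal log-likelihood of one region under the toy Poisson model."""
    lam = theta * length * 1e-4
    return count * np.log(lam) - lam  # constant term dropped

def composite_loglik(theta):
    # Composite likelihood: product over regions -> sum of log-likelihoods.
    return sum(region_loglik(theta, L, c) for L, c in zip(region_lengths, counts))

res = minimize_scalar(lambda t: -composite_loglik(t), bounds=(0.1, 10), method="bounded")
print("composite-likelihood estimate of theta:", res.x)
```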

  2. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper, register-based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  3. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  4. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  5. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  6. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  7. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
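    A minimal sketch of the weighting idea: pairwise log-likelihood terms are summed only for site pairs within the taper range. The bivariate max-stable density itself is passed in as a user-supplied function, and the hard 0/1 taper together with all names below are illustrative assumptions rather than the paper's implementation; the selection of the taper range by maximizing Godambe-information criteria is not shown.

```python
import numpy as np

def tapered_pairwise_cl(theta, sites, data, pair_loglik, taper_range):
    """
    Tapered pairwise composite log-likelihood (sketch).

    sites      : (S, 2) array of site coordinates
    data       : (T, S) array of observations (e.g. annual maxima per site)
    pair_loglik: function(theta, y_i, y_j, d_ij) -> bivariate log-likelihood,
                 e.g. a bivariate max-stable density (not implemented here)
    taper_range: pairs farther apart than this get weight 0
    """
    S = sites.shape[0]
    total = 0.0
    for i in range(S):
        for j in range(i + 1, S):
            d_ij = np.linalg.norm(sites[i] - sites[j])
            w_ij = 1.0 if d_ij <= taper_range else 0.0  # hard taper weight
            if w_ij > 0:
                total += w_ij * pair_loglik(theta, data[:, i], data[:, j], d_ij)
    return total
```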

  8. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan; Genton, Marc G.

    2014-01-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  9. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta)·exp[-((1/lambda)+(1/theta))t] for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
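    Under the exponential models quoted above (with lambda and theta the mean time to failure and mean time to repair), the maximum likelihood estimates of lambda and theta are the sample means, and by invariance the MLE of A(t) follows by plugging them into the formula. A minimal sketch with simulated failure-repair cycles; the simulated numbers are illustrative only.

```python
import numpy as np

def availability_mle(x, y, t):
    """
    MLE of instantaneous availability A(t) under exponential time-to-failure
    (mean lambda) and time-to-repair (mean theta) models, as in the abstract:
      A(t) = lam/(lam+th) + th/(lam+th) * exp(-((1/lam)+(1/th)) * t)
    x, y : observed failure and repair times from n cycles.
    """
    lam_hat = np.mean(x)          # MLE of the exponential mean time to failure
    th_hat = np.mean(y)           # MLE of the exponential mean time to repair
    a_inf = lam_hat / (lam_hat + th_hat)                      # steady state
    a_t = a_inf + (th_hat / (lam_hat + th_hat)) * np.exp(-(1 / lam_hat + 1 / th_hat) * t)
    return a_t, a_inf

# Example with simulated failure/repair cycles
rng = np.random.default_rng(0)
x = rng.exponential(scale=100.0, size=50)   # hours to failure
y = rng.exponential(scale=5.0, size=50)     # hours to repair
print(availability_mle(x, y, t=10.0))
```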

  10. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. The method has been validated in experiments, where it provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiments, an intensive theoretical analysis in terms of signal processing is also presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
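    As context, a plain (unwindowed) cross-correlation delay estimate looks like the sketch below; the maximum likelihood window described in the paper would add a frequency-domain weighting on top of this baseline, which is not reproduced here. The signal lengths, sampling rate, and simulated 25-sample delay are illustrative assumptions.

```python
import numpy as np

def crosscorr_delay(s1, s2, fs):
    """Estimate the delay of s1 relative to s2 from the peak of their
    cross-correlation (the ML window in the paper would weight this in the
    frequency domain; here only the unweighted baseline is shown)."""
    cc = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
    lags = np.arange(-(len(s2) - 1), len(s1))
    return lags[np.argmax(cc)] / fs  # delay in seconds

# Toy example: s2 delayed by 25 samples relative to s1, plus noise
rng = np.random.default_rng(0)
fs = 10_000.0
s1 = rng.standard_normal(4096)
s2 = np.roll(s1, 25) + 0.1 * rng.standard_normal(4096)
print(crosscorr_delay(s2, s1, fs))  # ~ 25/fs seconds
```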

  11. A composite likelihood approach for spatially correlated survival data

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
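    A minimal sketch of a pairwise composite likelihood under the FGM copula, whose density is c(u,v) = 1 + alpha(1-2u)(1-2v): each site pair contributes the log copula density evaluated at the (assumed known) marginal probability transforms of the event times. Censoring and the paper's modelling of alpha as a function of pairwise distances are omitted; the data, the site pairs, and all names below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fgm_pair_loglik(alpha, u, v):
    """Log-density of the FGM copula, c(u, v) = 1 + alpha*(1-2u)*(1-2v)."""
    return np.log(1.0 + alpha * (1.0 - 2.0 * u) * (1.0 - 2.0 * v))

def pairwise_cl(alpha, U, pairs):
    """
    Pairwise composite log-likelihood for the FGM dependence parameter.
    U     : (n_subjects, n_sites) matrix of uniform scores F(T) of event times
            (margins assumed known or fitted beforehand; censoring ignored)
    pairs : list of (i, j) site pairs to include
    """
    return sum(fgm_pair_loglik(alpha, U[:, i], U[:, j]).sum() for i, j in pairs)

# Toy data with independent margins (true alpha = 0)
rng = np.random.default_rng(0)
U = rng.uniform(size=(500, 4))
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
alpha_hat = minimize_scalar(lambda a: -pairwise_cl(a, U, pairs),
                            bounds=(-0.99, 0.99), method="bounded").x
print("estimated FGM dependence parameter:", alpha_hat)
```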

  12. A composite likelihood approach for spatially correlated survival data.

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.

  13. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach to estimating the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior into the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
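    A sketch of the idea for a two-component univariate normal mixture: an inverse-gamma penalty on each component variance changes only the M-step variance update, which gains 2*beta in the numerator and 2*(alpha+1) in the denominator and therefore cannot collapse to zero. The exact penalty parametrization in the paper may differ; the hyperparameter values and data below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def penalized_em(x, K=2, alpha=2.0, beta=0.5, n_iter=200, seed=0):
    """
    EM for a K-component univariate normal mixture with an inverse-gamma
    (alpha, beta) penalty on each variance, which keeps variance estimates
    away from zero and so avoids the likelihood singularities.
    (Sketch; the paper's exact penalty parametrization may differ.)
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False)
    var = np.full(K, np.var(x))
    for _ in range(n_iter):
        # E-step: responsibilities
        dens = np.array([pi[k] * norm.pdf(x, mu[k], np.sqrt(var[k])) for k in range(K)])
        r = dens / dens.sum(axis=0)
        # M-step (penalized): the variance update gains 2*beta in the numerator
        # and 2*(alpha + 1) in the denominator from the inverse-gamma penalty.
        Nk = r.sum(axis=1)
        pi = Nk / n
        mu = (r @ x) / Nk
        var = np.array([(r[k] @ (x - mu[k]) ** 2 + 2 * beta) / (Nk[k] + 2 * (alpha + 1))
                        for k in range(K)])
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
print(penalized_em(x))
```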

  14. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano; Huser, Raphaël; Genton, Marc G.

    2015-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  15. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2015-09-29

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  16. Supplementary Material for: High-Order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano; Huser, Raphaël; Genton, Marc G.

    2016-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of points is a very challenging problem and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  17. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.

  18. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  19. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  20. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  1. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, such methods, for example Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.

  2. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
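    The signal-to-noise eigenvector compression amounts to a generalized eigenvalue problem: solve S v = lambda N v for signal covariance S and noise covariance N, and project the data onto the highest-eigenvalue modes. The sketch below illustrates only this step on synthetic covariances, not the WMAP polarization pipeline; the matrix sizes and mode count are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

def sn_compress(S, N, d, n_modes):
    """
    Compress a data vector d using the signal-to-noise eigenvector basis:
    solve the generalized eigenproblem S v = lambda N v and keep the modes
    with the largest eigenvalues (highest signal-to-noise).
    """
    evals, evecs = eigh(S, N)            # eigenvalues in ascending order
    modes = evecs[:, -n_modes:]          # best signal-to-noise modes
    return modes.T @ d, modes

# Toy example with a random low-rank "signal" and white "noise" covariance
rng = np.random.default_rng(0)
p = 200
A = rng.standard_normal((p, 5))
S = A @ A.T                              # rank-5 signal covariance
N = np.eye(p)                            # white noise covariance
d = rng.multivariate_normal(np.zeros(p), S + N)
compressed, modes = sn_compress(S, N, d, n_modes=20)
print(compressed.shape)                  # (20,)
```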

  3. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the percentages of incorrect velocity estimates are 0%, 19.1%, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed...

  4. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purpose, a randomly censored real data set is discussed.

  5. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  6. Likelihood Estimation of Gamma Ray Bursts Duration Distribution

    OpenAIRE

    Horvath, Istvan

    2005-01-01

    Two classes of Gamma Ray Bursts have been identified so far, characterized by T90 durations shorter and longer than approximately 2 seconds. It was shown that the BATSE 3B data allow a good fit with three Gaussian distributions in log T90. In the same volume of ApJ, another paper suggested that a third class of GRBs may exist. Using the full BATSE catalog, here we present the maximum likelihood estimation, which gives a 0.5% probability of having only two subclasses. The MC simulation co...

  7. Estimating likelihood of future crashes for crash-prone drivers

    OpenAIRE

    Subasish Das; Xiaoduan Sun; Fan Wang; Charles Leboeuf

    2015-01-01

    At-fault crash-prone drivers are usually considered as the high risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on the driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the a...

  8. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods

  9. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  10. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...

  11. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  12. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)

  13. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  14. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
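    For the acute-exposure case, the estimation problem reduces to Poisson maximum likelihood with a linear-quadratic mean per cell, lambda(d) = alpha + beta*d + gamma*d^2. A minimal sketch with made-up dose-group data (the counts below are illustrative, not from the paper, and the paper's general dose-time-response models and regression diagnostics are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical acute-exposure data: dose (Gy), cells scored, dicentrics observed
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
cells = np.array([5000, 5000, 3000, 2000, 1000, 800, 500])
dics = np.array([6, 24, 39, 70, 118, 208, 223])

def negloglik(params):
    """Poisson negative log-likelihood for the linear-quadratic yield
    lambda(d) = a + b*d + c*d^2 per cell (constant terms dropped)."""
    a, b, c = params
    lam = cells * (a + b * dose + c * dose**2)   # expected dicentrics per group
    return np.sum(lam - dics * np.log(lam))

res = minimize(negloglik, x0=[1e-3, 1e-2, 1e-2],
               bounds=[(1e-6, None)] * 3, method="L-BFGS-B")
print("MLE of (alpha, beta, gamma):", res.x)
```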

  15. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.

  16. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure

  17. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L; DuFrain, R.J.

    1983-10-01

    In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  18. Affective mapping: An activation likelihood estimation (ALE) meta-analysis.

    Science.gov (United States)

    Kirby, Lauren A J; Robinson, Jennifer L

    2017-11-01

    Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure, Hypesti, first approaches the maximum-likelihood estimate by iterating in the profile log-likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  20. Estimating likelihood of future crashes for crash-prone drivers

    Directory of Open Access Journals (Sweden)

    Subasish Das

    2015-06-01

    Full Text Available At-fault crash-prone drivers are usually considered as the high risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on the driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the at-fault drivers. The logistic regression method is used, employing eight years of traffic crash data (2004–2011) in Louisiana. Crash predictors such as the driver's crash involvement, crash and road characteristics, human factors, collision type, and environmental factors are considered in the model. The at-fault and not-at-fault status of the crashes is used as the response variable. The developed model has identified a few important variables, and is used to correctly classify at-fault crashes up to 62.40% with a specificity of 77.25%. This model can identify as many as 62.40% of the crash incidence of at-fault drivers in the upcoming year. Traffic agencies can use the model for monitoring the performance of at-fault crash-prone drivers and making roadway improvements meant to reduce crash proneness. From the findings, it is recommended that crash-prone drivers should be targeted for special safety programs regularly through education and regulations.

  1. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

    The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed
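    The rule under discussion reads off the interval where the log-likelihood is within 1/2 of its maximum. For a lifetime (exponential) measurement, one of the two cases examined, a sketch of that construction is below; the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
t = rng.exponential(scale=2.0, size=20)    # observed lifetimes
n, s = len(t), t.sum()

def loglik(tau):
    """Log-likelihood of the exponential mean lifetime tau."""
    return -n * np.log(tau) - s / tau

tau_hat = s / n                            # maximum likelihood estimate
target = loglik(tau_hat) - 0.5             # log-likelihood drops by 1/2

lo = brentq(lambda tau: loglik(tau) - target, 1e-6, tau_hat)
hi = brentq(lambda tau: loglik(tau) - target, tau_hat, 100 * tau_hat)
print(f"tau_hat = {tau_hat:.3f}, approx 68% interval = ({lo:.3f}, {hi:.3f})")
```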

  2. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2004-01-01

    The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.

  3. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information; High dimension; Hierarchical maximum likelihood; Nested parameter spaces; Spectral diagonal covariance model; Sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016

  4. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    Science.gov (United States)

    1982-04-01

    34 a> E r’r~e r ,C Ie I# ne,..,.rVi rnd Id.,flfv b1 - bindk numb.r) I; ,t-i i-cd I ’ tiie bias in the maximum likelihood ,st i- i;, ’ t iIeiIrs in...NTC, IL 60088 Psychometric Laboratory University of North Carolina I ERIC Facility-Acquisitions Davie Hall 013A 4833 Rugby Avenue Chapel Hill, NC

  5. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and the BMA-TIE prediction has better predictive performance than the other BMA predictions. TIE is highly stable for estimating a conceptual model's marginal likelihood: the marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
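    Of the four estimators compared, the arithmetic mean estimator (AME) averages the likelihood over prior draws, while the harmonic mean estimator (HME) uses the harmonic mean of the likelihood over posterior draws. A toy conjugate-normal sketch of just these two, checked against the exact marginal likelihood, is below; SHME and TIE are not shown, and the toy model is an illustrative assumption, not the groundwater model of the study.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
sigma, tau = 1.0, 2.0                          # known data sd, prior sd
y = rng.normal(1.5, sigma, size=30)            # toy data
n = len(y)

def loglik(theta):
    """Log-likelihood of the data for an array of theta values."""
    return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

# Arithmetic mean estimator: average the likelihood over draws from the prior
theta_prior = rng.normal(0.0, tau, size=20_000)
ame = np.log(np.mean(np.exp(loglik(theta_prior))))

# Harmonic mean estimator: harmonic mean of the likelihood over posterior draws
post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
post_mean = post_var * y.sum() / sigma**2
theta_post = rng.normal(post_mean, np.sqrt(post_var), size=20_000)
hme = -np.log(np.mean(np.exp(-loglik(theta_post))))

# Exact log marginal likelihood for comparison (y is jointly Gaussian)
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
print(ame, hme, exact)
```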

  6. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    Science.gov (United States)

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  7. Existence and uniqueness of the maximum likelihood estimator for models with a Kronecker product covariance structure

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.

    2016-01-01

    This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
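    For this model, a standard way to compute the maximum likelihood estimate in practice is the flip-flop algorithm, which alternates between the two Kronecker factors; the paper itself studies existence and uniqueness of the estimator rather than a particular algorithm. The sketch below, including the toy data generation and the trace normalization used to fix the scale indeterminacy, is an illustrative assumption.

```python
import numpy as np

def flipflop_kronecker_mle(X, n_iter=50):
    """
    Flip-flop iterations for the MLE of a Kronecker-structured covariance
    Cov(vec(X_i)) = Sigma_q (x) Sigma_p, given n observed p-by-q matrices X_i
    (vec stacks columns).
    """
    n, p, q = X.shape
    Sp, Sq = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        Sq_inv = np.linalg.inv(Sq)
        Sp = sum(Xi @ Sq_inv @ Xi.T for Xi in X) / (n * q)
        Sp_inv = np.linalg.inv(Sp)
        Sq = sum(Xi.T @ Sp_inv @ Xi for Xi in X) / (n * p)
    # The factors are identified only up to a scalar; fix trace(Sp) = p.
    c = np.trace(Sp) / p
    return Sp / c, Sq * c

# Toy data with a true Kronecker covariance
rng = np.random.default_rng(0)
p, q, n = 4, 3, 500
Ap = np.cov(rng.standard_normal((p, 50)))      # random SPD p x p factor
Aq = np.cov(rng.standard_normal((q, 50)))      # random SPD q x q factor
L = np.linalg.cholesky(np.kron(Aq, Ap))
# Each vec(X_i) = L z; reshape each length-(p*q) vector into a p x q matrix
X = (L @ rng.standard_normal((p * q, n))).T.reshape(n, q, p).transpose(0, 2, 1)
print(flipflop_kronecker_mle(X)[0])
```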

  8. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  9. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.

  10. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In particular, it is consistent as the sample size increases to infinity, making maximum likelihood estimation asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.

  12. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error
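    To give a flavour of ML tag-set cardinality estimation, the sketch below maximizes an approximate likelihood of the observed empty/singleton/collision slot counts in one framed-slotted ALOHA round, treating slots as independent. It omits the multiple reader sessions and the detection-error model that are the point of the paper; the frame size and counts are illustrative.

```python
import numpy as np

def ml_tag_estimate(L, n_empty, n_single, n_coll, n_max=2000):
    """
    Simplified ML estimate of the number of RFID tags from one framed-slotted
    ALOHA read round with L slots, treating slots as independent (a common
    approximation) and ignoring detection errors.
    """
    n_grid = np.arange(1, n_max + 1)
    p0 = (1 - 1 / L) ** n_grid                              # empty slot
    p1 = (n_grid / L) * (1 - 1 / L) ** (n_grid - 1)         # singleton slot
    pc = np.clip(1 - p0 - p1, 1e-12, None)                  # collision slot
    loglik = (n_empty * np.log(p0) + n_single * np.log(p1)
              + n_coll * np.log(pc))
    return n_grid[np.argmax(loglik)]

# Example: frame of 128 slots, observed 30 empty, 52 singleton, 46 collision
print(ml_tag_estimate(L=128, n_empty=30, n_single=52, n_coll=46))
```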

  13. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven

    2005-01-01

    There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods...... of genotyping errors. A similar advantage of the Bayesian method was not observed for missing data. We also re-analyse a recently published set of data from the eggplant and show that the use of the MCMC-based method leads to smaller estimates of genetic distances....

  14. FLEAD: online frequency likelihood estimation anomaly detection for mobile sensing

    NARCIS (Netherlands)

    Le Viet Duc, L Duc; Scholten, Johan; Havinga, Paul J.M.

    With the rise of smartphone platforms, adaptive sensing becomes a predominant key to overcoming intricate constraints such as the smartphone's capabilities and dynamic data. One way to do this is to estimate the event probability based on anomaly detection in order to invoke heavy processes, such as switching on

  15. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  16. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma...
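
    The pairwise composite-likelihood idea can be illustrated with a toy example (my own sketch; a Gaussian shared family effect stands in for the gamma frailty and spline hazards of the abstract): the within-family association is estimated by summing bivariate log-likelihoods over all member pairs.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
# Hypothetical family data: shared family effect plus individual noise
families = [rng.normal(0, 0.7) + rng.normal(0, 1.0, size=rng.integers(2, 5))
            for _ in range(200)]

# Standardize with the pooled mean/SD and collect all within-family pairs
pooled = np.concatenate(families)
z = [(fam - pooled.mean()) / pooled.std() for fam in families]
pairs = np.array([(f[i], f[j]) for f in z for i, j in combinations(range(len(f)), 2)])
x, y = pairs[:, 0], pairs[:, 1]

def neg_pairwise_cl(rho):
    # Negative sum of bivariate standard-normal log-densities with correlation rho
    q = (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
    return np.sum(np.log(2 * np.pi) + 0.5 * np.log(1 - rho**2) + 0.5 * q)

res = minimize_scalar(neg_pairwise_cl, bounds=(-0.9, 0.9), method="bounded")
print("pairwise composite-likelihood estimate of within-family correlation:", round(res.x, 3))
```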

  17. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
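
    The sketch below (my own illustration, not the authors' code) contrasts the two routes to the likelihood discussed above for a single shared systematic error: exact evaluation with the full experimental covariance matrix versus Monte Carlo sampling of the systematic error; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
n_pts = 20
sig_rand, sig_sys = 0.05, 0.10            # random and systematic standard deviations
residual = rng.normal(0.0, 0.08, n_pts)   # model-minus-experiment residuals (hypothetical)

# Conventional route: multivariate Gaussian likelihood with the full covariance matrix
cov = sig_rand**2 * np.eye(n_pts) + sig_sys**2 * np.ones((n_pts, n_pts))
L_exact = multivariate_normal(mean=np.zeros(n_pts), cov=cov).pdf(residual)

# Sampling route: average the independent-Gaussian likelihood over draws of the systematic error
n_samples = 100_000
sys_draws = rng.normal(0.0, sig_sys, n_samples)
per_sample = np.prod(norm.pdf(residual[None, :] - sys_draws[:, None], scale=sig_rand), axis=1)
L_sampled = per_sample.mean()

print(f"likelihood via matrix inversion: {L_exact:.4e}   via sampling: {L_sampled:.4e}")
```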

  18. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    Science.gov (United States)

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  19. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    Science.gov (United States)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
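
    A minimal sketch of the likelihood-ratio idea for photon-counting data (my own illustration, using the Cash statistic C = 2 Σ(m_i − n_i ln m_i) for a constant-rate model; the counts are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

counts = np.array([3, 5, 4, 7, 2, 6, 5, 4])   # hypothetical photon counts per bin

def cash(rate):
    # Cash statistic for a constant-rate Poisson model (data-only constants dropped)
    model = np.full(counts.shape, rate, dtype=float)
    return 2.0 * np.sum(model - counts * np.log(model))

rate_hat = counts.mean()          # ML estimate of the constant rate
c_min = cash(rate_hat)

# 1-sigma interval: rates at which the Cash statistic rises by 1 above its minimum
lo = brentq(lambda r: cash(r) - c_min - 1.0, 1e-3, rate_hat)
hi = brentq(lambda r: cash(r) - c_min - 1.0, rate_hat, 50.0)
print(f"rate = {rate_hat:.2f}  (1-sigma interval {lo:.2f} .. {hi:.2f})")
```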

  20. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ∼10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
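
    The following sketch (my own illustration, with a hypothetical linear model) shows the kind of score-based compression the abstract refers to: for a Gaussian likelihood, the summary t = ∂μᵀ C⁻¹ (d − μ_fid) reduces the data to one number per parameter while preserving the Fisher information at the fiducial point.

```python
import numpy as np

rng = np.random.default_rng(5)
n_data, theta_fid = 100, 1.0
x = np.linspace(0.0, 1.0, n_data)
C = 0.05**2 * np.eye(n_data)                      # hypothetical data covariance

def mu(theta):
    return theta * x                              # hypothetical model for the data mean

d = mu(1.07) + rng.multivariate_normal(np.zeros(n_data), C)   # simulated data

dmu = x                                           # derivative of the mean w.r.t. theta
t = dmu @ np.linalg.solve(C, d - mu(theta_fid))   # compressed summary: one number per parameter
fisher = dmu @ np.linalg.solve(C, dmu)
print("estimate recovered from the compressed summary:", theta_fid + t / fisher)
```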

  1. Outlier identification procedures for contingency tables using maximum likelihood and L1 estimates

    NARCIS (Netherlands)

    Kuhnt, S.

    2004-01-01

    Observed cell counts in contingency tables are perceived as outliers if they have low probability under an anticipated loglinear Poisson model. New procedures for the identification of such outliers are derived using the classical maximum likelihood estimator and an estimator based on the L1 norm.

  2. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  3. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  4. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used......

  5. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates some desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it still retains good accuracy on point estimation. Published by Elsevier Ltd.
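
    For contrast with the proposed birth-death estimator, the sketch below (my own, using the classical Lea-Coulson formulation and the Ma-Sandri-Sarkar recursion) computes the conventional maximum likelihood estimate of the expected number of mutations from hypothetical mutant counts.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """Luria-Delbrueck mutant-count probabilities p_0..p_{n_max} for expected mutations m."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[j] / (n - j + 1) for j in range(n))
    return p

counts = np.array([0, 1, 0, 3, 2, 0, 0, 5, 1, 12, 0, 2])   # hypothetical fluctuation-test data

def neg_log_lik(m):
    p = ld_pmf(m, counts.max())
    return -np.sum(np.log(p[counts] + 1e-300))

res = minimize_scalar(neg_log_lik, bounds=(1e-3, 20), method="bounded")
print("ML estimate of the expected number of mutations m:", round(res.x, 3))
```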

  6. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, when used as optimization criteria, should be able to locate a similar (unknown) optimum. Discrepancies might result from a wrong distributional assumption for the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
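
    The comparison can be reproduced on a toy Gaussian case (my own sketch, hypothetical data): the same location and scale are estimated once by maximum likelihood and once by minimizing the closed-form Gaussian CRPS.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.normal(1.5, 0.8, 500)            # hypothetical verifying observations

def neg_log_lik(theta):
    mu, log_sigma = theta
    return -np.sum(norm.logpdf(y, mu, np.exp(log_sigma)))

def mean_crps(theta):
    # Closed-form CRPS of a Gaussian predictive distribution, averaged over observations
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = (y - mu) / sigma
    crps = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))
    return crps.mean()

ml = minimize(neg_log_lik, x0=[0.0, 0.0]).x
cr = minimize(mean_crps, x0=[0.0, 0.0]).x
print("ML   estimates  mu, sigma:", ml[0], np.exp(ml[1]))
print("CRPS estimates  mu, sigma:", cr[0], np.exp(cr[1]))
```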

  7. Maximum likelihood estimation of the position of a radiating source in a waveguide

    International Nuclear Information System (INIS)

    Hinich, M.J.

    1979-01-01

    An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramér-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array regardless of the array size.

  8. Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation

    International Nuclear Information System (INIS)

    Bardsley, Johnathan M; Goldes, John

    2009-01-01

    In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness

  9. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M⁻¹). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  10. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Performance of penalized maximum likelihood in estimation of genetic covariances matrices

    Directory of Open Access Journals (Sweden)

    Meyer Karin

    2011-11-01

    Full Text Available Abstract Background Estimation of genetic covariance matrices for multivariate problems comprising more than a few traits is inherently problematic, since sampling variation increases dramatically with the number of traits. This paper investigates the efficacy of regularized estimation of covariance components in a maximum likelihood framework, imposing a penalty on the likelihood designed to reduce sampling variation. In particular, penalties that "borrow strength" from the phenotypic covariance matrix are considered. Methods An extensive simulation study was carried out to investigate the reduction in average 'loss', i.e. the deviation in estimated matrices from the population values, and the accompanying bias for a range of parameter values and sample sizes. A number of penalties are examined, penalizing either the canonical eigenvalues or the genetic covariance or correlation matrices. In addition, several strategies to determine the amount of penalization to be applied, i.e. to estimate the appropriate tuning factor, are explored. Results It is shown that substantial reductions in loss for estimates of genetic covariance can be achieved for small to moderate sample sizes. While no penalty performed best overall, penalizing the variance among the estimated canonical eigenvalues on the logarithmic scale or shrinking the genetic towards the phenotypic correlation matrix appeared most advantageous. Estimating the tuning factor using cross-validation resulted in a loss reduction 10 to 15% less than that obtained if population values were known. Applying a mild penalty, chosen so that the deviation in likelihood from the maximum was non-significant, performed as well if not better than cross-validation and can be recommended as a pragmatic strategy. Conclusions Penalized maximum likelihood estimation provides the means to 'make the most' of limited and precious data and facilitates more stable estimation for multi-dimensional analyses. It should

  12. Microarray background correction: maximum likelihood estimation for the normal-exponential convolution

    DEFF Research Database (Denmark)

    Silver, Jeremy D; Ritchie, Matthew E; Smyth, Gordon K

    2009-01-01

    exponentially distributed, representing background noise and signal, respectively. Using a saddle-point approximation, Ritchie and others (2007) found normexp to be the best background correction method for 2-color microarray data. This article develops the normexp method further by improving the estimation...... is developed for exact maximum likelihood estimation (MLE) using high-quality optimization software and using the saddle-point estimates as starting values. "MLE" is shown to outperform heuristic estimators proposed by other authors, both in terms of estimation accuracy and in terms of performance on real data...
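
    A minimal sketch of exact maximum likelihood for the normal-exponential convolution (my own illustration, not the limma implementation; data and starting values are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def normexp_logpdf(x, mu, sigma, alpha):
    # Log-density of background N(mu, sigma^2) plus signal Exponential(mean alpha)
    return (-np.log(alpha) + (mu - x) / alpha + sigma**2 / (2 * alpha**2)
            + norm.logcdf((x - mu) / sigma - sigma / alpha))

rng = np.random.default_rng(3)
intensities = rng.normal(100.0, 15.0, 5000) + rng.exponential(200.0, 5000)  # hypothetical probes

def nll(theta):
    mu, log_sigma, log_alpha = theta
    return -np.sum(normexp_logpdf(intensities, mu, np.exp(log_sigma), np.exp(log_alpha)))

start = [np.quantile(intensities, 0.05), np.log(intensities.std() / 4), np.log(intensities.std())]
fit = minimize(nll, start, method="Nelder-Mead", options={"maxiter": 2000})
mu_hat, sigma_hat, alpha_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print(f"mu = {mu_hat:.1f}   sigma = {sigma_hat:.1f}   alpha = {alpha_hat:.1f}")
```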

  13. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢^0 law. This paper deals with amplitude data, so the 𝒢_A^0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments and on order statistics) of the parameters of the 𝒢_A^0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternate optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  14. Maximum likelihood estimation of ancestral codon usage bias parameters in Drosophila

    DEFF Research Database (Denmark)

    Nielsen, Rasmus; Bauer DuMont, Vanessa L; Hubisz, Melissa J

    2007-01-01

    : the selection coefficient for optimal codon usage (S), allowing joint maximum likelihood estimation of S and the dN/dS ratio. We apply the method to previously published data from Drosophila melanogaster, Drosophila simulans, and Drosophila yakuba and show, in accordance with previous results, that the D...

  15. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  16. Experimental demonstration of the maximum likelihood-based chromatic dispersion estimator for coherent receivers

    DEFF Research Database (Denmark)

    Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk

    2014-01-01

    We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR...

  17. The gap between fatherhood and couplehood desires among Israeli gay men and estimations of their likelihood.

    Science.gov (United States)

    Shenkman, Geva

    2012-10-01

    This study examined the frequencies of the desires and likelihood estimations of Israeli gay men regarding fatherhood and couplehood, using a sample of 183 gay men aged 19-50. It follows previous research which indicated the existence of a gap in the United States with respect to fatherhood, and called for generalizability examinations in other countries and the exploration of possible explanations. As predicted, a gap was also found in Israel between fatherhood desires and their likelihood estimations, as well as between couplehood desires and their likelihood estimations. In addition, lower estimations of fatherhood likelihood were found to predict depression and to correlate with decreased subjective well-being. Possible psychosocial explanations are offered. Moreover, by mapping attitudes toward fatherhood and couplehood among Israeli gay men, the current study helps to extend our knowledge of several central human development motivations and their correlations with depression and subjective well-being in a less-studied sexual minority in a complex cultural climate. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  18. A simple route to maximum-likelihood estimates of two-locus

    Indian Academy of Sciences (India)

    A simple route to maximum-likelihood estimates of two-locus recombination fractions under inequality restrictions. Iain L. Macdonald; Philasande Nkalashe. Research Note, Journal of Genetics, Volume 94, Issue 3, September 2015, pp. 479-481.

  19. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  20. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  1. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.

  2. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...

  3. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  4. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    International Nuclear Information System (INIS)

    Beer, M.

    1980-01-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates
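
    The core statistical step can be sketched as follows (my own illustration with hypothetical numbers): under a multivariate normal model with known covariance, the ML combination of correlated eigenvalue estimates is the generalized-least-squares weighted average.

```python
import numpy as np

k_est = np.array([1.1823, 1.1835, 1.1819])           # hypothetical correlated eigenvalue estimates
cov = np.array([[4.0, 2.5, 2.0],
                [2.5, 5.0, 2.2],
                [2.0, 2.2, 3.5]]) * 1e-7              # hypothetical sample covariance of the estimates

w = np.linalg.solve(cov, np.ones(3))
k_ml = w @ k_est / w.sum()                            # ML / minimum-variance combination
var_ml = 1.0 / w.sum()
print(f"combined eigenvalue = {k_ml:.5f} +/- {np.sqrt(var_ml):.5f}")
```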

  5. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  6. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    Full Text Available The estimation problem for target velocity is addressed in this paper in the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived with knowledge of the target position. Then, in the scenario without knowledge of the target position, an iterative method is proposed to estimate the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without the position information is also expressed. Simulation results show that the proposed estimation methods can approach the CRLBs, and the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with the existing methods, a better estimation performance can be achieved.

  7. Maximum Likelihood DOA Estimation of Multiple Wideband Sources in the Presence of Nonuniform Sensor Noise

    Directory of Open Access Journals (Sweden)

    K. Yao

    2007-12-01

    Full Text Available We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge and both the SC-ML and the approximately-concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.

  8. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  9. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  10. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    Science.gov (United States)

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.• CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
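
    A heavily simplified sketch of the CRACLE idea (my own illustration; per-species tolerances are summarized as normal distributions with hypothetical parameters): the local climate is estimated as the value maximizing the joint coexistence log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical per-species tolerances for mean annual temperature (deg C)
tolerance_mean = np.array([12.0, 14.5, 13.2, 15.8, 11.9])
tolerance_sd = np.array([3.0, 2.5, 4.0, 3.5, 2.8])

def neg_joint_loglik(t):
    # Joint log-likelihood of all species coexisting at temperature t
    return -np.sum(norm.logpdf(t, tolerance_mean, tolerance_sd))

res = minimize_scalar(neg_joint_loglik, bounds=(-10, 40), method="bounded")
print(f"estimated mean annual temperature: {res.x:.2f} C")
```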

  11. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    López-Valcarce Roberto

    2004-01-01

    Full Text Available We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified crosscorrelators and therefore it is well suited to DSP implementation, performing well with preliminary field data.

  12. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.

  13. Maximum Likelihood PSD Estimation for Speech Enhancement in Reverberation and Noise

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Søren Holdt

    2016-01-01

    In this contribution we focus on the problem of power spectral density (PSD) estimation from multiple microphone signals in reverberant and noisy environments. The PSD estimation method proposed in this paper is based on the maximum likelihood (ML) methodology. In particular, we derive a novel ML...... it is shown numerically that the mean squared estimation error achieved by the proposed method is near the limit set by the corresponding Cramér-Rao lower bound. The speech dereverberation performance of a multi-channel Wiener filter (MWF) based on the proposed PSD estimators is measured using several instrumental measures and is shown to be higher than when the competing estimator is used. Moreover, we perform a speech intelligibility test where we demonstrate that both the proposed and the competing PSD estimators lead to similar intelligibility improvements......

  14. COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION BY A GAUSSIANIZED BLACKWELL-RAO ESTIMATOR

    International Nuclear Information System (INIS)

    Rudjord, Oe.; Groeneboom, N. E.; Eriksen, H. K.; Huey, Greg; Gorski, K. M.; Jewell, J. B.

    2009-01-01

    We introduce a new cosmic microwave background (CMB) temperature likelihood approximation called the Gaussianized Blackwell-Rao estimator. This estimator is derived by transforming the observed marginal power spectrum distributions obtained by the CMB Gibbs sampler into standard univariate Gaussians, and then approximating their joint transformed distribution by a multivariate Gaussian. The method is exact for full-sky coverage and uniform noise and an excellent approximation for sky cuts and scanning patterns relevant for modern satellite experiments such as the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. The result is a stable, accurate, and computationally very efficient CMB temperature likelihood representation that allows the user to exploit the unique error propagation capabilities of the Gibbs sampler to high ℓ. A single evaluation of this estimator between ℓ = 2 and 200 takes ∼0.2 CPU milliseconds, while for comparison, a single pixel-space likelihood evaluation between ℓ = 2 and 30 for a map with ∼2500 pixels requires ∼20 s. We apply this tool to the five-year WMAP temperature data, and re-estimate the angular temperature power spectrum, C_ℓ, and likelihood, L(C_ℓ), for ℓ ≤ 200, and derive new cosmological parameters for the standard six-parameter ΛCDM model. Our spectrum is in excellent agreement with the official WMAP spectrum, but we find slight differences in the derived cosmological parameters. Most importantly, the spectral index of scalar perturbations is n_s = 0.973 ± 0.014, 1.9σ away from unity and 0.6σ higher than the official WMAP result, n_s = 0.965 ± 0.014. This suggests that an exact likelihood treatment is required to higher ℓ than previously believed, reinforcing and extending our conclusions from the three-year WMAP analysis. In that case, we found that the suboptimal likelihood approximation adopted between ℓ = 12 and 30 by the WMAP team biased n_s low by 0.4σ, while here we find that the same approximation

  15. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances

    Directory of Open Access Journals (Sweden)

    Manuel Gil

    2014-09-01

    Full Text Available Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989), which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  16. A new maximum likelihood blood velocity estimator incorporating spatial and temporal correlation

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2001-01-01

    and space. This paper presents a new estimator (STC-MLE), which incorporates the correlation property. It is an expansion of the maximum likelihood estimator (MLE) developed by Ferrara et al. With the MLE a cross-correlation analysis between consecutive RF-lines on complex form is carried out for a range...... of possible velocities. In the new estimator an additional similarity investigation for each evaluated velocity and the available velocity estimates in a temporal (between frames) and spatial (within frames) neighborhood is performed. An a priori probability density term in the distribution...... of the observations gives a probability measure of the correlation between the velocities. Both the MLE and the STC-MLE have been evaluated on simulated and in-vivo RF-data obtained from the carotid artery. Using the MLE 4.1% of the estimates deviate significantly from the true velocities, when the performance...

  17. Fast and accurate estimation of the covariance between pairwise maximum likelihood distances.

    Science.gov (United States)

    Gil, Manuel

    2014-01-01

    Pairwise evolutionary distances are a model-based summary statistic for a set of molecular sequences. They represent the leaf-to-leaf path lengths of the underlying phylogenetic tree. Estimates of pairwise distances with overlapping paths covary because of shared mutation events. It is desirable to take this covariance structure into account to increase precision in any process that compares or combines distances. This paper introduces a fast estimator for the covariance of two pairwise maximum likelihood distances, estimated under general Markov models. The estimator is based on a conjecture (going back to Nei & Jin, 1989) which links the covariance to path lengths. It is proven here under a simple symmetric substitution model. A simulation shows that the estimator outperforms previously published ones in terms of the mean squared error.

  18. A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.

    Energy Technology Data Exchange (ETDEWEB)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V.

    2014-09-01

    In this paper, we derive a new optimal change metric to be used in synthetic aperture radar (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.

  19. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice is the harmonic chirp signal, in which the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML......

  20. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  2. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
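
    For orientation, the sketch below shows the classic single-frame Richardson-Lucy iteration, which is the EM solution of the Poisson maximum likelihood deconvolution problem; it is my own illustration on toy data, not the authors' multi-frame regularized algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50):
    # Multiplicative EM updates for the Poisson ML deconvolution problem
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        estimate *= fftconvolve(observed / blurred, psf_mirror, mode="same")
    return estimate

# Hypothetical toy data: a point source blurred by a Gaussian PSF with Poisson noise
rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[32, 32] = 1000.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)

restored = richardson_lucy(observed, psf)
print("peak of restored image at:", np.unravel_index(restored.argmax(), restored.shape))
```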

  3. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Full Text Available Burr type III is an important distribution used to model failure-time data. The paper addresses the problem of estimation of the parameters of the Burr type III distribution based on maximum likelihood estimation (MLE) when the samples are left censored. As closed-form expressions for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parametric values. A real life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.

  4. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time...... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all......

  5. Marginal likelihood estimation of negative binomial parameters with applications to RNA-seq data.

    Science.gov (United States)

    León-Novelo, Luis; Fuentes, Claudio; Emerson, Sarah

    2017-10-01

    RNA-Seq data characteristically exhibits large variances, which need to be appropriately accounted for in any proposed model. We first explore the effects of this variability on the maximum likelihood estimator (MLE) of the dispersion parameter of the negative binomial distribution, and propose instead to use an estimator obtained via maximization of the marginal likelihood in a conjugate Bayesian framework. We show, via simulation studies, that the marginal MLE can better control this variation and produce a more stable and reliable estimator. We then formulate a conjugate Bayesian hierarchical model, and use this new estimator to propose a Bayesian hypothesis test to detect differentially expressed genes in RNA-Seq data. We use numerical studies to show that our much simpler approach is competitive with other negative binomial based procedures, and we use a real data set to illustrate the implementation and flexibility of the procedure. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
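
    The estimator whose instability motivates the marginal-likelihood alternative can be sketched as follows (my own illustration with hypothetical counts): a standard maximum likelihood fit of the negative binomial mean and dispersion for a single gene, using the variance parameterization var = μ + φμ².

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

counts = np.array([12, 30, 7, 21, 45, 16, 9, 25])   # hypothetical RNA-seq counts for one gene

def nll(theta):
    log_mu, log_phi = theta            # phi = dispersion, so var = mu + phi * mu^2
    mu, phi = np.exp(log_mu), np.exp(log_phi)
    r = 1.0 / phi                      # scipy parameterization: n = r, p = r / (r + mu)
    return -np.sum(nbinom.logpmf(counts, r, r / (r + mu)))

res = minimize(nll, x0=[np.log(counts.mean()), 0.0], method="Nelder-Mead")
print("mu =", np.exp(res.x[0]), "  dispersion =", np.exp(res.x[1]))
```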

  6. Identification of contemporary selection signatures using composite log likelihood and their associations with marbling score in Korean cattle.

    Science.gov (United States)

    Ryu, Jihye; Lee, Chaeyoung

    2014-12-01

    Positive selection not only increases beneficial allele frequency but also causes augmentation of allele frequencies of sequence variants in close proximity. Signals for positive selection were detected by the statistical differences in subsequent allele frequencies. To identify selection signatures in Korean cattle, we applied a composite log-likelihood (CLL)-based method, which calculates a composite likelihood of the allelic frequencies observed across sliding windows of five adjacent loci and compares the value with the critical statistic estimated by 50,000 permutations. Data for a total of 11,799 nucleotide polymorphisms were used with 71 Korean cattle and 209 foreign beef cattle. As a result, 147 signals were identified for Korean cattle based on CLL estimates (P < …) … selected. Further genetic association analysis with 41 intragenic variants in the selection signatures with the greatest CLL for each chromosome revealed that marbling score was associated with five variants. Intensive association studies with all the selection signatures identified in this study are required to exclude signals associated with other phenotypes or signals falsely detected and thus to identify genetic markers for meat quality. © 2014 Stichting International Foundation for Animal Genetics.

  7. A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy.

    Science.gov (United States)

    Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw

    2014-07-01

    This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Advanced colorectal neoplasia was detected in 2544 of the 35,918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (padvanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7-8. Developed and internally validated score consisting of simple clinical factors successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
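
    A generic points-based risk score of this kind can be derived from logistic-regression coefficients roughly as below. The synthetic predictors, the simulated outcome, and the scikit-learn fit are assumptions for illustration; the published score, its variables, and its calibration are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy screening data: age group (0-3), sex (1 = male), family history, smoking.
n = 5000
X = np.column_stack([rng.integers(0, 4, n), rng.integers(0, 2, n),
                     rng.integers(0, 2, n), rng.integers(0, 2, n)])
logit = -3.2 + 0.35 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2] + 0.4 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # simulated advanced neoplasia

model = LogisticRegression().fit(X, y)

# Turn each coefficient into integer "points" relative to the smallest effect;
# a patient's score is then simply the sum of the points for his or her factors.
beta = model.coef_.ravel()
points = np.round(beta / np.abs(beta).min()).astype(int)
score = X @ points
for s in range(score.min(), score.max() + 1):
    mask = score == s
    if mask.any():
        print(f"score {s}: observed risk {y[mask].mean():.1%} ({mask.sum()} patients)")
```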

  8. Analysis of the maximum likelihood channel estimator for OFDM systems in the presence of unknown interference

    Science.gov (United States)

    Dermoune, Azzouz; Simon, Eric Pierre

    2017-12-01

    This paper is a theoretical analysis of the maximum likelihood (ML) channel estimator for orthogonal frequency-division multiplexing (OFDM) systems in the presence of unknown interference. The following theoretical results are presented. Firstly, the uniqueness of the ML solution for practical applications, i.e., when thermal noise is present, is analytically demonstrated when the number of transmitted OFDM symbols is strictly greater than one. The ML solution is then derived from the iterative conditional ML (CML) algorithm. Secondly, it is shown that the channel estimate can be described as an algebraic function whose inputs are the initial value and the means and variances of the received samples. Thirdly, it is theoretically demonstrated that the channel estimator is not biased. The second and the third results are obtained by employing oblique projection theory. Furthermore, these results are confirmed by numerical results.

  9. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Full Text Available Rotating machinery is the most common machinery in industry, and the root of its faults is often a faulty rolling element bearing. This paper presents a technique using an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values) derived from the vibration signals of test data. The results show that the performance of the proposed optimized system is better than that of most previous studies, even though it uses only two features. The effectiveness of the method is illustrated using the obtained bearing vibration data.

  10. Two-Stage Maximum Likelihood Estimation (TSMLE for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE technique suited for multitone code division multiple access (MT-CDMA system. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER of the system, over Rayleigh and Ricean fading channels. The analytical model is derived for quadrature phase shift keying (QPSK modulation technique by taking into account the number of tones, signal bandwidth (BW, bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  11. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  12. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions and representations of hydrological behavior. This trend, however, is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
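
    The baseline GLUE workflow that the multi-algorithm sampling is meant to accelerate can be sketched as follows: Monte Carlo sampling of the prior parameter space, an informal likelihood measure, a behavioural threshold, and percentile uncertainty bounds. The toy rainfall-runoff model, the Nash-Sutcliffe likelihood, and the 0.6 threshold are assumptions, not the study's models or settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observations and a deliberately simple rainfall-runoff "model".
rain = rng.gamma(2.0, 5.0, 120)                       # 120 months of rainfall
obs = 0.6 * rain + 3.0 + rng.normal(0, 2.0, 120)      # "observed" runoff

# 1. Monte Carlo sampling of the prior parameter space (uniform priors assumed).
n_sets = 20000
a = rng.uniform(0.0, 1.5, n_sets)
b = rng.uniform(0.0, 10.0, n_sets)
sims = a[:, None] * rain[None, :] + b[:, None]        # one simulated series per parameter set

# 2. Informal likelihood measure: Nash-Sutcliffe efficiency of each simulation.
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

# 3. Retain "behavioural" parameter sets above a (subjective) threshold.
behavioural = nse > 0.6

# 4. Uncertainty bounds on the simulated runoff from the behavioural ensemble
#    (plain 5-95% quantiles here; GLUE usually weights them by likelihood).
lower = np.quantile(sims[behavioural], 0.05, axis=0)
upper = np.quantile(sims[behavioural], 0.95, axis=0)
print(f"{behavioural.sum()} behavioural sets, mean bound width {np.mean(upper - lower):.2f}")
```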

  13. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.

  14. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
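
    A minimal sketch of ML time-of-arrival estimation from photon counts is given below: photon arrivals are simulated from an inhomogeneous Poisson process (background plus a Gaussian pulse), and the arrival time is estimated by maximizing the Poisson log-likelihood over a grid. The pulse shape, rates, and grid search are assumptions for illustration, not the authors' analytical error model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inhomogeneous Poisson model: background b plus a Gaussian pulse of area A
# centred at the unknown arrival time tau.
b, A, sigma, tau_true, T = 0.2, 30.0, 1.0, 42.0, 100.0

def intensity(t, tau):
    return b + A / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - tau) / sigma) ** 2)

# Simulate photon arrivals by thinning a homogeneous process of rate lam_max.
lam_max = b + A / (sigma * np.sqrt(2 * np.pi))
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
arrivals = cand[rng.random(cand.size) < intensity(cand, tau_true) / lam_max]

# ML estimate: the integral of the intensity over [0, T] does not depend on tau
# (for a pulse well inside the window), so maximise the sum of log-intensities.
grid = np.linspace(5, 95, 1801)
loglik = np.array([np.log(intensity(arrivals, tau)).sum() for tau in grid])
tau_hat = grid[np.argmax(loglik)]
print(f"true arrival {tau_true}, ML estimate {tau_hat:.2f} from {arrivals.size} photons")
```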

  15. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D; Martins, C.D.A.

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such kinds of transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric fields energies represented by stray capacitance in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performances. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  16. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  17. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  18. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  19. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
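
    The flavour of the argument can be reproduced with the textbook U(0, theta) case (not necessarily the exact example used in the paper): Rao-Blackwellizing the crude unbiased estimator 2 * mean(X) on the sufficient statistic max(X) yields (n+1)/n * max(X), which beats the biased MLE max(X) in mean squared error. A small simulation makes the comparison concrete.

```python
import numpy as np

rng = np.random.default_rng(4)

theta, n, reps = 1.0, 10, 200_000
x = rng.uniform(0, theta, size=(reps, n))

mle = x.max(axis=1)                    # maximum likelihood estimator: biased downward
unbiased = (n + 1) / n * mle           # = E[2*mean(X) | max(X)]: Rao-Blackwell improvement of the crude estimator
moment = 2 * x.mean(axis=1)            # crude unbiased method-of-moments estimator

for name, est in [("MLE", mle), ("Rao-Blackwell", unbiased), ("moments", moment)]:
    bias = est.mean() - theta
    mse = np.mean((est - theta) ** 2)
    print(f"{name:14s} bias {bias:+.4f}  MSE {mse:.5f}")
```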

  20. Development of likelihood estimation method for criticality accidents of mixed oxide fuel fabrication facilities

    International Nuclear Information System (INIS)

    Tamaki, Hitoshi; Yoshida, Kazuo; Kimoto, Tatsuya; Hamaguchi, Yoshikane

    2010-01-01

    A criticality accident in a MOX fuel fabrication facility may occur depending on several parameters, such as mass inventory and plutonium enrichment. MOX handling units in the facility are designed and operated based on the double contingency principle to prevent criticality accidents. Control failures of at least two parameters are needed for the occurrence of criticality accident. To evaluate the probability of such control failures, the criticality conditions of each parameter for a specific handling unit are necessary for accident scenario analysis to be clarified quantitatively with a criticality analysis computer code. In addition to this issue, a computer-based control system for mass inventory is planned to be installed into MOX handling equipment in a commercial MOX fuel fabrication plant. The reliability analysis is another important issue in evaluating the likelihood of control failure caused by software malfunction. A likelihood estimation method for criticality accident has been developed with these issues been taken into consideration. In this paper, an example of analysis with the proposed method and the applicability of the method are also shown through a trial application to a model MOX fabrication facility. (author)

  1. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  2. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  3. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  4. Maximum likelihood estimation of biophysical parameters of synaptic receptors from macroscopic currents

    Directory of Open Access Journals (Sweden)

    Andrey eStepanyuk

    2014-10-01

    Full Text Available Dendritic integration and neuronal firing patterns strongly depend on the biophysical properties of synaptic ligand-gated channels. However, precise estimation of the biophysical parameters of these channels in their intrinsic environment is a complicated and still unresolved problem. Here we describe a novel method based on a maximum likelihood approach that allows estimation not only of the unitary current of synaptic receptor channels but also of their multiple conductance levels, kinetic constants, the number of receptors bound with a neurotransmitter, and the peak open probability, from an experimentally feasible number of postsynaptic currents. The new method also improves the accuracy of unitary-current estimation compared with peak-scaled non-stationary fluctuation analysis, making it possible to estimate this important parameter precisely from a few postsynaptic currents recorded under steady-state conditions. Estimation of the unitary current with this method is robust even if the postsynaptic currents are generated by receptors having different kinetic parameters, a case in which peak-scaled non-stationary fluctuation analysis is not applicable. Thus, with the new method, routinely recorded postsynaptic currents can be used to study the properties of synaptic receptors in their native biochemical environment.

  5. Local likelihood estimation of complex tail dependence structures in high dimensions, applied to US precipitation extremes

    KAUST Repository

    Camilo, Daniela Castro

    2017-10-02

    In order to model the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity. This allows us to gain in flexibility, while making inference for such a large and complex dataset feasible. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed taking advantage of distributed computing resources and of the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach is able to adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which has important implications on regional risk assessment of extreme precipitation events. A comparison between past and current data suggests that extremes in certain areas might be slightly wider in extent nowadays than during the first half of the twentieth century.

  6. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  7. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    Rajan, Jeny; Jeurissen, Ben; Sijbers, Jan; Verhoye, Marleen; Van Audekerke, Johan

    2011-01-01

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index matrix, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.
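
    The core Rician maximum-likelihood step, fitting a constant underlying signal to the magnitudes in one (restricted) neighbourhood with a known noise level, can be sketched as below. The synthetic neighbourhood, the known sigma, and the scalar optimizer are assumptions; the paper's nonlocal-means reference image and pixel-selection scheme are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import i0e

def rician_neg_log_lik(A, m, sigma):
    """Negative log-likelihood of a constant true signal A for Rician magnitudes m."""
    z = m * A / sigma**2
    # log I0(z) = log(i0e(z)) + z, which stays finite for large z
    ll = np.log(m / sigma**2) - (m**2 + A**2) / (2 * sigma**2) + np.log(i0e(z)) + z
    return -ll.sum()

rng = np.random.default_rng(5)
A_true, sigma = 40.0, 10.0
# Magnitude MR data in a local neighbourhood: |A + n1 + i*n2| with Gaussian n1, n2
m = np.abs(A_true + rng.normal(0, sigma, 25) + 1j * rng.normal(0, sigma, 25))

fit = minimize_scalar(rician_neg_log_lik, bounds=(0.0, m.max()), args=(m, sigma), method="bounded")
print(f"true signal {A_true}, naive mean {m.mean():.1f}, Rician MLE {fit.x:.1f}")
```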

  8. Local likelihood estimation of complex tail dependence structures in high dimensions, applied to US precipitation extremes

    KAUST Repository

    Camilo, Daniela Castro; Huser, Raphaë l

    2017-01-01

    In order to model the complex non-stationary dependence structure of precipitation extremes over the entire contiguous U.S., we propose a flexible local approach based on factor copula models. Our sub-asymptotic spatial modeling framework yields non-trivial tail dependence structures, with a weakening dependence strength as events become more extreme, a feature commonly observed with precipitation data but not accounted for in classical asymptotic extreme-value models. To estimate the local extremal behavior, we fit the proposed model in small regional neighborhoods to high threshold exceedances, under the assumption of local stationarity. This allows us to gain in flexibility, while making inference for such a large and complex dataset feasible. Adopting a local censored likelihood approach, inference is made on a fine spatial grid, and local estimation is performed taking advantage of distributed computing resources and of the embarrassingly parallel nature of this estimation procedure. The local model is efficiently fitted at all grid points, and uncertainty is measured using a block bootstrap procedure. An extensive simulation study shows that our approach is able to adequately capture complex, non-stationary dependencies, while our study of U.S. winter precipitation data reveals interesting differences in local tail structures over space, which has important implications on regional risk assessment of extreme precipitation events. A comparison between past and current data suggests that extremes in certain areas might be slightly wider in extent nowadays than during the first half of the twentieth century.

  9. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volume of interests (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  10. Supervised maximum-likelihood weighting of composite protein networks for complex prediction

    Directory of Open Access Journals (Sweden)

    Yong Chern Han

    2012-12-01

    Full Text Available Abstract Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to

  11. Qualitative release assessment to estimate the likelihood of henipavirus entering the United Kingdom.

    Directory of Open Access Journals (Sweden)

    Emma L Snary

    Full Text Available The genus Henipavirus includes Hendra virus (HeV) and Nipah virus (NiV), for which fruit bats (particularly those of the genus Pteropus) are considered to be the wildlife reservoir. The recognition of henipaviruses occurring across a wider geographic and host range suggests the possibility of the virus entering the United Kingdom (UK). To estimate the likelihood of henipaviruses entering the UK, a qualitative release assessment was undertaken. To facilitate the release assessment, the world was divided into four zones according to location of outbreaks of henipaviruses, isolation of henipaviruses, proximity to other countries where incidents of henipaviruses have occurred and the distribution of Pteropus spp. fruit bats. From this release assessment, the key findings are that the importation of fruit from Zone 1 and 2 and bat bushmeat from Zone 1 each have a Low annual probability of release of henipaviruses into the UK. Similarly, the importation of bat meat from Zone 2, horses and companion animals from Zone 1 and people travelling from Zone 1 and entering the UK was estimated to pose a Very Low probability of release. The annual probability of release for all other release routes was assessed to be Negligible. It is recommended that the release assessment be periodically re-assessed to reflect changes in knowledge and circumstances over time.

  12. Bias correction for estimated QTL effects using the penalized maximum likelihood method.

    Science.gov (United States)

    Zhang, J; Yue, C; Zhang, Y-M

    2012-04-01

    A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and shifting from the use of a uniform prior on the variance parameter of a QTL effect to that of a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project. Application of the proposed method to other shrinkage estimation of QTL effects is discussed.

  13. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice - which requires a large number of events. It has been well-known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper providing extensive characterization of these biases in exponential fitting is given. The more appropriate measure based on the maximum likelihood estimator (MLE
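
    The contrast between the Poisson MLE objective and ordinary least squares can be sketched as follows on a toy decay histogram. Note this uses a generic optimizer rather than the authors' modified Levenberg-Marquardt scheme; the model, counts, and starting values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Toy fluorescence-lifetime style histogram: exponential decay plus background.
t = np.arange(0, 25.0, 0.5)

def model(theta, t):
    amp, tau, bg = theta
    return amp * np.exp(-t / tau) + bg

true = (50.0, 4.0, 1.0)
counts = rng.poisson(model(true, t))

def poisson_nll(theta, t, k):
    mu = model(theta, t)
    if np.any(mu <= 0):
        return np.inf
    # Up to a theta-independent constant, -log L = sum(mu - k * log(mu)) for Poisson data
    return np.sum(mu - k * np.log(mu))

def lsq(theta, t, k):
    return np.sum((model(theta, t) - k) ** 2)

x0 = (30.0, 2.0, 0.5)
mle_fit = minimize(poisson_nll, x0, args=(t, counts), method="Nelder-Mead")
ls_fit = minimize(lsq, x0, args=(t, counts), method="Nelder-Mead")
print("Poisson MLE  :", np.round(mle_fit.x, 2))
print("Least squares:", np.round(ls_fit.x, 2))
```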

  14. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  15. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
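
    The simplest empirical likelihood computation, a confidence interval for a univariate mean under IID sampling, can be sketched as follows. The profiling of the Lagrange multiplier and the chi-squared calibration follow the standard construction; the toy data and the grid search are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu0):
    """-2 log empirical likelihood ratio for the mean mu0."""
    d = x - mu0
    if d.max() <= 0 or d.min() >= 0:
        return np.inf                       # mu0 outside the convex hull of the data
    # Optimal weights are w_i = 1 / (n * (1 + lam * d_i)); lam solves sum d_i / (1 + lam * d_i) = 0.
    eps = 1e-9
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(7)
x = rng.exponential(2.0, 40)                # skewed data; no parametric model assumed

# 95% EL confidence interval for the mean: all mu0 with -2 log R(mu0) <= chi2_1(0.95).
grid = np.linspace(x.mean() - 1.5, x.mean() + 1.5, 600)
inside = [mu for mu in grid if el_log_ratio(x, mu) <= chi2.ppf(0.95, 1)]
print(f"sample mean {x.mean():.2f}; 95% EL interval roughly ({min(inside):.2f}, {max(inside):.2f})")
```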

  16. Hypnosis and pain perception: An Activation Likelihood Estimation (ALE) meta-analysis of functional neuroimaging studies.

    Science.gov (United States)

    Del Casale, Antonio; Ferracuti, Stefano; Rapinesi, Chiara; De Rossi, Pietro; Angeletti, Gloria; Sani, Gabriele; Kotzalidis, Georgios D; Girardi, Paolo

    2015-12-01

    Several studies reported that hypnosis can modulate pain perception and tolerance by affecting cortical and subcortical activity in brain regions involved in these processes. We conducted an Activation Likelihood Estimation (ALE) meta-analysis on functional neuroimaging studies of pain perception under hypnosis to identify brain activation-deactivation patterns occurring during hypnotic suggestions aiming at pain reduction, including hypnotic analgesic, pleasant, or depersonalization suggestions (HASs). We searched the PubMed, Embase and PsycInfo databases; we included papers published in peer-reviewed journals dealing with functional neuroimaging and hypnosis-modulated pain perception. The ALE meta-analysis encompassed data from 75 healthy volunteers reported in 8 functional neuroimaging studies. HASs during experimentally-induced pain compared to control conditions correlated with significant activations of the right anterior cingulate cortex (Brodmann's Area [BA] 32), left superior frontal gyrus (BA 6), and right insula, and deactivation of right midline nuclei of the thalamus. HASs during experimental pain impact both cortical and subcortical brain activity. The anterior cingulate, left superior frontal, and right insular cortices activation increases could induce a thalamic deactivation (top-down inhibition), which may correlate with reductions in pain intensity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Pain anticipation: an activation likelihood estimation meta-analysis of brain imaging studies.

    Science.gov (United States)

    Palermo, Sara; Benedetti, Fabrizio; Costa, Tommaso; Amanzio, Martina

    2015-05-01

    The anticipation of pain has been investigated in a variety of brain imaging studies. Importantly, today there is no clear overall picture of the areas that are involved in different studies and the exact role of these regions in pain expectation remains especially unexploited. To address this issue, we used activation likelihood estimation meta-analysis to analyze pain anticipation in several neuroimaging studies. A total of 19 functional magnetic resonance imaging were included in the analysis to search for the cortical areas involved in pain anticipation in human experimental models. During anticipation, activated foci were found in the dorsolateral prefrontal, midcingulate and anterior insula cortices, medial and inferior frontal gyri, inferior parietal lobule, middle and superior temporal gyrus, thalamus, and caudate. Deactivated foci were found in the anterior cingulate, superior frontal gyrus, parahippocampal gyrus and in the claustrum. The results of the meta-analytic connectivity analysis provide an overall view of the brain responses triggered by the anticipation of a noxious stimulus. Such a highly distributed perceptual set of self-regulation may prime brain regions to process information where emotion, action and perception as well as their related subcategories play a central role. Not only do these findings provide important information on the neural events when anticipating pain, but also they may give a perspective into nocebo responses, whereby negative expectations may lead to pain worsening. © 2014 Wiley Periodicals, Inc.

  18. Autistic disorders and schizophrenia: related or remote? An anatomical likelihood estimation.

    Directory of Open Access Journals (Sweden)

    Charlton Cheung

    Full Text Available Shared genetic and environmental risk factors have been identified for autistic spectrum disorders (ASD) and schizophrenia. Social interaction, communication, emotion processing, sensorimotor gating and executive function are disrupted in both, stimulating debate about whether these are related conditions. Brain imaging studies constitute an informative and expanding resource to determine whether the brain structural phenotypes of these disorders are distinct or overlapping. We aimed to synthesize existing datasets characterizing ASD and schizophrenia within a common framework, to quantify their structural similarities. In a novel modification of Anatomical Likelihood Estimation (ALE), 313 foci were extracted from 25 voxel-based studies comprising 660 participants (308 ASD, 352 first-episode schizophrenia) and 801 controls. The results revealed that, compared to controls, lower grey matter volumes within limbic-striato-thalamic circuitry were common to ASD and schizophrenia. Unique features of each disorder included lower grey matter volume in amygdala, caudate, frontal and medial gyrus for schizophrenia and putamen for autism. Thus, in terms of brain volumetrics, ASD and schizophrenia have a clear degree of overlap that may reflect shared etiological mechanisms. However, the distinctive neuroanatomy also mapped in each condition raises the question of how this arises in the context of common etiological pressures.

  19. Statistical analysis of maximum likelihood estimator images of human brain FDG PET studies

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Hoffman, E.J.; Nunez, J.; Coakley, K.J.

    1993-01-01

    The work presented in this paper evaluates the statistical characteristics of regional bias and expected error in reconstructions of real PET data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task that the authors have investigated is that of quantifying radioisotope uptake in regions-of-interest (ROIs). They first describe a robust methodology for the use of the MLE method with clinical data which contains only one adjustable parameter: the kernel size for a Gaussian filtering operation that determines final resolution and expected regional error. Simulation results are used to establish the fundamental characteristics of the reconstructions obtained by our methodology, corresponding to the case in which the transition matrix is perfectly known. Then, data from 72 independent human brain FDG scans from four patients are used to show that the results obtained from real data are consistent with the simulation, although the quality of the data and of the transition matrix has an effect on the final outcome

  20. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Full Text Available Scattering and absorption of light are the main reasons for limited visibility in water; suspended particles and dissolved chemical compounds are also responsible for this scattering and absorption. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with dark regions at the surroundings. In some cases the imaging system itself creates dark regions in the image by casting shadows on the objects. The problem of nonuniform illumination is neglected in most image enhancement techniques for underwater images, and very few methods report results on color images. This paper suggests a method for nonuniform illumination correction of underwater images. The method assumes that natural underwater images are Rayleigh distributed, and uses maximum likelihood estimation of the scale parameter to map the distribution of the image to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and a comprehensive assessment function.
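
    The Rayleigh-scale MLE at the heart of such a correction is closed form, sigma_hat = sqrt(sum(x^2) / (2n)). The sketch below applies it tile by tile to a synthetic nonuniformly lit image and rescales each tile to a common reference scale; the synthetic image, the tile size, and the rescaling rule are assumptions, not the paper's algorithm or its quality metrics.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy grayscale "underwater" image: Rayleigh-distributed intensities whose scale
# (brightness) varies smoothly across the frame, mimicking a nonuniform light source.
h, w, tile = 128, 128, 32
yy, xx = np.mgrid[0:h, 0:w]
local_scale = 20 + 60 * np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 40.0 ** 2))
img = rng.rayleigh(local_scale)

def rayleigh_mle(x):
    # MLE of the Rayleigh scale parameter: sigma_hat = sqrt(sum(x^2) / (2n))
    return np.sqrt(np.mean(x ** 2) / 2.0)

target = rayleigh_mle(img)                 # global scale used as the reference
corrected = np.empty_like(img)
for i in range(0, h, tile):
    for j in range(0, w, tile):
        block = img[i:i + tile, j:j + tile]
        corrected[i:i + tile, j:j + tile] = block * (target / rayleigh_mle(block))

print("corner vs centre scale before:", round(rayleigh_mle(img[:32, :32]), 1),
      round(rayleigh_mle(img[64:96, 64:96]), 1))
print("corner vs centre scale after :", round(rayleigh_mle(corrected[:32, :32]), 1),
      round(rayleigh_mle(corrected[64:96, 64:96]), 1))
```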

  1. Speech perception in autism spectrum disorder: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Tryfon, Ana; Foster, Nicholas E V; Sharda, Megha; Hyde, Krista L

    2018-02-15

    Autism spectrum disorder (ASD) is often characterized by atypical language profiles and auditory and speech processing. These can contribute to aberrant language and social communication skills in ASD. The study of the neural basis of speech perception in ASD can serve as a potential neurobiological marker of ASD early on, but mixed results across studies renders it difficult to find a reliable neural characterization of speech processing in ASD. To this aim, the present study examined the functional neural basis of speech perception in ASD versus typical development (TD) using an activation likelihood estimation (ALE) meta-analysis of 18 qualifying studies. The present study included separate analyses for TD and ASD, which allowed us to examine patterns of within-group brain activation as well as both common and distinct patterns of brain activation across the ASD and TD groups. Overall, ASD and TD showed mostly common brain activation of speech processing in bilateral superior temporal gyrus (STG) and left inferior frontal gyrus (IFG). However, the results revealed trends for some distinct activation in the TD group showing additional activation in higher-order brain areas including left superior frontal gyrus (SFG), left medial frontal gyrus (MFG), and right IFG. These results provide a more reliable neural characterization of speech processing in ASD relative to previous single neuroimaging studies and motivate future work to investigate how these brain signatures relate to behavioral measures of speech processing in ASD. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Grace J. [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (Canada); Stayman, J. Webster; Zbijewski, Wojciech [Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (United States); Siewerdsen, Jeffrey H., E-mail: jeff.siewerdsen@jhu.edu [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according toFessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP

  3. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI package), and (4) the bootstrapping technique. Results The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0,0.073), StatXact (0,0.064), SAS Score method (0,0.057), R PropCI (0,0.062), and bootstrap (0,0.048). Similar trends were observed for other sample sizes. Conclusions When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and accompanying free open-source R package were developed to yield realistic estimates easily. This
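
    The described construction, locating the lowest population sensitivity whose median sample sensitivity is still 100% and bootstrapping the negative LR from it, can be sketched in Python as below (a loose re-implementation of the idea, not the authors' bootLR package). The worked numbers mirror the example in the abstract: 100 diseased patients all testing positive and 60% specificity.

```python
import numpy as np

rng = np.random.default_rng(9)

# Study with 100 diseased (all test positive -> sample sensitivity 100%)
# and 100 non-diseased of whom 60 test negative (specificity 60%).
n_dis, n_nodis, true_neg = 100, 100, 60
spec_hat = true_neg / n_nodis

# Lowest population sensitivity whose median sample sensitivity is still 100%:
# P(Binomial(n_dis, p) = n_dis) >= 0.5  <=>  p >= 0.5 ** (1 / n_dis)
p_sens = 0.5 ** (1.0 / n_dis)             # ~0.9931 for n_dis = 100

# Bootstrap the negative likelihood ratio (1 - sens) / spec.
B = 100_000
sens_b = rng.binomial(n_dis, p_sens, B) / n_dis
spec_b = rng.binomial(n_nodis, spec_hat, B) / n_nodis
spec_b = np.clip(spec_b, 1.0 / (2 * n_nodis), None)   # guard against division by zero
neg_lr = (1.0 - sens_b) / spec_b

lo, hi = np.percentile(neg_lr, [2.5, 97.5])
print(f"bootstrap 95% CI for the negative LR: ({lo:.3f}, {hi:.3f})")
```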

  4. Anatomical likelihood estimation meta-analysis of grey and white matter anomalies in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Thomas P. DeRamus

    2015-01-01

    Full Text Available Autism spectrum disorders (ASD) are characterized by impairments in social communication and restrictive, repetitive behaviors. While behavioral symptoms are well-documented, investigations into the neurobiological underpinnings of ASD have not resulted in firm biomarkers. Variability in findings across structural neuroimaging studies has contributed to difficulty in reliably characterizing the brain morphology of individuals with ASD. These inconsistencies may also arise from the heterogeneity of ASD and the wider age range of participants included in MRI studies and in previous meta-analyses. To address this, the current study used coordinate-based anatomical likelihood estimation (ALE) analysis of 21 voxel-based morphometry (VBM) studies examining high-functioning individuals with ASD, resulting in a meta-analysis of 1055 participants (506 ASD and 549 typically developing individuals). Results consisted of grey, white, and global differences in cortical matter between the groups. Modeled anatomical maps consisting of concentration, thickness, and volume metrics of grey and white matter revealed clusters suggesting age-related decreases in grey and white matter in parietal and inferior temporal regions of the brain in ASD, and age-related increases in grey matter in frontal and anterior-temporal regions. White matter alterations included fiber tracts thought to play key roles in information processing and sensory integration. Many current theories of the pathobiology of ASD suggest that the brains of individuals with ASD may have less-functional long-range (anterior-to-posterior) connections. Our findings of decreased cortical matter in parietal–temporal and occipital regions, and thickening in frontal cortices in older adults with ASD may entail altered cortical anatomy and neurodevelopmental adaptations.

  5. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model-structure uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, a phenomenon known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE (Generalized Likelihood Uncertainty Estimation) methodology were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model, and then to analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits equifinality at a first stage. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrease is equivalent to reducing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool for assisting the modeller with the identification of critical parameters.
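    For readers unfamiliar with GLUE, the sketch below illustrates its basic mechanics: sample parameter sets, score each simulation with an informal likelihood measure, retain the behavioural sets above a cutoff, and form uncertainty bounds from likelihood-weighted quantiles of the simulated series. The Nash-Sutcliffe efficiency, the cutoff value and all names are assumptions for illustration, not the authors' model or settings.

```python
import numpy as np

def glue_uncertainty_bounds(model, obs, param_samples, threshold=0.5, q=(0.05, 0.95)):
    """Minimal GLUE sketch; `model(params)` returns a simulated series of len(obs)."""
    sims, scores = [], []
    for p in param_samples:
        sim = model(p)
        # Nash-Sutcliffe efficiency used as an informal likelihood measure.
        scores.append(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))
        sims.append(sim)
    sims, scores = np.asarray(sims), np.asarray(scores)

    behavioural = scores > threshold              # equifinal, "behavioural" parameter sets
    if not behavioural.any():
        raise ValueError("no behavioural parameter sets above the threshold")
    w = scores[behavioural] / scores[behavioural].sum()   # rescaled likelihood weights

    lower, upper = [], []
    for t in range(obs.size):                     # weighted quantiles per time step
        order = np.argsort(sims[behavioural, t])
        vals, cdf = sims[behavioural, t][order], np.cumsum(w[order])
        lower.append(vals[np.searchsorted(cdf, q[0])])
        upper.append(vals[np.searchsorted(cdf, q[1])])
    return np.asarray(lower), np.asarray(upper), behavioural
```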

  6. Likelihood Estimation of the Systemic Poison-Induced Morbidity in an Adult North Eastern Romanian Population

    Directory of Open Access Journals (Sweden)

    Cătălina Lionte

    2016-12-01

    Full Text Available Purpose: Acute exposure to a systemic poison represents an important segment of medical emergencies. We aimed to estimate the likelihood of systemic poison-induced morbidity in a population admitted to a tertiary referral center in North East Romania, based on the determinant factors. Methodology: This was a prospective observational cohort study of adult poisoned patients. Demographic, clinical and laboratory characteristics were recorded in all patients. We analyzed three groups of patients, based on the associated morbidity during hospitalization. We identified significant differences between groups and predictors with significant effects on morbidity using multiple multinomial logistic regressions. ROC analysis showed that a combination of tests could improve the diagnostic accuracy for poison-related morbidity. Main findings: Of the 180 patients included, aged 44.7 ± 17.2 years, 51.1% males, 49.4% had no poison-related morbidity, 28.9% developed a mild morbidity, and 21.7% had a severe morbidity, followed by death in 16 patients (8.9%). Multiple complications and deaths were recorded in patients aged 53.4 ± 17.6 years (p < .001), with a lower Glasgow Coma Scale (GCS) score upon admission and a significantly higher heart rate (101 ± 32 beats/min, p < .011). Routine laboratory tests were significantly higher in patients with a recorded morbidity. Multiple logistic regression analysis demonstrated that a GCS < 8, a high white blood cell count (WBC), alanine aminotransferase (ALAT), myoglobin, glycemia and brain natriuretic peptide (BNP) are strongly predictive of severe in-hospital morbidity. Originality: This is the first Romanian prospective study of adult poisoned patients that identifies the factors responsible for in-hospital morbidity using logistic regression analyses, with resulting receiver operating characteristic (ROC) curves. Conclusion: In acute intoxication with systemic poisons, we identified several clinical and laboratory variables

  7. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    Science.gov (United States)

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  8. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1991-01-01

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual

  9. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1992-01-01

    In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual

  10. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
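    One natural reading of this step-size generalization is an over-relaxed EM update: the classical successive-approximations (EM-type) iteration corresponds to a step size of 1, and the generalized procedure moves a fraction of the way from the current parameters towards that update, with the step size kept between 0 and 2. The sketch below illustrates this for a univariate two-component mixture; it is an illustration under that assumption, with hypothetical names, not the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import norm

def em_step(x, weights, means, sds):
    """One classical EM (successive-approximations) update; step size 1 in the text above."""
    resp = weights * norm.pdf(x[:, None], means, sds)       # responsibilities, shape (n, 2)
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)
    new_means = (resp * x[:, None]).sum(axis=0) / nk
    new_sds = np.sqrt((resp * (x[:, None] - new_means) ** 2).sum(axis=0) / nk)
    return nk / x.size, new_means, new_sds

def relaxed_em(x, weights, means, sds, step=1.5, n_iter=200):
    """Move a fraction `step` of the way towards the plain EM update (0 < step < 2)."""
    for _ in range(n_iter):
        w1, m1, s1 = em_step(x, weights, means, sds)
        weights = weights + step * (w1 - weights)
        means = means + step * (m1 - means)
        sds = np.clip(sds + step * (s1 - sds), 1e-6, None)
        weights = np.clip(weights, 1e-6, None)
        weights = weights / weights.sum()
    return weights, means, sds

# Toy example with two well-separated components.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.5, 500)])
print(relaxed_em(x, np.array([0.5, 0.5]), np.array([-1.0, 6.0]), np.array([1.0, 1.0])))
```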

  13. Smoothing of X-ray diffraction data and Kα2 elimination using penalized likelihood and the composite link model

    NARCIS (Netherlands)

    De Rooi, J.J.; Van der Pers, N.M.; Hendrikx, R.W.A.; Delhez, R.; Bottger, A.J.; Eilers, P.H.C.

    2014-01-01

    X-ray diffraction scans consist of series of counts; these numbers obey Poisson distributions with varying expected values. These scans are often smoothed and the Kα2 component is removed. This article proposes a framework in which both issues are treated. Penalized likelihood estimation is used to

  14. Estimation of Financial Agent-Based Models with Simulated Maximum Likelihood

    Czech Academy of Sciences Publication Activity Database

    Kukačka, Jiří; Baruník, Jozef

    2017-01-01

    Roč. 85, č. 1 (2017), s. 21-45 ISSN 0165-1889 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords : heterogeneous agent model, * simulated maximum likelihood * switching Subject RIV: AH - Economics OBOR OECD: Finance Impact factor: 1.000, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kukacka-0478481.pdf

  15. Neural Networks Involved in Adolescent Reward Processing: An Activation Likelihood Estimation Meta-Analysis of Functional Neuroimaging Studies

    Science.gov (United States)

    Silverman, Merav H.; Jedd, Kelly; Luciana, Monica

    2015-01-01

    Behavioral responses to, and the neural processing of, rewards change dramatically during adolescence and may contribute to observed increases in risk-taking during this developmental period. Functional MRI (fMRI) studies suggest differences between adolescents and adults in neural activation during reward processing, but findings are contradictory, and effects have been found in non-predicted directions. The current study uses an activation likelihood estimation (ALE) approach for quantitative meta-analysis of functional neuroimaging studies to: 1) confirm the network of brain regions involved in adolescents’ reward processing, 2) identify regions involved in specific stages (anticipation, outcome) and valence (positive, negative) of reward processing, and 3) identify differences in activation likelihood between adolescent and adult reward-related brain activation. Results reveal a subcortical network of brain regions involved in adolescent reward processing similar to that found in adults with major hubs including the ventral and dorsal striatum, insula, and posterior cingulate cortex (PCC). Contrast analyses find that adolescents exhibit greater likelihood of activation in the insula while processing anticipation relative to outcome and greater likelihood of activation in the putamen and amygdala during outcome relative to anticipation. While processing positive compared to negative valence, adolescents show increased likelihood for activation in the posterior cingulate cortex (PCC) and ventral striatum. Contrasting adolescent reward processing with the existing ALE of adult reward processing (Liu et al., 2011) reveals increased likelihood for activation in limbic, frontolimbic, and striatal regions in adolescents compared with adults. Unlike adolescents, adults also activate executive control regions of the frontal and parietal lobes. These findings support hypothesized elevations in motivated activity during adolescence. PMID:26254587

  16. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  17. Cosmochemical Estimates of Mantle Composition

    Science.gov (United States)

    Palme, H.; O'Neill, H. St. C.

    2003-12-01

    , and a crust. Both Daubrée and Boisse also expected that the Earth was composed of a similar sequence of concentric layers (see Burke, 1986; Marvin, 1996).At the beginning of the twentieth century Harkins at the University of Chicago thought that meteorites would provide a better estimate for the bulk composition of the Earth than the terrestrial rocks collected at the surface as we have only access to the "mere skin" of the Earth. Harkins made an attempt to reconstruct the composition of the hypothetical meteorite planet by compiling compositional data for 125 stony and 318 iron meteorites, and mixing the two components in ratios based on the observed falls of stones and irons. The results confirmed his prediction that elements with even atomic numbers are more abundant and therefore more stable than those with odd atomic numbers and he concluded that the elemental abundances in the bulk meteorite planet are determined by nucleosynthetic processes. For his meteorite planet Harkins calculated Mg/Si, Al/Si, and Fe/Si atomic ratios of 0.86, 0.079, and 0.83, very closely resembling corresponding ratios of the average solar system based on presently known element abundances in the Sun and in CI-meteorites (see Burke, 1986).If the Earth were similar compositionally to the meteorite planet, it should have a similarly high iron content, which requires that the major fraction of iron is concentrated in the interior of the Earth. The presence of a central metallic core to the Earth was suggested by Wiechert in 1897. The existence of the core was firmly established using the study of seismic wave propagation by Oldham in 1906 with the outer boundary of the core accurately located at a depth of 2,900km by Beno Gutenberg in 1913. In 1926 the fluidity of the outer core was finally accepted. The high density of the core and the high abundance of iron and nickel in meteorites led very early to the suggestion that iron and nickel are the dominant elements in the Earth's core (Brush

  18. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
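    The structure of such a photon-by-photon likelihood can be sketched compactly. The code below assumes a two-state model with a state-independent photon count rate and donor/acceptor colors only, with generic parameter names; it is a simplified illustration of this class of likelihood, maximized numerically, and is not the paper's exact formulation or error analysis.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def two_state_log_likelihood(params, colors, dt):
    """Photon-by-photon log-likelihood for a two-state model (illustrative sketch).
    colors: 0 (donor) / 1 (acceptor) per photon; dt: time since the previous photon."""
    E1, E2, k12, k21 = params
    K = np.array([[-k12, k21], [k12, -k21]])     # rate matrix, dp/dt = K p
    peq = np.array([k21, k12]) / (k12 + k21)     # equilibrium state populations
    F = {1: np.diag([E1, E2]), 0: np.diag([1.0 - E1, 1.0 - E2])}

    vec = F[colors[0]] @ peq
    s = vec.sum()
    log_l, vec = np.log(s), vec / s              # running rescaling avoids underflow
    for c, tau in zip(colors[1:], dt[1:]):
        vec = F[c] @ expm(K * tau) @ vec
        s = vec.sum()
        log_l, vec = log_l + np.log(s), vec / s
    return log_l

def fit_two_state(colors, dt, x0=(0.2, 0.8, 1.0, 1.0)):
    """Maximize the likelihood numerically over efficiencies and transition rates."""
    bounds = [(1e-3, 1 - 1e-3)] * 2 + [(1e-6, None)] * 2
    return minimize(lambda p: -two_state_log_likelihood(p, colors, dt),
                    x0, bounds=bounds, method="L-BFGS-B")
```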

  19. Can Machines Learn Respiratory Virus Epidemiology?: A Comparative Study of Likelihood-Free Methods for the Estimation of Epidemiological Dynamics

    Directory of Open Access Journals (Sweden)

    Heidi L. Tessmer

    2018-03-01

    Full Text Available To estimate and predict the transmission dynamics of respiratory viruses, the estimation of the basic reproduction number, R0, is essential. Recently, approximate Bayesian computation methods have been used as likelihood-free methods to estimate epidemiological model parameters, particularly R0. In this paper, we explore various machine learning approaches, the multi-layer perceptron, convolutional neural network, and long short-term memory, to learn and estimate the parameters. Further, we compare the accuracy of the estimates and time requirements for machine learning and the approximate Bayesian computation methods on both simulated and real-world epidemiological data from outbreaks of influenza A(H1N1)pdm09, mumps, and measles. We find that the machine learning approaches can be verified and tested faster than the approximate Bayesian computation method, but that the approximate Bayesian computation method is more robust across different datasets.
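    As background for the comparison, the likelihood-free (ABC rejection) idea can be sketched in a few lines: draw R0 from a prior, simulate an epidemic, and keep the draws whose simulated curve lies closest to the observed one. The simulator, prior, distance and acceptance fraction below are illustrative assumptions and are not the models, data or settings used in the paper, which additionally trains multi-layer perceptron, convolutional and long short-term memory networks as alternative estimators.

```python
import numpy as np

def sir_simulate(r0, gamma=0.5, n=10_000, i0=10, days=60, rng=None):
    """Small stochastic SIR simulator returning daily new cases (illustrative only)."""
    rng = rng or np.random.default_rng()
    beta, s, i, incidence = r0 * gamma, n - i0, i0, []
    for _ in range(days):
        new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n))
        new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
        s, i = s - new_inf, i + new_inf - new_rec
        incidence.append(new_inf)
    return np.array(incidence)

def abc_rejection_r0(observed, n_sims=5_000, accept_frac=0.02, prior=(1.0, 5.0), seed=0):
    """Basic ABC rejection: accept the prior draws with the smallest distance to the data."""
    rng = np.random.default_rng(seed)
    draws = rng.uniform(*prior, n_sims)
    dists = np.array([np.linalg.norm(sir_simulate(r, rng=rng) - observed) for r in draws])
    return draws[dists <= np.quantile(dists, accept_frac)]   # approximate posterior sample

# Usage: posterior = abc_rejection_r0(observed_daily_cases); posterior.mean() estimates R0.
```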

  20. Estimating Amazonian rainforest stability and the likelihood for large-scale forest dieback

    Science.gov (United States)

    Rammig, Anja; Thonicke, Kirsten; Jupp, Tim; Ostberg, Sebastian; Heinke, Jens; Lucht, Wolfgang; Cramer, Wolfgang; Cox, Peter

    2010-05-01

    Annually, tropical forests process approximately 18 Pg of carbon through respiration and photosynthesis - more than twice the rate of anthropogenic fossil fuel emissions. Current climate change may be transforming this carbon sink into a carbon source by changing forest structure and dynamics. Increasing temperatures and potentially decreasing precipitation and thus prolonged drought stress may lead to increasing physiological stress and reduced productivity for trees. Resulting decreases in evapotranspiration and therefore convective precipitation could further accelerate drought conditions and destabilize the tropical ecosystem as a whole and lead to an 'Amazon forest dieback'. The projected direction and intensity of climate change vary widely within the region and between different scenarios from climate models (GCMs). In the scope of a World Bank-funded study, we assessed the 24 General Circulation Models (GCMs) evaluated in the 4th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC-AR4) with respect to their capability to reproduce present-day climate in the Amazon basin using a Bayesian approach. With this approach, greater weight is assigned to the models that simulate well the annual cycle of rainfall. We then use the resulting weightings to create probability density functions (PDFs) for future forest biomass changes as simulated by the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJmL) to estimate the risk of potential Amazon rainforest dieback. Our results show contrasting changes in forest biomass throughout five regions of northern South America: If photosynthetic capacity and water use efficiency is enhanced by CO2, biomass increases across all five regions. However, if CO2-fertilisation is assumed to be absent or less important, then substantial dieback occurs in some scenarios and thus, the risk of forest dieback is considerably higher. Particularly affected are regions in the central Amazon basin. The range of

  1. Maximum likelihood PSD estimation for speech enhancement in reverberant and noisy conditions

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Jesper

    2016-01-01

    of the estimator is in speech enhancement algorithms, such as the Multi-channel Wiener Filter (MWF) and the Minimum Variance Distortionless Response (MVDR) beamformer. We evaluate these two algorithms in a speech dereverberation task and compare the performance obtained using the proposed and a competing PSD...... estimator. Instrumental performance measures indicate an advantage of the proposed estimator over the competing one. In a speech intelligibility test all algorithms significantly improved the word intelligibility score. While the results suggest a minor advantage of using the proposed PSD estimator...

  2. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    Science.gov (United States)

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. http://www.atgc-montpellier.fr/ReplacementMatrix/ olivier.gascuel@lirmm.fr Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  3. Detecting changes in ultrasound backscattered statistics by using Nakagami parameters: Comparisons of moment-based and maximum likelihood estimators.

    Science.gov (United States)

    Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang

    2017-05-01

    The Nakagami distribution is a useful approximation to the statistics of ultrasound backscattered signals for tissue characterization. The choice of estimator may affect the ability of the Nakagami parameter to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
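    For concreteness, the two families of estimators compared here can be illustrated on the Nakagami shape parameter m. The sketch below implements the moment-based estimator and a numerically solved maximum-likelihood estimate, using the fact that the squared envelope follows a gamma distribution; it does not reproduce the specific first- and second-order or Greenwood MLE approximations evaluated in the study.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

def nakagami_m_moment(r):
    """Moment-based estimator (MBE): m = E[R^2]^2 / Var(R^2)."""
    i = np.asarray(r) ** 2
    return i.mean() ** 2 / i.var()

def nakagami_m_mle(r):
    """Numerical MLE of m: the intensity I = R^2 is Gamma(m, Omega/m), so the profile
    likelihood equation is log(m) - digamma(m) = log(mean I) - mean(log I)."""
    i = np.asarray(r) ** 2
    s = np.log(i.mean()) - np.log(i).mean()
    return brentq(lambda m: np.log(m) - digamma(m) - s, 1e-3, 1e3)

# Example with simulated Nakagami envelopes (m = 2, Omega = 1): R = sqrt(I), I ~ Gamma(2, 0.5).
rng = np.random.default_rng(0)
samples = np.sqrt(rng.gamma(shape=2.0, scale=0.5, size=5000))
print(nakagami_m_moment(samples), nakagami_m_mle(samples))
```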

  4. An Activation Likelihood Estimation Meta-Analysis Study of Simple Motor Movements in Older and Young Adults

    Science.gov (United States)

    Turesky, Ted K.; Turkeltaub, Peter E.; Eden, Guinevere F.

    2016-01-01

    The functional neuroanatomy of finger movements has been characterized with neuroimaging in young adults. However, less is known about the aging motor system. Several studies have contrasted movement-related activity in older versus young adults, but there is inconsistency among their findings. To address this, we conducted an activation likelihood estimation (ALE) meta-analysis on within-group data from older adults and young adults performing regularly paced right-hand finger movement tasks in response to external stimuli. We hypothesized that older adults would show a greater likelihood of activation in right cortical motor areas (i.e., ipsilateral to the side of movement) compared to young adults. ALE maps were examined for conjunction and between-group differences. Older adults showed overlapping likelihoods of activation with young adults in left primary sensorimotor cortex (SM1), bilateral supplementary motor area, bilateral insula, left thalamus, and right anterior cerebellum. Their ALE map differed from that of the young adults in right SM1 (extending into dorsal premotor cortex), right supramarginal gyrus, medial premotor cortex, and right posterior cerebellum. The finding that older adults uniquely use ipsilateral regions for right-hand finger movements and show age-dependent modulations in regions recruited by both age groups provides a foundation by which to understand age-related motor decline and motor disorders. PMID:27799910

  5. Pilot power optimization for AF relaying using maximum likelihood channel estimation

    KAUST Repository

    Wang, Kezhi

    2014-09-01

    Bit error rates (BERs) for amplify-and-forward (AF) relaying systems with two different pilot-symbol-aided channel estimation methods, disintegrated channel estimation (DCE) and cascaded channel estimation (CCE), are derived in Rayleigh fading channels. Based on these BERs, the pilot powers at the source and at the relay are optimized when their total transmitting powers are fixed. Numerical results show that the optimized system has a better performance than other conventional nonoptimized allocation systems. They also show that the optimal pilot power in variable gain is nearly the same as that in fixed gain for similar system settings. © 2014 IEEE.

  6. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    Directory of Open Access Journals (Sweden)

    Wang Huai-Chun

    2009-09-01

    Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.

  7. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    Science.gov (United States)

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  8. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of squares estimators in the context of parametric fractional...... time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution...... of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...

  9. Maximum likelihood estimation of dose-response parameters for therapeutic operating characteristic (TOC) analysis of carcinoma of the nasopharynx

    International Nuclear Information System (INIS)

    Metz, C.E.; Tokars, R.P.; Kronman, H.B.; Griem, M.L.

    1982-01-01

    A Therapeutic Operating Characteristic (TOC) curve for radiation therapy plots, for all possible treatment doses, the probability of tumor ablation as a function of the probability of radiation-induced complication. Application of this analysis to actual therapeutic situations requires that dose-response curves for ablation and for complication be estimated from clinical data. We describe an approach in which "maximum likelihood estimates" of these dose-response curves are made, and we apply this approach to data collected on responses to radiotherapy for carcinoma of the nasopharynx. TOC curves constructed from the estimated dose-response curves are subject to moderately large uncertainties because of the limitations of available data. These TOC curves suggest, however, that treatment doses greater than 1800 rem may substantially increase the probability of tumor ablation with little increase in the risk of radiation-induced cervical myelopathy, especially for T1 and T2 tumors.

  10. Parameter-free bearing fault detection based on maximum likelihood estimation and differentiation

    International Nuclear Information System (INIS)

    Bozchalooi, I Soltani; Liang, Ming

    2009-01-01

    Bearing faults can lead to malfunction and ultimately complete stall of many machines. The conventional high-frequency resonance (HFR) method has been commonly used for bearing fault detection. However, it is often very difficult to obtain and calibrate bandpass filter parameters, i.e. the center frequency and bandwidth, the key to the success of the HFR method. This inevitably undermines the usefulness of the conventional HFR technique. To avoid such difficulties, we propose parameter-free, versatile yet straightforward techniques to detect bearing faults. We focus on two types of measured signals frequently encountered in practice: (1) a mixture of impulsive faulty bearing vibrations and intrinsic background noise and (2) impulsive faulty bearing vibrations blended with intrinsic background noise and vibration interferences. To design a proper signal processing technique for each case, we analyze the effects of intrinsic background noise and vibration interferences on amplitude demodulation. For the first case, a maximum likelihood-based fault detection method is proposed to accommodate the Rician distribution of the amplitude-demodulated signal mixture. For the second case, we first illustrate that the high-amplitude low-frequency vibration interferences can make the amplitude demodulation ineffective. Then we propose a differentiation method to enhance the fault detectability. It is shown that the iterative application of a differentiation step can boost the relative strength of the impulsive faulty bearing signal component with respect to the vibration interferences. This preserves the effectiveness of amplitude demodulation and hence leads to more accurate fault detection. The proposed approaches are evaluated on simulated signals and experimental data acquired from faulty bearings
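    Two of the signal-conditioning steps described above, amplitude demodulation and repeated differentiation to suppress low-frequency vibration interferences, are easy to sketch. The snippet below is a generic illustration in which a simple kurtosis-based statistic stands in for the maximum likelihood (Rician) detector developed in the paper; the function names, the number of differentiation passes and the detection statistic are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Amplitude demodulation: magnitude of the analytic signal."""
    return np.abs(hilbert(x))

def differentiate(x, order=2, fs=1.0):
    """Repeated differentiation emphasises impulsive (wideband) components relative
    to high-amplitude, low-frequency vibration interferences."""
    y = np.asarray(x, dtype=float)
    for _ in range(order):
        y = np.gradient(y, 1.0 / fs)
    return y

def impulsiveness(x):
    """Surrogate detection statistic: kurtosis of the demodulated envelope
    (small for a healthy bearing, much larger for impulsive faults)."""
    e = envelope(x)
    e = e - e.mean()
    return np.mean(e ** 4) / np.mean(e ** 2) ** 2

# Typical use: compare impulsiveness(differentiate(raw_vibration, order=2, fs=fs))
# against a baseline value recorded from a healthy machine.
```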

  11. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated with the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables perfectly separate the categories of the binary response, causing the MLE estimators to fail to converge so that they cannot be used in modeling. One approach to resolving separation is to use Firth's method instead. This research has two aims: first, to compare the chance of separation occurring in a binary probit regression model under the MLE method and Firth's approach; second, to compare the performance of the estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both comparisons were performed by simulation under different sample sizes. The results showed that for small sample sizes the chance of separation is higher under the MLE method than under Firth's approach, whereas for larger sample sizes the probability decreases and is roughly the same for the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE.

  12. Process for estimating likelihood and confidence in post detonation nuclear forensics.

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Craft, Charles M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-07-01

    Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.

  13. Person fit for test speededness: normal curvatures, likelihood ratio tests and empirical Bayes estimates

    NARCIS (Netherlands)

    Goegebeur, Y.; de Boeck, P.; Molenberghs, G.

    2010-01-01

    The local influence diagnostics, proposed by Cook (1986), provide a flexible way to assess the impact of minor model perturbations on key model parameters’ estimates. In this paper, we apply the local influence idea to the detection of test speededness in a model describing nonresponse in test data,

  14. Estimating Water Demand in Urban Indonesia: A Maximum Likelihood Approach to block Rate Pricing Data

    NARCIS (Netherlands)

    Rietveld, Piet; Rouwendal, Jan; Zwart, Bert

    1997-01-01

    In this paper the Burtless and Hausman model is used to estimate water demand in Salatiga, Indonesia. Other statistical models, such as OLS and IV, are found to be inappropriate. A topic which does not seem to appear in previous studies is the fact that the density function of the loglikelihood can be

  15. Directional maximum likelihood self-estimation of the path-loss exponent

    NARCIS (Netherlands)

    Hu, Y.; Leus, G.J.T.; Dong, Min; Zheng, Thomas Fang

    2016-01-01

    The path-loss exponent (PLE) is a key parameter in wireless propagation channels. Therefore, obtaining the knowledge of the PLE is rather significant for assisting wireless communications and networking to achieve a better performance. Most existing methods for estimating the PLE not only require

  16. Practical aspects of a maximum likelihood estimation method to extract stability and control derivatives from flight data

    Science.gov (United States)

    Iliff, K. W.; Maine, R. E.

    1976-01-01

    A maximum likelihood estimation method was applied to flight data and procedures to facilitate the routine analysis of a large amount of flight data were described. Techniques that can be used to obtain stability and control derivatives from aircraft maneuvers that are less than ideal for this purpose are described. The techniques involve detecting and correcting the effects of dependent or nearly dependent variables, structural vibration, data drift, inadequate instrumentation, and difficulties with the data acquisition system and the mathematical model. The use of uncertainty levels and multiple maneuver analysis also proved to be useful in improving the quality of the estimated coefficients. The procedures used for editing the data and for overall analysis are also discussed.

  17. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio

    2015-11-10

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that is dependent on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.

  18. Generalized likelihood uncertainty estimation (GLUE) using adaptive Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik

    2008-01-01

    propose an alternative strategy to determine the value of the cutoff threshold based on the appropriate coverage of the resulting uncertainty bounds. We demonstrate the superiority of this revised GLUE method with three different conceptual watershed models of increasing complexity, using both synthetic......In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit...... of applications. However, the MC based sampling strategy of the prior parameter space typically utilized in GLUE is not particularly efficient in finding behavioral simulations. This becomes especially problematic for high-dimensional parameter estimation problems, and in the case of complex simulation models...

  19. Altered sensorimotor activation patterns in idiopathic dystonia-an activation likelihood estimation meta-analysis of functional brain imaging studies

    DEFF Research Database (Denmark)

    Løkkegaard, Annemette; Herz, Damian M; Haagensen, Brian Numelin

    2016-01-01

    Dystonia is characterized by sustained or intermittent muscle contractions causing abnormal, often repetitive, movements or postures. Functional neuroimaging studies have yielded abnormal task-related sensorimotor activation in dystonia, but the results appear to be rather variable across studies....... Further, study size was usually small including different types of dystonia. Here we performed an activation likelihood estimation (ALE) meta-analysis of functional neuroimaging studies in patients with primary dystonia to test for convergence of dystonia-related alterations in task-related activity...... postcentral gyrus, right superior temporal gyrus and dorsal midbrain. Apart from the midbrain cluster, all between-group differences in task-related activity were retrieved in a sub-analysis including only the 14 studies on patients with focal dystonia. For focal dystonia, an additional cluster of increased...

  20. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    Science.gov (United States)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N0 approaches infinity (regardless of the relative sizes of N0 and Ni, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  1. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Directory of Open Access Journals (Sweden)

    Arianna LaCroix

    2015-08-01

    Full Text Available The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel’s Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch’s neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music versus speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.

  2. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Science.gov (United States)

    LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976

  3. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi

    2014-10-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB in effective signal-to-noise ratio.
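    The ingredient shared by both estimation methods, pilot-aided maximum likelihood estimation of a flat Rayleigh-fading coefficient, can be sketched briefly: under additive white Gaussian noise with a known pilot, the ML estimate coincides with the least-squares estimate. The illustration below only shows this estimation step and how pilot power affects its accuracy; the DCE/CCE structures, BER expressions and optimal allocations derived in the paper are not reproduced, and all numbers are illustrative.

```python
import numpy as np

def ml_channel_estimate(y_pilot, x_pilot):
    """ML (= least-squares) channel estimate for y = h*x + n with a known pilot x:
    h_hat = (x^H y) / (x^H x)."""
    x, y = np.asarray(x_pilot), np.asarray(y_pilot)
    return np.vdot(x, y) / np.vdot(x, x)

rng = np.random.default_rng(0)
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)          # Rayleigh-fading coefficient
total_power, n_pilots, noise_var = 10.0, 4, 0.1
for pilot_fraction in (0.1, 0.3, 0.5):                       # share of the budget spent on pilots
    p_pilot = pilot_fraction * total_power / n_pilots
    x_p = np.sqrt(p_pilot) * np.ones(n_pilots)               # known pilot sequence
    noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
    h_hat = ml_channel_estimate(h * x_p + noise, x_p)
    print(pilot_fraction, abs(h - h_hat) ** 2)               # estimation error vs. pilot power
```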

  4. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  5. BER and optimal power allocation for amplify-and-forward relaying using pilot-aided maximum likelihood estimation

    KAUST Repository

    Wang, Kezhi; Chen, Yunfei; Alouini, Mohamed-Slim; Xu, Feng

    2014-01-01

    Bit error rate (BER) and outage probability for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using pilot-aided maximum likelihood method in slowly fading Rayleigh channels are derived. Based on the BERs, the optimal values of pilot power under the total transmitting power constraints at the source and the optimal values of pilot power under the total transmitting power constraints at the relay are obtained, separately. Moreover, the optimal power allocation between the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay are obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match with the simulation results. They also show that the proposed systems with optimal power allocation outperform the conventional systems without power allocation under the same other conditions. In some cases, the gain could be as large as several dB's in effective signal-to-noise ratio.

  6. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  7. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Oliver, Margaret A. [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom); Walker, Allan [Warwick-HRI, University of Warwick, Wellesbourne, Warwick, CV32 6EF (United Kingdom); Wood, Martin [University of Reading, Soil Science Department, Whiteknights, Reading, RG6 6UR (United Kingdom)

    2009-05-15

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  8. Estimating the spatial scale of herbicide and soil interactions by nested sampling, hierarchical analysis of variance and residual maximum likelihood

    International Nuclear Information System (INIS)

    Price, Oliver R.; Oliver, Margaret A.; Walker, Allan; Wood, Martin

    2009-01-01

    An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small and medium scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide, isoproturon, were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field. - Estimating the spatial scale of herbicide and soil interactions by nested sampling.

  9. Is there a critical lesion site for unilateral spatial neglect? A meta-analysis using activation likelihood estimation.

    Directory of Open Access Journals (Sweden)

    Pascal eMolenberghs

    2012-04-01

    Full Text Available The critical lesion site responsible for the syndrome of unilateral spatial neglect has been debated for more than a decade. Here we performed an activation likelihood estimation (ALE) to provide for the first time an objective quantitative index of the consistency of lesion sites across anatomical group studies of spatial neglect. The analysis revealed several distinct regions in which damage has consistently been associated with spatial neglect symptoms. Lesioned clusters were located in several cortical and subcortical regions of the right hemisphere, including the middle and superior temporal gyrus, inferior parietal lobule, intraparietal sulcus, precuneus, middle occipital gyrus, caudate nucleus and posterior insula, as well as in the white matter pathway corresponding to the posterior part of the superior longitudinal fasciculus. Further analyses suggested that separate lesion sites are associated with impairments in different behavioural tests, such as line bisection and target cancellation. Similarly, specific subcomponents of the heterogeneous neglect syndrome, such as extinction and allocentric and personal neglect, are associated with distinct lesion sites. Future progress in delineating the neuropathological correlates of spatial neglect will depend upon the development of more refined measures of perceptual and cognitive functions than those currently available in the clinical setting.

  10. MLE [Maximum Likelihood Estimator] reconstruction of a brain phantom using a Monte Carlo transition matrix and a statistical stopping rule

    International Nuclear Information System (INIS)

    Veklerov, E.; Llacer, J.; Hoffman, E.J.

    1987-10-01

    In order to study properties of the Maximum Likelihood Estimator (MLE) algorithm for image reconstruction in Positron Emission Tomography (PET), the algorithm is applied to data obtained by the ECAT-III tomograph from a brain phantom. The procedure for subtracting accidental coincidences from the data stream generated by this physical phantom is such that the resultant data are not Poisson distributed. This makes the present investigation different from other investigations based on computer-simulated phantoms. It is shown that the MLE algorithm is robust enough to yield comparatively good images, especially when the phantom is in the periphery of the field of view, even though the underlying assumption of the algorithm is violated. Two transition matrices are utilized. The first uses geometric considerations only. The second is derived by a Monte Carlo simulation which takes into account Compton scattering in the detectors, positron range, etc. It is demonstrated that the images obtained from the Monte Carlo matrix are superior in some specific ways. A stopping rule derived earlier and allowing the user to stop the iterative process before the images begin to deteriorate is tested. Since the rule is based on the Poisson assumption, it does not work well with the presently available data, although it is successful with computer-simulated Poisson data.
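
    The MLE algorithm referenced in this record is the iterative ML-EM update for emission tomography. The following toy sketch (a minimal illustration with a random, made-up system matrix and dimensions chosen only for brevity; it is not the ECAT-III geometry nor the Monte Carlo transition matrix of the study) shows the multiplicative update applied to Poisson projection data:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy problem sizes (illustrative only; not the ECAT-III geometry).
        n_pixels, n_detectors = 64, 256

        A = rng.random((n_detectors, n_pixels))      # system ("transition") matrix
        A /= A.sum(axis=0, keepdims=True)            # normalise column detection probabilities
        x_true = rng.gamma(2.0, 50.0, n_pixels)      # true activity distribution
        y = rng.poisson(A @ x_true)                  # Poisson projection data

        x = np.ones(n_pixels)                        # uniform initial image
        sens = A.sum(axis=0)                         # sensitivity image, A^T 1
        for _ in range(50):                          # fixed iteration count; the paper's stopping
            ratio = y / np.clip(A @ x, 1e-12, None)  # rule would terminate this loop instead
            x *= (A.T @ ratio) / sens

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

    The statistical stopping rule discussed in the abstract would replace the fixed iteration count used here.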

  11. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  12. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
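
    The joint ML estimators proposed in the paper are not reproduced here, but the underlying idea of compensating a per-trial delay before averaging can be illustrated with a much simpler scheme: estimate each trial's shift by maximizing its cross-correlation with a template (here the naive average), then realign and average. Everything below (waveform shape, noise level, delay range) is an illustrative assumption, not the authors' method.

        import numpy as np

        rng = np.random.default_rng(2)

        n_trials, n_samples = 40, 300
        t = np.arange(n_samples)
        erp = np.exp(-0.5 * ((t - 150) / 20.0) ** 2)            # toy ERP waveform (illustrative)

        # Each trial is the ERP shifted by an unknown delay plus noise.
        true_delays = rng.integers(-15, 16, n_trials)
        trials = np.array([np.roll(erp, d) + 0.5 * rng.standard_normal(n_samples)
                           for d in true_delays])               # np.roll wraps around, harmless here

        template = trials.mean(axis=0)                          # naive average as a first template
        est_delays = np.empty(n_trials, dtype=int)
        for i, tr in enumerate(trials):
            xc = np.correlate(tr, template, mode="full")        # cross-correlation over all lags
            est_delays[i] = xc.argmax() - (n_samples - 1)       # lag of the best match

        aligned = np.array([np.roll(tr, -d) for tr, d in zip(trials, est_delays)])
        enhanced_erp = aligned.mean(axis=0)                     # average of realigned trials
        print("mean |delay error|:", np.abs(est_delays - true_delays).mean(), "samples")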

  13. Further Evaluation of Covariate Analysis using Empirical Bayes Estimates in Population Pharmacokinetics: the Perception of Shrinkage and Likelihood Ratio Test.

    Science.gov (United States)

    Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose

    2017-01-01

    Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.

  14. The rate test of speciation: estimating the likelihood of non-allopatric speciation from reproductive isolation rates in Drosophila.

    Science.gov (United States)

    Yukilevich, Roman

    2014-04-01

    Among the most debated subjects in speciation is the question of its mode. Although allopatric (geographical) speciation is assumed the null model, the importance of parapatric and sympatric speciation is extremely difficult to assess and remains controversial. Here I develop a novel approach to distinguish these modes of speciation by studying the evolution of reproductive isolation (RI) among taxa. I focus on the Drosophila genus, for which measures of RI are known. First, I incorporate RI into age-range correlations. Plots show that almost all cases of weak RI are between allopatric taxa whereas sympatric taxa have strong RI. This either implies that most reproductive isolation (RI) was initiated in allopatry or that RI evolves too rapidly in sympatry to be captured at incipient stages. To distinguish between these explanations, I develop a new "rate test of speciation" that estimates the likelihood of non-allopatric speciation given the distribution of RI rates in allopatry versus sympatry. Most sympatric taxa were found to have likely initiated RI in allopatry. However, two putative candidate species pairs for non-allopatric speciation were identified (5% of known Drosophila). In total, this study shows how using RI measures can greatly inform us about the geographical mode of speciation in nature. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  15. Effects of stimulus type and strategy on mental rotation network:an Activation Likelihood Estimation meta-analysis

    Directory of Open Access Journals (Sweden)

    Barbara eTomasino

    2016-01-01

    Full Text Available We could predict what an object would look like if we were to see it from different viewpoints. The brain network governing mental rotation (MR) has been studied using a variety of stimuli and task instructions. By using activation likelihood estimation (ALE) meta-analysis we tested whether different MR networks can be modulated by the type of stimulus (body vs. non-body parts) or by the type of task instructions (motor imagery-based vs. non-motor imagery-based MR instructions). Testing for the bodily and non-bodily stimulus axis revealed a bilateral sensorimotor activation for bodily-related as compared to non-bodily-related stimuli and a posterior right-lateralized activation for non-bodily-related as compared to bodily-related stimuli. A top-down modulation of the network was exerted by the MR task instruction frame, with a bilateral (preferentially left) sensorimotor network for motor imagery- vs. non-motor imagery-based MR instructions, the latter activating a preferentially posterior right occipito-temporal-parietal network. The present quantitative meta-analysis summarizes and amends previous descriptions of the brain network related to MR and shows how it is modulated by top-down and bottom-up experimental factors.

  16. Mapping grey matter reductions in schizophrenia: an anatomical likelihood estimation analysis of voxel-based morphometry studies.

    Science.gov (United States)

    Fornito, A; Yücel, M; Patti, J; Wood, S J; Pantelis, C

    2009-03-01

    Voxel-based morphometry (VBM) is a popular tool for mapping neuroanatomical changes in schizophrenia patients. Several recent meta-analyses have identified the brain regions in which patients most consistently show grey matter reductions, although they have not examined whether such changes reflect differences in grey matter concentration (GMC) or grey matter volume (GMV). These measures assess different aspects of grey matter integrity, and may therefore reflect different pathological processes. In this study, we used the Anatomical Likelihood Estimation procedure to analyse significant differences reported in 37 VBM studies of schizophrenia patients, incorporating data from 1646 patients and 1690 controls, and compared the findings of studies using either GMC or GMV to index grey matter differences. Analysis of all studies combined indicated that grey matter reductions in a network of frontal, temporal, thalamic and striatal regions are among the most frequently reported in literature. GMC reductions were generally larger and more consistent than GMV reductions, and were more frequent in the insula, medial prefrontal, medial temporal and striatal regions. GMV reductions were more frequent in dorso-medial frontal cortex, and lateral and orbital frontal areas. These findings support the primacy of frontal, limbic, and subcortical dysfunction in the pathophysiology of schizophrenia, and suggest that the grey matter changes observed with MRI may not necessarily result from a unitary pathological process.

  17. ROC [Receiver Operating Characteristics] study of maximum likelihood estimator human brain image reconstructions in PET [Positron Emission Tomography] clinical practice

    International Nuclear Information System (INIS)

    Llacer, J.; Veklerov, E.; Nolan, D.; Grafton, S.T.; Mazziotta, J.C.; Hawkins, R.A.; Hoh, C.K.; Hoffman, E.J.

    1990-10-01

    This paper will report on the progress to date in carrying out Receiver Operating Characteristics (ROC) studies comparing Maximum Likelihood Estimator (MLE) and Filtered Backprojection (FBP) reconstructions of normal and abnormal human brain PET data in a clinical setting. A previous statistical study of reconstructions of the Hoffman brain phantom with real data indicated that the pixel-to-pixel standard deviation in feasible MLE images is approximately proportional to the square root of the number of counts in a region, as opposed to a standard deviation which is high and largely independent of the number of counts in FBP. A preliminary ROC study carried out with 10 non-medical observers performing a relatively simple detectability task indicates that, for the majority of observers, lower standard deviation translates itself into a statistically significant detectability advantage in MLE reconstructions. The initial results of ongoing tests with four experienced neurologists/nuclear medicine physicians are presented. Normal cases of 18F-fluorodeoxyglucose (FDG) cerebral metabolism studies and abnormal cases in which a variety of lesions have been introduced into normal data sets have been evaluated. We report on the results of reading the reconstructions of 90 data sets, each corresponding to a single brain slice. It has become apparent that the design of the study based on reading single brain slices is too insensitive and we propose a variation based on reading three consecutive slices at a time, rating only the center slice. 9 refs., 2 figs., 1 tab

  18. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  19. Estimation of flashover voltage probability of overhead line insulators under industrial pollution, based on maximum likelihood method

    International Nuclear Information System (INIS)

    Arab, M.N.; Ayaz, M.

    2004-01-01

    The performance of transmission line insulators is greatly affected by dust, fumes from industrial areas and saline deposits near the coast. Such pollutants in the presence of moisture form a coating on the surface of the insulator, which in turn allows the passage of leakage current. This leakage builds up to a point where flashover develops. The flashover is often followed by permanent failure of insulation resulting in prolonged outages. With the increase in system voltage owing to the greater demand for electrical energy over the past few decades, the importance of flashover due to pollution has received special attention. The objective of the present work was to study the performance of overhead line insulators in the presence of contaminants such as induced salts. A detailed review of the literature and the mechanisms of insulator flashover due to pollution are presented. Experimental investigations on the behavior of overhead line insulators under industrial salt contamination are carried out. A special fog chamber was designed in which the contamination testing of insulators was carried out. Flashover behavior under various degrees of contamination of insulators with the most common industrial fume components, such as nitrate and sulphate compounds, was studied. A statistical method is developed by substituting the normal distribution parameters, estimated by maximum likelihood, into the probability distribution function. The method gives high accuracy in the estimation of the 50% flashover voltage, which is then used to evaluate the critical flashover index at various contamination levels. The critical flashover index is a valuable parameter in insulation design for numerous applications. (author)
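
    The statistical step described at the end of the abstract can be illustrated with a toy maximum likelihood fit of a normal distribution to flashover voltages. The simplification of fitting directly to measured flashover voltages, and every number below, is an assumption made purely for illustration, not the paper's procedure:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Illustrative flashover voltages (kV) measured at one contamination level.
        flashover_kv = rng.normal(loc=85.0, scale=6.0, size=40)

        # Maximum likelihood estimates of the normal parameters (scipy's fit is the MLE).
        mu_hat, sigma_hat = stats.norm.fit(flashover_kv)

        u50 = mu_hat                                    # 50% flashover voltage estimate
        u10 = stats.norm.ppf(0.10, mu_hat, sigma_hat)   # voltage with 10% flashover probability
        print(f"U50 = {u50:.1f} kV, U10 = {u10:.1f} kV")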

  20. Enhancing resolution and contrast in second-harmonic generation microscopy using an advanced maximum likelihood estimation restoration method

    Science.gov (United States)

    Sivaguru, Mayandi; Kabir, Mohammad M.; Gartia, Manas Ranjan; Biggs, David S. C.; Sivaguru, Barghav S.; Sivaguru, Vignesh A.; Berent, Zachary T.; Wagoner Johnson, Amy J.; Fried, Glenn A.; Liu, Gang Logan; Sadayappan, Sakthivel; Toussaint, Kimani C.

    2017-02-01

    Second-harmonic generation (SHG) microscopy is a label-free imaging technique to study collagenous materials in extracellular matrix environment with high resolution and contrast. However, like many other microscopy techniques, the actual spatial resolution achievable by SHG microscopy is reduced by out-of-focus blur and optical aberrations that degrade particularly the amplitude of the detectable higher spatial frequencies. Being a two-photon scattering process, it is challenging to define a point spread function (PSF) for the SHG imaging modality. As a result, in comparison with other two-photon imaging systems like two-photon fluorescence, it is difficult to apply any PSF-engineering techniques to enhance the experimental spatial resolution closer to the diffraction limit. Here, we present a method to improve the spatial resolution in SHG microscopy using an advanced maximum likelihood estimation (AdvMLE) algorithm to recover the otherwise degraded higher spatial frequencies in an SHG image. Through adaptation and iteration, the AdvMLE algorithm calculates an improved PSF for an SHG image and enhances the spatial resolution by decreasing the full-width-at-half-maximum (FWHM) by 20%. Similar results are consistently observed for biological tissues with varying SHG sources, such as gold nanoparticles and collagen in porcine feet tendons. By obtaining an experimental transverse spatial resolution of 400 nm, we show that the AdvMLE algorithm brings the practical spatial resolution closer to the theoretical diffraction limit. Our approach is suitable for adaptation in micro-nano CT and MRI imaging, which has the potential to impact diagnosis and treatment of human diseases.

  1. Event-related fMRI studies of false memory: An Activation Likelihood Estimation meta-analysis.

    Science.gov (United States)

    Kurkela, Kyle A; Dennis, Nancy A

    2016-01-29

    Over the last two decades, a wealth of research in the domain of episodic memory has focused on understanding the neural correlates mediating false memories, or memories for events that never happened. While several recent qualitative reviews have attempted to synthesize this literature, methodological differences amongst the empirical studies and a focus on only a sub-set of the findings has limited broader conclusions regarding the neural mechanisms underlying false memories. The current study performed a voxel-wise quantitative meta-analysis using activation likelihood estimation to investigate commonalities within the functional magnetic resonance imaging (fMRI) literature studying false memory. The results were broken down by memory phase (encoding, retrieval), as well as sub-analyses looking at differences in baseline (hit, correct rejection), memoranda (verbal, semantic), and experimental paradigm (e.g., semantic relatedness and perceptual relatedness) within retrieval. Concordance maps identified significant overlap across studies for each analysis. Several regions were identified in the general false retrieval analysis as well as multiple sub-analyses, indicating their ubiquitous, yet critical role in false retrieval (medial superior frontal gyrus, left precentral gyrus, left inferior parietal cortex). Additionally, several regions showed baseline- and paradigm-specific effects (hit/perceptual relatedness: inferior and middle occipital gyrus; CRs: bilateral inferior parietal cortex, precuneus, left caudate). With respect to encoding, analyses showed common activity in the left middle temporal gyrus and anterior cingulate cortex. No analysis identified a common cluster of activation in the medial temporal lobe. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
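
    As a rough, generic illustration of the comparison made in this record (not the SDAT code, and not real radioxenon standard spectra; the two Gaussian "standard spectra", activities, and noise level below are made up), least squares and Poisson maximum likelihood can be compared on a simulated low-count spectrum:

        import numpy as np
        from scipy.optimize import minimize, nnls

        rng = np.random.default_rng(4)

        # Two made-up "standard spectra" (normalised shapes) over 100 channels.
        x = np.arange(100)
        s1 = np.exp(-0.5 * ((x - 30) / 5.0) ** 2); s1 /= s1.sum()
        s2 = np.exp(-0.5 * ((x - 60) / 8.0) ** 2); s2 /= s2.sum()
        S = np.column_stack([s1, s2])

        a_true = np.array([50.0, 300.0])           # true component activities (counts)
        y = rng.poisson(S @ a_true)                # observed low-statistics spectrum

        a_ls, _ = nnls(S, y.astype(float))         # non-negative least-squares estimate

        def neg_loglik(a):                         # Poisson negative log-likelihood (constant dropped)
            lam = np.clip(S @ a, 1e-12, None)
            return np.sum(lam - y * np.log(lam))

        res = minimize(neg_loglik, x0=np.maximum(a_ls, 1.0),
                       bounds=[(0.0, None)] * 2, method="L-BFGS-B")
        print("true:", a_true, "least squares:", a_ls.round(1), "Poisson ML:", res.x.round(1))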

  3. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Ronald Herrera

    2017-12-01

    Full Text Available In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children’s respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on asthma or allergic rhinoconjunctivitis risk burden in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children’s home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: −4.7; 95% confidence interval (95% CI): −8.4; −0.11); and 4.2 percentage points (CAR: −4.2; 95% CI: −7.9; −0.05) for both outcomes combined. Overall, our results suggest that a hypothetical intervention intended to increase the distance between the place of residence of the highest exposed children would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.

  4. Estimating the Causal Impact of Proximity to Gold and Copper Mines on Respiratory Diseases in Chilean Children: An Application of Targeted Maximum Likelihood Estimation.

    Science.gov (United States)

    Herrera, Ronald; Berger, Ursula; von Ehrenstein, Ondine S; Díaz, Iván; Huber, Stella; Moraga Muñoz, Daniel; Radon, Katja

    2017-12-27

    In a town located in a desert area of Northern Chile, gold and copper open-pit mining is carried out involving explosive processes. These processes are associated with increased dust exposure, which might affect children's respiratory health. Therefore, we aimed to quantify the causal attributable risk of living close to the mines on asthma or allergic rhinoconjunctivitis risk burden in children. Data on the prevalence of respiratory diseases and potential confounders were available from a cross-sectional survey carried out in 2009 among 288 (response: 69%) children living in the community. The proximity of the children's home addresses to the local gold and copper mine was calculated using geographical positioning systems. We applied targeted maximum likelihood estimation to obtain the causal attributable risk (CAR) for asthma, rhinoconjunctivitis and both outcomes combined. Children living more than the first quartile away from the mines were used as the unexposed group. Based on the estimated CAR, a hypothetical intervention in which all children lived at least one quartile away from the copper mine would decrease the risk of rhinoconjunctivitis by 4.7 percentage points (CAR: -4.7; 95% confidence interval (95% CI): -8.4; -0.11); and 4.2 percentage points (CAR: -4.2; 95% CI: -7.9; -0.05) for both outcomes combined. Overall, our results suggest that a hypothetical intervention intended to increase the distance between the place of residence of the highest exposed children would reduce the prevalence of respiratory disease in the community by around four percentage points. This approach could help local policymakers in the development of efficient public health strategies.

  5. Estimation of body composition of pigs

    International Nuclear Information System (INIS)

    Ferrell, C.L.; Cornelius, S.G.

    1984-01-01

    A study was conducted to evaluate the use of deuterium oxide (D2O) for in vivo estimation of body composition of diverse types of pigs. Obese (Ob, 30) and contemporary Hampshire X Yorkshire (C, 30) types of pigs used in the study were managed and fed under typical management regimens. Indwelling catheters were placed in a jugular vein of 6 Ob and 6 C pigs at 4, 8, 12, 18 and 24 wk of age. The D2O was infused (.5 g/kg body weight) as a .9% NaCl solution into the jugular catheter. Blood samples were taken immediately before and at .25, 1, 4, 8, 12, 24 and 48 h after the D2O infusion and D2O concentration in blood water was determined. Pigs were subsequently killed by euthanasia injection. Contents of the gastrointestinal tract were removed and the empty body was then frozen and later ground and sampled for subsequent analyses. Ground body tissue samples were analyzed for water, fat, N, fat-free organic matter and ash. Pig type, age and the type X age interaction were significant sources of variation in live weight, D2O pool size and all empty body components, as well as all fat-free empty body components. Relationships between age and live weight or weight of empty body components, and between live weight, empty body weight, empty body water or D2O space and weight of empty components were highly significant but influenced, in most cases, by pig type. The results of this study suggested that, although relationships between D2O space and body component weights were highly significant, they were influenced by pig type and were little better than live weight for the estimation of body composition

  6. Semiparametric profile likelihood estimation for continuous outcomes with excess zeros in a random-threshold damage-resistance model.

    Science.gov (United States)

    Rice, John D; Tsodikov, Alex

    2017-05-30

    Continuous outcome data with a proportion of observations equal to zero (often referred to as semicontinuous data) arise frequently in biomedical studies. Typical approaches involve two-part models, with one part a logistic model for the probability of observing a zero and some parametric continuous distribution for modeling the positive part of the data. We propose a semiparametric model based on a biological system with competing damage manifestation and resistance processes. This allows us to derive a closed-form profile likelihood based on the retro-hazard function, leading to a flexible procedure for modeling continuous data with a point mass at zero. A simulation study is presented to examine the properties of the method in finite samples. We apply the method to a data set consisting of pulmonary capillary hemorrhage area in lab rats subjected to diagnostic ultrasound. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Likelihood devices in spatial statistics

    NARCIS (Netherlands)

    Zwet, E.W. van

    1999-01-01

    One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments

  8. Age-specific incidence of A/H1N1 2009 influenza infection in England from sequential antibody prevalence data using likelihood-based estimation.

    Directory of Open Access Journals (Sweden)

    Marc Baguelin

    2011-02-01

    Full Text Available Estimating the age-specific incidence of an emerging pathogen is essential for understanding its severity and transmission dynamics. This paper describes a statistical method that uses likelihoods to estimate incidence from sequential serological data. The method requires information on seroconversion intervals and allows integration of information on the temporal distribution of cases from clinical surveillance. Among a family of candidate incidences, a likelihood function is derived by reconstructing the change in seroprevalence from seroconversion following infection and comparing it with the observed sequence of positivity among the samples. This method is applied to derive the cumulative and weekly incidence of A/H1N1 pandemic influenza in England during the second wave using sera taken between September 2009 and February 2010 in four age groups (1-4, 5-14, 15-24, 25-44 years). The highest cumulative incidence was in 5-14 year olds (59%, 95% credible interval (CI): 52%, 68%), followed by 1-4 year olds (49%, 95% CI: 38%, 61%), rates 20 and 40 times higher respectively than estimated from clinical surveillance. The method provides a more accurate and continuous measure of incidence than achieved by comparing prevalence in samples grouped by time period.

  9. An Estimate of the Likelihood for a Climatically Significant Volcanic Eruption Within the Present Decade (2000-2009)

    Science.gov (United States)

    Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)

    2000-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
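
    The Poisson probabilities quoted above follow from P(N >= 1) = 1 - exp(-lambda) once a decadal rate lambda is assumed. The rates used below are illustrative assumptions, not values stated in the abstract: roughly 7 per decade for VEI >= 4 events (consistent with the expectation of about 7 +/- 4), and rates chosen only so that the resulting probabilities land near the quoted ~49% and ~18%:

        import math

        def p_at_least_one(rate_per_decade: float) -> float:
            """P(N >= 1) for N ~ Poisson(rate) over one decade."""
            return 1.0 - math.exp(-rate_per_decade)

        # Rates are illustrative assumptions, not values stated in the abstract.
        for label, rate in [("VEI >= 4", 7.0), ("VEI >= 5", 0.67), ("VEI >= 6", 0.20)]:
            print(f"{label}: rate {rate}/decade -> P(>=1 in 10 yr) = {p_at_least_one(rate):.3f}")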

  10. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.; Ma, Y.; Sang, H.

    2011-01-01

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  11. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p = 2 to p = 3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.
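
    For readers unfamiliar with the composite likelihood idea behind this record, the following minimal sketch illustrates a maximum pairwise composite likelihood estimator, using an ordinary Gaussian random field with an exponential covariance rather than the Gaussian max-stable process of the abstract; the site layout, covariance model and all parameter values are illustrative assumptions:

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(5)

        # Toy setting: zero-mean Gaussian field with exponential covariance at 20 random sites.
        n_sites, n_reps, true_range = 20, 200, 0.3
        sites = rng.random((n_sites, 2))
        dists = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
        data = rng.multivariate_normal(np.zeros(n_sites), np.exp(-dists / true_range), size=n_reps)

        pairs = [(i, j) for i in range(n_sites) for j in range(i + 1, n_sites)]

        def neg_pairwise_cl(rho):
            """Negative pairwise composite log-likelihood for the range parameter rho."""
            nll = 0.0
            for i, j in pairs:
                c = np.exp(-dists[i, j] / rho)
                nll -= multivariate_normal.logpdf(data[:, [i, j]],
                                                  cov=np.array([[1.0, c], [c, 1.0]])).sum()
            return nll

        res = minimize_scalar(neg_pairwise_cl, bounds=(0.05, 1.0), method="bounded")
        print("true range:", true_range, "pairwise composite likelihood estimate:", round(res.x, 3))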

  12. A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure.

    Science.gov (United States)

    Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L

    2018-01-01

    We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.

  13. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machines failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing faults detection based on bearing faults characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and fault indicator has been derived for fault severity measurement. The proposed bearing faults detection approach is assessed using simulated stator currents data, issued from a coupled electromagnetic circuits approach for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
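
    The paper's multi-dimensional MUSIC estimator is not reproduced here, but the core MUSIC mechanism it builds on, projecting candidate frequencies onto the noise subspace of a sample covariance matrix, can be sketched in one dimension. The simulated "stator current", fault-related frequency, and all tuning values below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(6)

        fs, n = 1000.0, 2000
        t = np.arange(n) / fs
        # Simulated current: 50 Hz fundamental plus a weak fault-related component (values illustrative).
        sig = (np.cos(2 * np.pi * 50.0 * t) + 0.05 * np.cos(2 * np.pi * 62.0 * t)
               + 0.1 * rng.standard_normal(n))

        m, p = 60, 4                                    # snapshot length; signal-subspace dim (2 per real sinusoid)
        snapshots = np.array([sig[i:i + m] for i in range(n - m)])
        R = snapshots.T @ snapshots / snapshots.shape[0]        # sample covariance matrix

        eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
        noise_space = eigvec[:, : m - p]                # eigenvectors spanning the noise subspace

        freqs = np.linspace(40.0, 75.0, 701)
        k = np.arange(m)
        pseudo = np.empty(freqs.size)
        for idx, f in enumerate(freqs):
            a = np.exp(2j * np.pi * f * k / fs)         # steering vector at candidate frequency f
            proj = noise_space.conj().T @ a
            pseudo[idx] = 1.0 / np.real(np.vdot(proj, proj))

        top = np.sort(freqs[np.argsort(pseudo)[-5:]])
        print("frequencies near the largest pseudo-spectrum peaks (Hz):", top.round(1))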

  14. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    Science.gov (United States)

    2015-08-01


  15. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

    Full Text Available Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, provided useful tools for analyzing the nature of floods. Accordingly, this study attempts to estimate a flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using the GLUE depended on the selection of likelihood measures. Consideration of physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed by using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at a planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
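
    A generic GLUE loop (not the 1D hydraulic model, Landsat processing, or likelihood measures used in the study; the toy "model", observation and thresholds below are invented purely to show the mechanics) samples parameter sets, scores each with an informal likelihood measure, retains the behavioural sets, and reports likelihood-weighted bounds on the predicted discharge:

        import numpy as np

        rng = np.random.default_rng(7)

        def toy_model(theta):
            """Toy 'hydraulic model': predicted discharge from a single roughness-like parameter."""
            return 1200.0 / theta                        # purely illustrative relationship

        obs = 2400.0                                     # "observed" discharge (illustrative)
        thetas = rng.uniform(0.3, 1.0, 5000)             # sampled parameter sets
        sims = toy_model(thetas)

        # Informal likelihood measure: inverse squared error; keep the best 10% as behavioural sets.
        L = 1.0 / (1e-6 + (sims - obs) ** 2)
        keep = L > np.quantile(L, 0.90)
        w = L[keep] / L[keep].sum()

        order = np.argsort(sims[keep])
        cdf = np.cumsum(w[order])
        q05 = sims[keep][order][np.searchsorted(cdf, 0.05)]
        q95 = sims[keep][order][np.searchsorted(cdf, 0.95)]
        print(f"GLUE 5-95% discharge bounds: {q05:.0f} - {q95:.0f} m^3/s")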

  16. User's guide: Nimbus-7 Earth radiation budget narrow-field-of-view products. Scene radiance tape products, sorting into angular bins products, and maximum likelihood cloud estimation products

    Science.gov (United States)

    Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard

    1990-01-01

    The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.

  17. Application of asymptotic expansions for maximum likelihood estimators' errors to gravitational waves from inspiraling binary systems: The network case

    International Nuclear Information System (INIS)

    Vitale, Salvatore; Zanolin, Michele

    2011-01-01

    This paper describes the most accurate analytical frequentist assessment to date of the uncertainties in the estimation of physical parameters from gravitational waves generated by nonspinning binary systems and Earth-based networks of laser interferometers. The paper quantifies how the accuracy in estimating the intrinsic parameters mostly depends on the network signal to noise ratio (SNR), but the resolution in the direction of arrival also strongly depends on the network geometry. We compare results for six different existing and possible global networks and two different choices of the parameter space. We show how the fraction of the sky where the one sigma angular resolution is below 2 square degrees increases about 3 times when transitioning from the Hanford (USA), Livingston (USA) and Cascina (Italy) network to a network made of five interferometers (while keeping the network SNR fixed). The technique adopted here is an asymptotic expansion of the uncertainties in inverse powers of the SNR where the first order is the inverse Fisher information matrix. We show that the commonly employed approach of using a simplified parameter space and only the Fisher information matrix can largely underestimate the uncertainties (the combined effect would lead to a factor of 7 for the one sigma sky uncertainty in square degrees at a network SNR of 15).

  18. Body composition estimation from selected slices

    DEFF Research Database (Denmark)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara

    2017-01-01

    Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total...

  19. Estimating Quartz Reserves Using Compositional Kriging

    Directory of Open Access Journals (Sweden)

    J. Taboada

    2013-01-01

    Full Text Available The aim of this study was to determine the spatial distribution and volume of four commercial quartz grades, namely silicon metal, ferrosilicon, aggregate, and kaolin (depending on content of impurities), in a quartz seam. The chemical and mineralogical composition of the reserves in the seam was determined from samples collected from outcrops, blasting operations, and exploratory drilling, and compositional kriging was used to calculate the volume and distribution of the reserves. A more accurate knowledge of the deposit ensures better mine planning, leading to higher profitability and an improved relationship with the environment.

  20. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    Science.gov (United States)

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
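
    The final estimation step described in the abstract is a constrained convex problem. A minimal sketch under a simple Gaussian-noise assumption (so the ML criterion reduces to least squares) estimates non-negative, sum-to-one mixture weights of known basis intensities; the basis shapes, true weights, and noise level below are made up and are not SAXS data or the paper's CRB-based subset selection:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)

        q = np.linspace(0.01, 0.5, 200)
        # Made-up basis intensities for three "known conformations" (not real SAXS curves).
        basis = np.column_stack([np.exp(-(q * s) ** 2 / 3.0) for s in (20.0, 35.0, 50.0)])

        w_true = np.array([0.6, 0.3, 0.1])                       # true relative abundances
        y = basis @ w_true + 0.01 * rng.standard_normal(q.size)  # noisy mixture intensity

        def objective(w):
            # Under Gaussian noise the ML criterion reduces to least squares in the weights.
            return np.sum((y - basis @ w) ** 2)

        constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
        res = minimize(objective, x0=np.full(3, 1.0 / 3.0), method="SLSQP",
                       bounds=[(0.0, 1.0)] * 3, constraints=constraints)
        print("true weights:", w_true, "estimated:", res.x.round(3))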

  1. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    Science.gov (United States)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations made on the same units from time to time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the error variance components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
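
    Using the fitted random effects model is then a matter of plugging levels into the reported coefficients. The sketch below assumes X1 and X2 denote the SO4 and NO3 levels respectively (the abstract does not state the mapping explicitly), and the input values are hypothetical:

        # Fitted model reported in the abstract: Y* = 0.41276446 - 0.00107302*X1 + 0.00215470*X2.
        # Assumption (not stated in the abstract): X1 is the SO4 level and X2 the NO3 level.
        def predict_response(x1_so4, x2_no3):
            return 0.41276446 - 0.00107302 * x1_so4 + 0.00215470 * x2_no3

        # Hypothetical input levels, purely for illustration.
        print(predict_response(x1_so4=10.0, x2_no3=5.0))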

  2. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

    Directory of Open Access Journals (Sweden)

    Luan Yihui

    2009-09-01

    Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

  3. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach.

    Science.gov (United States)

    Wang, Wenhui; Nunez-Iglesias, Juan; Luan, Yihui; Sun, Fengzhu

    2009-09-03

    Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

  4. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Science.gov (United States)

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  5. Cost estimates to guide manufacturing of composite waved beam

    International Nuclear Information System (INIS)

    Ye Jinrui; Zhang Boming; Qi Haiming

    2009-01-01

    A cost estimation model based on the manufacturing process is presented. In the model, the effects of material, labor, tooling and equipment are discussed, and the corresponding formulas are provided. A method of selecting estimation variables is provided based on a case study of a composite waved beam using autoclave cure. The model parameters related to the process time estimation of the lay-up procedure were analyzed and modified for different part configurations. The results show little error when the estimated process time is compared with the actual one. The model is verified to be applicable for guiding the design and manufacturing of composite materials.

  6. Chinook Bycatch - Contemporary Salmon Genetic Stock Composition Estimates

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The purpose of this project is to measure and monitor impacts on ESA-listed populations and to estimate overall Chinook salmon stock composition in bycatch...

  7. Estimation of effective thermal conductivity tensor from composite microstructure images

    International Nuclear Information System (INIS)

    Thomas, M; Boyard, N; Jarny, Y; Delaunay, D

    2008-01-01

    The determination of the effective thermal properties of inhomogeneous materials is a long-standing problem of continuing interest. The impressive number of methods developed to measure or estimate the thermal properties of composite materials clearly shows the importance attached to their knowledge. Homogenization models are a cheap way to determine or predict them. Many different homogenization approaches have been developed, but the latest advances are credited to numerical methods. In this study, a new computational model is developed to estimate the 2D thermal conductivity tensor and the thermal main directions of a pure carbon/epoxy unidirectional composite. This tool is based on real composite microstructure images.

  8. Advanced Composite Air Frame Life Cycle Cost Estimating

    Science.gov (United States)

    2014-06-19

    the ACCA based on the cost. This cost analysis takes into account the increased performance parameters of the new airframe structure. This research ... Advanced Composite Cargo Aircraft (ACCA) ... Cost Estimation ... establishing the procurement strategies and life cycle cost (LCC) model cost estimations. The current LCC models do not take into account the potential cost

  9. Quantifying the Strength of General Factors in Psychopathology: A Comparison of CFA with Maximum Likelihood Estimation, BSEM, and ESEM/EFA Bifactor Approaches.

    Science.gov (United States)

    Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis

    2018-05-22

    Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).

  10. Data assimilation and uncertainty analysis of environmental assessment problems--an application of Stochastic Transfer Function and Generalised Likelihood Uncertainty Estimation techniques

    International Nuclear Information System (INIS)

    Romanowicz, Renata; Young, Peter C.

    2003-01-01

    Stochastic Transfer Function (STF) and Generalised Likelihood Uncertainty Estimation (GLUE) techniques are outlined and applied to an environmental problem concerned with marine dose assessment. The goal of both methods in this application is the estimation and prediction of the environmental variables, together with their associated probability distributions. In particular, they are used to estimate the amount of radionuclides transferred to marine biota from a given source: the British Nuclear Fuel Ltd (BNFL) repository plant in Sellafield, UK. The complexity of the processes involved, together with the large dispersion and scarcity of observations regarding radionuclide concentrations in the marine environment, requires efficient data assimilation techniques. In this regard, the basic STF methods search for identifiable, linear model structures that capture the maximum amount of information contained in the data with a minimal parameterisation. They can be extended for on-line use, based on recursively updated Bayesian estimation and, although applicable to only constant or time-variable parameter (non-stationary) linear systems in the form used in this paper, they have the potential for application to non-linear systems using recently developed State Dependent Parameter (SDP) non-linear STF models. The GLUE-based methods, on the other hand, formulate the problem of estimation using a more general Bayesian approach, usually without prior statistical identification of the model structure. As a result, they are applicable to almost any linear or non-linear stochastic model, although they are much less efficient both computationally and in their use of the information contained in the observations. As expected in this particular environmental application, it is shown that the STF methods give much narrower confidence limits for the estimates due to their more efficient use of the information contained in the data. Exploiting Monte Carlo Simulation (MCS) analysis
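
    The GLUE part of the record follows a generic recipe: sample parameter sets from a prior by Monte Carlo, score each with an informal likelihood measure, reject non-behavioural sets, and form likelihood-weighted predictions. A minimal sketch of that loop is shown below; the decay model, the uniform prior, the inverse-error-variance likelihood and the 90% behavioural cut-off are placeholder assumptions, not the marine dose assessment model used in the study.

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder first-order transfer (decay) model standing in for the actual
        # radionuclide transfer model, which is not given in the record.
        def model(k, t):
            return np.exp(-k * t)

        t_obs = np.linspace(0.0, 10.0, 20)
        y_obs = model(0.3, t_obs) + rng.normal(0.0, 0.02, t_obs.size)   # synthetic observations

        # 1. Monte Carlo sampling of the parameter from a (uniform) prior.
        k_samples = rng.uniform(0.01, 1.0, 5000)
        sims = np.array([model(k, t_obs) for k in k_samples])

        # 2. Informal likelihood measure (inverse error variance here) and rejection
        #    of non-behavioural parameter sets.
        likelihood = 1.0 / ((sims - y_obs) ** 2).sum(axis=1)
        keep = likelihood > np.quantile(likelihood, 0.90)

        # 3. Likelihood-weighted prediction and simple 5-95% bounds from the retained
        #    sets (a full GLUE analysis would weight the quantiles by likelihood too).
        w = likelihood[keep] / likelihood[keep].sum()
        mean_pred = w @ sims[keep]
        lower = np.quantile(sims[keep], 0.05, axis=0)
        upper = np.quantile(sims[keep], 0.95, axis=0)
        print(mean_pred[:3], lower[:3], upper[:3])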

  11. The composition of engineered cartilage at the time of implantation determines the likelihood of regenerating tissue with a normal collagen architecture.

    Science.gov (United States)

    Nagel, Thomas; Kelly, Daniel J

    2013-04-01

    The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state, indicating a possible predisposition for osteoarthritic changes. These negative alterations were alleviated by the implantation of tissue-engineered cartilage, where a mature implant was predicted to result in the formation of a more native-like collagen architecture than immature implants. The results of this study highlight the importance of cartilage graft functionality to maintain and/or re-establish joint function and suggest that engineering a tissue with a native depth-dependent composition may facilitate the establishment of a normal Benninghoff collagen architecture after implantation into load-bearing defects.

  12. Using physical properties of molten glass to estimate glass composition

    International Nuclear Information System (INIS)

    Choi, Kwan Sik; Yang, Kyoung Hwa; Park, Jong Kil

    1997-01-01

    A vitrification process is under development in KEPRI for the treatment of low- and medium-level radioactive waste. Although the project is for developing and building a Vitrification Pilot Plant in Korea, one of KEPRI's concerns is the quality control of the vitrified glass. This paper discusses a methodology for the estimation of glass composition by on-line measurement of molten glass properties, which could be applied to the plant for real-time quality control of the glass product. By remotely measuring the viscosity and density of the molten glass, glass characteristics such as composition can be estimated and eventually controlled. For this purpose, using a database of glass composition vs. physical properties in the isothermal three-component system SiO2-Na2O-B2O3, a software package TERNARY has been developed which determines the glass composition from two known physical properties (e.g. density and viscosity).
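
    The estimation step described above amounts to inverting a property database: given a measured density and viscosity, find the composition whose stored properties are closest. A minimal nearest-neighbour sketch is given below; the database values and the property scaling are placeholders for illustration and do not reproduce the TERNARY software or its data.

        import numpy as np

        # Hypothetical database rows: SiO2, Na2O, B2O3 weight fractions, density (g/cm3),
        # log10 viscosity. Values are placeholders; the real TERNARY database is not reproduced here.
        db = np.array([
            [0.60, 0.20, 0.20, 2.45, 2.1],
            [0.55, 0.25, 0.20, 2.48, 1.8],
            [0.65, 0.15, 0.20, 2.42, 2.5],
            [0.50, 0.20, 0.30, 2.40, 1.6],
        ])

        def estimate_composition(density, log_visc):
            """Pick the stored composition whose (density, viscosity) pair is closest
            to the measured one, after scaling each property to unit range."""
            props = db[:, 3:]
            scale = props.max(axis=0) - props.min(axis=0)
            d = np.linalg.norm((props - [density, log_visc]) / scale, axis=1)
            return db[np.argmin(d), :3]

        print(estimate_composition(2.46, 2.0))   # -> closest stored SiO2/Na2O/B2O3 mix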

  13. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    Directory of Open Access Journals (Sweden)

    Edwin J. Niklitschek

    2016-10-01

    Full Text Available Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI < 0.13, SE < 0
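
    When the baseline nursery signatures are treated as known, the mixing proportions can be estimated by maximising the mixed-stock likelihood over the simplex. A minimal sketch of that case is given below, assuming multivariate normal baseline signatures and simulated data; the means, covariances and softmax parameterisation are illustrative assumptions, not the authors' ML-MM implementation.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)

        # Hypothetical baseline signatures for 3 nursery sources (2 otolith elements).
        means = [np.array([0.0, 0.0]), np.array([2.0, 0.5]), np.array([0.5, 2.5])]
        covs = [np.eye(2) * 0.3] * 3
        true_p = np.array([0.5, 0.3, 0.2])

        # Simulated mixed-stock sample drawn from the true mixture.
        labels = rng.choice(3, size=400, p=true_p)
        mixed = np.array([rng.multivariate_normal(means[k], covs[k]) for k in labels])

        dens = np.column_stack([multivariate_normal(m, c).pdf(mixed) for m, c in zip(means, covs)])

        def neg_loglik(z):
            p = np.exp(z - z.max())
            p /= p.sum()                      # softmax keeps proportions on the simplex
            return -np.log(dens @ p).sum()

        res = minimize(neg_loglik, np.zeros(3), method="Nelder-Mead")
        p_hat = np.exp(res.x - res.x.max()); p_hat /= p_hat.sum()
        print(p_hat)   # should be close to true_p for well-separated signatures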

  15. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    Science.gov (United States)

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI nursery signatures improved reliability

  16. Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars

    Science.gov (United States)

    Martínez Ledesma, M.; Diaz, M. A.

    2017-12-01

    The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to probe the ionosphere. This radar system determines the plasma parameters by sending powerful electromagnetic pulses into the ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless, in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. Of particular relevance is the ion composition and temperature ambiguity between the F1 and the lower F2 layers. In this case very similar signals are obtained with different mixtures of molecular ions (NO2+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to this problem is the use of empirical or theoretical models of the ionosphere in the fitting of ambiguous data. More recent works make use of parameters estimated from the Plasma Line band of the radar to reduce the number of parameters to be determined. In this work we propose to determine the estimation error of the ion composition ambiguity when using Plasma Line electron density measurements. The sensitivity of the ion composition estimation has also been calculated as a function of the accuracy of the ionospheric model, showing that correct estimation is highly dependent on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal-to-noise ratios (SNR) have been done to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation for different signal fluctuations. It can also be used as an empirical method to compare the efficiency of different algorithms and methods when solving the ion composition ambiguity.

  17. Neuroanatomical substrates of action perception and understanding: an anatomic likelihood estimation meta-analysis of lesion-symptom mapping studies in brain injured patients.

    Directory of Open Access Journals (Sweden)

    Cosimo eUrgesi

    2014-05-01

    Full Text Available Several neurophysiologic and neuroimaging studies suggested that motor and perceptual systems are tightly linked along a continuum rather than providing segregated mechanisms supporting different functions. Using correlational approaches, these studies demonstrated that action observation activates not only visual but also motor brain regions. On the other hand, brain stimulation and brain lesion evidence allows tackling the critical question of whether our action representations are necessary to perceive and understand others’ actions. In particular, recent neuropsychological studies have shown that patients with temporal, parietal and frontal lesions exhibit a number of possible deficits in the visual perception and the understanding of others’ actions. The specific anatomical substrates of such neuropsychological deficits however are still a matter of debate. Here we review the existing literature on this issue and perform an anatomic likelihood estimation meta-analysis of studies using lesion-symptom mapping methods on the causal relation between brain lesions and non-linguistic action perception and understanding deficits. The meta-analysis encompassed data from 361 patients tested in 11 studies and identified regions in the inferior frontal cortex, the inferior parietal cortex and the middle/superior temporal cortex, whose damage is consistently associated with poor performance in action perception and understanding tasks across studies. Interestingly, these areas correspond to the three nodes of the action observation network that are strongly activated in response to visual action perception in neuroimaging research and that have been targeted in previous brain stimulation studies. Thus, brain lesion mapping research provides converging causal evidence that premotor, parietal and temporal regions play a crucial role in action recognition and understanding.

  18. Voxelwise meta-analysis of gray matter anomalies in progressive supranuclear palsy and Parkinson’s disease using anatomic likelihood estimation

    Directory of Open Access Journals (Sweden)

    Huifang eShang

    2014-02-01

    Full Text Available Numerous voxel-based morphometry (VBM) studies on gray matter (GM) of patients with progressive supranuclear palsy (PSP) and Parkinson’s disease (PD) have been conducted separately. Identifying the different neuroanatomical changes in GM resulting from PSP and PD through meta-analysis will aid the differential diagnosis of PSP and PD. In this study, a systematic review of VBM studies of patients with PSP and PD relative to healthy controls (HC) in the Embase and PubMed databases from January 1995 to April 2013 was conducted. The anatomical distribution of the coordinates of GM differences was meta-analyzed using anatomical likelihood estimation. Separate maps of GM changes were constructed and subtraction meta-analysis was performed to explore the differences in GM abnormalities between PSP and PD. Nine PSP studies and 24 PD studies were included. GM reductions were present in the bilateral thalamus, basal ganglia, midbrain, insular cortex and inferior frontal gyrus, and left precentral gyrus and anterior cingulate gyrus in PSP. Atrophy of GM was concentrated in the bilateral middle and inferior frontal gyrus, precuneus, left precentral gyrus, middle temporal gyrus, right superior parietal lobule, and right cuneus in PD. Subtraction meta-analysis indicated that GM volume was lesser in the bilateral midbrain, thalamus, and insula in PSP compared with that in PD. Our meta-analysis indicated that PSP and PD shared a similar distribution of neuroanatomical changes in the frontal lobe, including inferior frontal gyrus and precentral gyrus, and that atrophy of the midbrain, thalamus, and insula are neuroanatomical markers for differentiating PSP from PD.

  19. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...

  20. Assessing Error Correlations in Remote Sensing-Based Estimates of Forest Attributes for Improved Composite Estimation

    Directory of Open Access Journals (Sweden)

    Sarah Ehlers

    2018-04-01

    Full Text Available Today, inexpensive remote sensing (RS) data from different sensors and platforms can be obtained at short intervals and be used for assessing several kinds of forest characteristics at the level of plots, stands and landscapes. Methods such as composite estimation and data assimilation can be used for combining the different sources of information to obtain up-to-date and precise estimates of the characteristics of interest. In composite estimation a standard procedure is to assign weights to the different individual estimates inversely proportional to their variance. However, if the estimates are correlated, the correlations must be considered in assigning weights or otherwise a composite estimator may be inefficient and its variance be underestimated. In this study we assessed the correlation of plot level estimates of forest characteristics from different RS datasets, between assessments using the same type of sensor as well as across different sensors. The RS data evaluated were SPOT-5 multispectral data, 3D airborne laser scanning data, and TanDEM-X interferometric radar data. Studies were made for plot level mean diameter, mean height, and growing stock volume. All data were acquired from a test site dominated by coniferous forest in southern Sweden. We found that the correlation between plot level estimates based on the same type of RS data was positive and strong, whereas the correlations between estimates using different sources of RS data were not as strong, and weaker for mean height than for mean diameter and volume. The implications of such correlations in composite estimation are demonstrated and it is discussed how correlations may affect results from data assimilation procedures.
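
    The point about correlated estimates can be made concrete with the standard minimum-variance combination of two correlated, unbiased estimates of the same plot-level attribute. The sketch below uses this textbook formula with made-up numbers; it is not the estimator evaluated in the study, but it shows why ignoring the correlation understates the combined variance.

        import math

        def composite(x1, v1, x2, v2, rho):
            """Minimum-variance combination of two correlated, unbiased estimates
            x1 and x2 with variances v1, v2 and correlation rho (standard result)."""
            c = rho * math.sqrt(v1 * v2)          # covariance between the two estimates
            w = (v2 - c) / (v1 + v2 - 2 * c)      # weight on the first estimate
            est = w * x1 + (1 - w) * x2
            var = (v1 * v2 - c * c) / (v1 + v2 - 2 * c)
            return est, var

        # Illustrative plot-level volume estimates (m3/ha) from two RS sources.
        print(composite(210.0, 900.0, 190.0, 1600.0, 0.0))   # uncorrelated case
        print(composite(210.0, 900.0, 190.0, 1600.0, 0.6))   # correlated: larger combined variance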

  1. Evaluation of penalized likelihood estimation reconstruction on a digital time-of-flight PET/CT scanner for 18F-FDG whole-body examinations.

    Science.gov (United States)

    Lindström, Elin; Sundin, Anders; Trampal, Carlos; Lindsjö, Lars; Ilan, Ezgi; Danfors, Torsten; Antoni, Gunnar; Sörensen, Jens; Lubberink, Mark

    2018-02-15

    Resolution and quantitative accuracy of positron emission tomography (PET) are highly influenced by the reconstruction method. Penalized likelihood estimation algorithms allow for fully convergent iterative reconstruction, generating a higher image contrast while limiting noise compared to ordered subsets expectation maximization (OSEM). In this study, block-sequential regularized expectation maximization (BSREM) was compared to time-of-flight OSEM (TOF-OSEM). Various strengths of noise penalization factor β were tested along with scan durations and transaxial field of views (FOVs) with the aim to evaluate the performance and clinical use of BSREM for 18F-FDG PET-computed tomography (CT), both in quantitative terms and in a qualitative visual evaluation. Methods: Eleven clinical whole-body 18F-FDG PET/CT examinations acquired on a digital TOF PET/CT scanner were included. The data were reconstructed using BSREM with point spread function (PSF) recovery and β 133, 267, 400 and 533, and TOF-OSEM with PSF, for various acquisition times/bed position (bp) and FOVs. Noise, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and standardized uptake values (SUVs) were analysed. A blinded visual image quality evaluation, rating several aspects, performed by two nuclear medicine physicians complemented the analysis. Results: The lowest levels of noise were reached with the highest β resulting in the highest SNR, which in turn resulted in the lowest SBR. Noise equivalence to TOF-OSEM was found with β 400 but produced a significant increase of SUVmax (11%), SNR (22%) and SBR (12%) compared to TOF-OSEM. BSREM with β 533 at decreased acquisition (2 min/bp) was comparable to TOF-OSEM at full acquisition duration (3 min/bp). Reconstructed FOV had an impact on BSREM outcome measures, SNR increased while SBR decreased when shifting FOV from 70 to 50 cm. The visual image quality evaluation resulted in similar scores for reconstructions although β 400 obtained the

  2. Predicting Medical Students’ Current Attitudes Toward Psychiatry, Interest in Psychiatry, and Estimated Likelihood of Working in Psychiatry: A Cross-Sectional Study in Four European Countries

    Directory of Open Access Journals (Sweden)

    Ingeborg Warnke

    2018-03-01

    Full Text Available Psychiatry as a medical discipline is becoming increasingly important due to the high and increasing worldwide burden associated with mental disorders. Surprisingly, however, there is a lack of young academics choosing psychiatry as a career. Previous evidence on medical students’ perspectives is abundant but has methodological shortcomings. Therefore, by attempting to avoid previous shortcomings, we aimed to contribute to a better understanding of the predictors of the following three outcome variables: current medical students’ attitudes toward psychiatry, interest in psychiatry, and estimated likelihood of working in psychiatry. The sample consisted of N = 1,356 medical students at 45 medical schools in Germany and Austria as well as regions of Switzerland and Hungary with a German language curriculum. We used snowball sampling via Facebook with a link to an online questionnaire as recruitment procedure. Snowball sampling is based on referrals made among people. This questionnaire included a German version of the Attitudes Toward Psychiatry Scale (ATP-30-G) and further variables related to outcomes and potential predictors in terms of sociodemography (e.g., gender) or medical training (e.g., curriculum-related experience with psychiatry). Data were analyzed by linear mixed models and further regression models. On average, students had a positive attitude to and high general interest in, but low professional preference for, psychiatry. A neutral attitude to psychiatry was partly related to the discipline itself, psychiatrists, or psychiatric patients. Female gender and previous experience with psychiatry, particularly curriculum-related and personal experience, were important predictors of all outcomes. Students in the first years of medical training were more interested in pursuing psychiatry as a career. Furthermore, the country of the medical school was related to the outcomes. However, statistical models explained only a small

  3. Predicting Medical Students’ Current Attitudes Toward Psychiatry, Interest in Psychiatry, and Estimated Likelihood of Working in Psychiatry: A Cross-Sectional Study in Four European Countries

    Science.gov (United States)

    Warnke, Ingeborg; Gamma, Alex; Buadze, Maria; Schleifer, Roman; Canela, Carlos; Strebel, Bernd; Tényi, Tamás; Rössler, Wulf; Rüsch, Nicolas; Liebrenz, Michael

    2018-01-01

    Psychiatry as a medical discipline is becoming increasingly important due to the high and increasing worldwide burden associated with mental disorders. Surprisingly, however, there is a lack of young academics choosing psychiatry as a career. Previous evidence on medical students’ perspectives is abundant but has methodological shortcomings. Therefore, by attempting to avoid previous shortcomings, we aimed to contribute to a better understanding of the predictors of the following three outcome variables: current medical students’ attitudes toward psychiatry, interest in psychiatry, and estimated likelihood of working in psychiatry. The sample consisted of N = 1,356 medical students at 45 medical schools in Germany and Austria as well as regions of Switzerland and Hungary with a German language curriculum. We used snowball sampling via Facebook with a link to an online questionnaire as recruitment procedure. Snowball sampling is based on referrals made among people. This questionnaire included a German version of the Attitudes Toward Psychiatry Scale (ATP-30-G) and further variables related to outcomes and potential predictors in terms of sociodemography (e.g., gender) or medical training (e.g., curriculum-related experience with psychiatry). Data were analyzed by linear mixed models and further regression models. On average, students had a positive attitude to and high general interest in, but low professional preference for, psychiatry. A neutral attitude to psychiatry was partly related to the discipline itself, psychiatrists, or psychiatric patients. Female gender and previous experience with psychiatry, particularly curriculum-related and personal experience, were important predictors of all outcomes. Students in the first years of medical training were more interested in pursuing psychiatry as a career. Furthermore, the country of the medical school was related to the outcomes. However, statistical models explained only a small proportion of variance

  4. Predicting Medical Students' Current Attitudes Toward Psychiatry, Interest in Psychiatry, and Estimated Likelihood of Working in Psychiatry: A Cross-Sectional Study in Four European Countries.

    Science.gov (United States)

    Warnke, Ingeborg; Gamma, Alex; Buadze, Maria; Schleifer, Roman; Canela, Carlos; Strebel, Bernd; Tényi, Tamás; Rössler, Wulf; Rüsch, Nicolas; Liebrenz, Michael

    2018-01-01

    Psychiatry as a medical discipline is becoming increasingly important due to the high and increasing worldwide burden associated with mental disorders. Surprisingly, however, there is a lack of young academics choosing psychiatry as a career. Previous evidence on medical students' perspectives is abundant but has methodological shortcomings. Therefore, by attempting to avoid previous shortcomings, we aimed to contribute to a better understanding of the predictors of the following three outcome variables: current medical students' attitudes toward psychiatry, interest in psychiatry, and estimated likelihood of working in psychiatry. The sample consisted of N  = 1,356 medical students at 45 medical schools in Germany and Austria as well as regions of Switzerland and Hungary with a German language curriculum. We used snowball sampling via Facebook with a link to an online questionnaire as recruitment procedure. Snowball sampling is based on referrals made among people. This questionnaire included a German version of the Attitudes Toward Psychiatry Scale (ATP-30-G) and further variables related to outcomes and potential predictors in terms of sociodemography (e.g., gender) or medical training (e.g., curriculum-related experience with psychiatry). Data were analyzed by linear mixed models and further regression models. On average, students had a positive attitude to and high general interest in, but low professional preference for, psychiatry. A neutral attitude to psychiatry was partly related to the discipline itself, psychiatrists, or psychiatric patients. Female gender and previous experience with psychiatry, particularly curriculum-related and personal experience, were important predictors of all outcomes. Students in the first years of medical training were more interested in pursuing psychiatry as a career. Furthermore, the country of the medical school was related to the outcomes. However, statistical models explained only a small proportion of variance. The

  5. A bootstrap estimation scheme for chemical compositional data with nondetects

    Science.gov (United States)

    Palarea-Albaladejo, J; Martín-Fernández, J.A; Olea, Ricardo A.

    2014-01-01

    The bootstrap method is commonly used to estimate the distribution of estimators and their associated uncertainty when explicit analytic expressions are not available or are difficult to obtain. It has been widely applied in environmental and geochemical studies, where the data generated often represent parts of a whole, typically chemical concentrations. This kind of constrained data is generically called compositional data, and it requires specialised statistical methods to properly account for its particular covariance structure. On the other hand, it is not unusual in practice that those data contain labels denoting nondetects, that is, concentrations falling below detection limits. Nondetects impede the implementation of the bootstrap and represent an additional source of uncertainty that must be taken into account. In this work, a bootstrap scheme is devised that handles nondetects by adding an imputation step within the resampling process and conveniently propagates their associated uncertainty. In doing so, it considers the constrained relationships between chemical concentrations originated from their compositional nature. Bootstrap estimates using a range of imputation methods, including new stochastic proposals, are compared across scenarios of increasing difficulty. They are formulated to meet compositional principles following the log-ratio approach, and an adjustment is introduced in the multivariate case to deal with nonclosed samples. Results suggest that nondetect bootstrap based on model-based imputation is generally preferable. A robust approach based on isometric log-ratio transformations appears to be particularly suited in this context. Computer routines in the R statistical programming language are provided.
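
    The resampling-with-imputation idea can be sketched as follows. The study provides R routines and favours model-based, stochastic imputation; the Python sketch below instead substitutes a simple fraction-of-detection-limit replacement and a clr-based summary statistic purely for illustration, so the imputation rule, detection limits and data are all assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        def impute_and_close(x, dl):
            """Replace nondetects (coded as np.nan) by 0.65*DL -- a simple substitution
            stand-in for the model-based imputation preferred in the study -- then
            rescale so each composition sums to 1."""
            y = np.where(np.isnan(x), 0.65 * dl, x)
            return y / y.sum(axis=1, keepdims=True)

        def clr_mean(x):
            logx = np.log(x)
            return (logx - logx.mean(axis=1, keepdims=True)).mean(axis=0)

        # Toy 3-part concentrations with detection limits; nan marks a nondetect.
        data = np.array([[0.20, 0.50, np.nan],
                         [0.30, 0.40, 0.25],
                         [np.nan, 0.60, 0.30],
                         [0.25, 0.45, 0.28]])
        dl = np.array([0.05, 0.05, 0.05])

        boot = []
        for _ in range(1000):
            idx = rng.integers(0, len(data), len(data))      # resample rows with replacement
            boot.append(clr_mean(impute_and_close(data[idx], dl)))
        boot = np.array(boot)
        print(boot.mean(axis=0), boot.std(axis=0))           # bootstrap estimate and uncertainty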

  6. Refinement of a Bias-Correction Procedure for the Weighted Likelihood Estimator of Ability. Research Report. ETS RR-07-23

    Science.gov (United States)

    Zhang, Jinming; Lu, Ting

    2007-01-01

    In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…

  7. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  8. Estimate of body composition by Hume's equation: validation with DXA.

    Science.gov (United States)

    Carnevale, Vincenzo; Piscitelli, Pamela Angela; Minonne, Rita; Castriotta, Valeria; Cipriani, Cristiana; Guglielmi, Giuseppe; Scillitani, Alfredo; Romagnoli, Elisabetta

    2015-05-01

    We investigated how Hume's equation, using the antipyrine space, could perform in estimating fat mass (FM) and lean body mass (LBM). In 100 subjects (40 male and 60 female), we estimated FM and LBM by the equation and compared these values with those measured by a last-generation DXA device. The correlation coefficients between measured and estimated values were r = 0.940 for FM and r = 0.913 for LBM, though the equation underestimated FM and overestimated LBM with respect to DXA. The mean difference for FM was 1.40 kg (limits of agreement of -6.54 and 8.37 kg). For LBM, the mean difference with respect to DXA was 1.36 kg (limits of agreement -8.26 and 6.52 kg). The root mean square error was 3.61 kg for FM and 3.56 kg for LBM. Our results show that in clinically stable subjects Hume's equation could reliably assess body composition, and the estimated FM and LBM approached those measured by a modern DXA device.
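
    The agreement statistics quoted above (mean difference, limits of agreement, root mean square error) are straightforward to compute; a sketch with simulated values standing in for the patient data is given below (the simulated bias and scatter are arbitrary and only illustrate the calculation).

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulated fat-mass values (kg): "measured" by DXA and "estimated" by an equation.
        dxa = rng.normal(25.0, 8.0, 100)
        est = dxa - 1.4 + rng.normal(0.0, 3.5, 100)    # built-in bias and scatter, illustrative only

        diff = est - dxa
        mean_diff = diff.mean()
        loa = (mean_diff - 1.96 * diff.std(ddof=1),    # Bland-Altman limits of agreement
               mean_diff + 1.96 * diff.std(ddof=1))
        rmse = np.sqrt(np.mean(diff ** 2))

        print(f"mean difference {mean_diff:.2f} kg, limits of agreement {loa[0]:.2f} to {loa[1]:.2f} kg")
        print(f"root mean square error {rmse:.2f} kg")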

  9. Logic of likelihood

    International Nuclear Information System (INIS)

    Wall, M.J.W.

    1992-01-01

    The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory which predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset, with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs

  10. The phylogenetic likelihood library.

    Science.gov (United States)

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  11. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  12. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics.

    Science.gov (United States)

    Helaers, Raphaël; Milinkovitch, Michel C

    2010-07-15

    The development, in the last decade, of stochastic heuristics implemented in robust application softwares has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA) together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs in 32 and 64-bits systems, and takes advantage of multiprocessor and multicore computers. The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2.0 gives access both to high

  13. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics

    Directory of Open Access Journals (Sweden)

    Milinkovitch Michel C

    2010-07-01

    Full Text Available Abstract Background The development, in the last decade, of stochastic heuristics implemented in robust application softwares has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA) together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs in 32 and 64-bits systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these

  14. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    Science.gov (United States)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
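
    The quoted probabilities follow from the Poisson model, where the chance of at least one event in the decade is 1 - exp(-lambda) for a mean decadal rate lambda. A worked check is shown below; the rate of about 7 events per decade for VEI>=4 comes from the abstract, while the rates used for VEI>=5 and VEI>=6 are illustrative values chosen to reproduce the quoted probabilities.

        import math

        def p_at_least_one(lam):
            """Poisson probability of one or more events in the interval, given mean rate lam."""
            return 1.0 - math.exp(-lam)

        # Decadal rates: ~7 events with VEI>=4 (from the abstract); the VEI>=5 and VEI>=6
        # rates below are illustrative values chosen to match the quoted probabilities.
        for label, lam in [("VEI>=4", 7.0), ("VEI>=5", 0.67), ("VEI>=6", 0.2)]:
            print(f"{label}: P(>=1 event in 2000-2009) = {p_at_least_one(lam):.2f}")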

  15. Regression methodology in groundwater composition estimation with composition predictions for Romuvaara borehole KR10

    Energy Technology Data Exchange (ETDEWEB)

    Luukkonen, A.; Korkealaakso, J.; Pitkaenen, P. [VTT Communities and Infrastructure, Espoo (Finland)

    1997-11-01

    Teollisuuden Voima Oy selected five investigation areas for preliminary site studies (1987–1992). The more detailed site investigation project, launched at the beginning of 1993 and presently supervised by Posiva Oy, is concentrated on three investigation areas. Romuvaara at Kuhmo is one of the present target areas, and the geochemical, structural and hydrological data used in this study are extracted from there. The aim of the study is to develop suitable methods for groundwater composition estimation based on a group of known hydrogeological variables. The input variables used are related to the host type of groundwater, hydrological conditions around the host location, mixing potentials between different types of groundwater, and minerals equilibrated with the groundwater. The output variables are electrical conductivity, Ca, Mg, Mn, Na, K, Fe, Cl, S, HS, SO4, alkalinity, 3H, 14C, 13C, Al, Sr, F, Br and I concentrations, and pH of the groundwater. The methodology is to associate the known hydrogeological conditions (i.e. input variables) with the known water compositions (output variables), and to evaluate mathematical relations between these groups. Output estimations are done with two separate procedures: partial least squares regressions on the principal components of input variables, and by training neural networks with input-output pairs. Coefficients of linear equations and trained networks are optional methods for actual predictions. The quality of output predictions is monitored with confidence limit estimations, evaluated from input variable covariances and output variances, and with charge balance calculations. Groundwater compositions in Romuvaara borehole KR10 are predicted at 10 metre intervals with both prediction methods. 46 refs.
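
    The first of the two prediction routes, partial least squares regression on the principal components of the input variables, can be sketched with scikit-learn as below; the random input-output data stand in for the Romuvaara hydrogeological variables and groundwater compositions, and the numbers of components are arbitrary choices.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)

        # Stand-in data: 60 samples, 8 hydrogeological input variables, 5 composition outputs.
        X = rng.normal(size=(60, 8))
        B = rng.normal(size=(8, 5))
        Y = X @ B + rng.normal(scale=0.5, size=(60, 5))      # synthetic "compositions"

        # PLS regression on the principal components of the inputs.
        model = make_pipeline(PCA(n_components=5), PLSRegression(n_components=3))
        model.fit(X, Y)
        Y_hat = model.predict(X)
        print("RMSE per output:", np.sqrt(((Y - Y_hat) ** 2).mean(axis=0)))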

  16. A photogrammetric methodology for estimating construction and demolition waste composition

    International Nuclear Information System (INIS)

    Heck, H.H.; Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.

    2002-01-01

    Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected is the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile, after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide the picture into eighty cells composed of ten columns and eight rows. The component distribution of each cell was estimated and results were summed to get a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated. One set of distribution factors was developed to correct the volume distributions and the second set was developed to correct the weight distributions. The bulk density of each of the waste components were determined and used to convert waste volumes to weights. (author)
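
    The composition arithmetic described above, per-cell visual volume fractions averaged over the pile and converted to weights with component bulk densities, can be sketched as follows; the cell fractions, bulk densities and pile volume are placeholders, and the study's empirical distribution (correction) factors are omitted.

        import numpy as np

        components = ["wood", "concrete", "drywall", "other"]

        # Hypothetical visual estimates: volume fraction of each component in each of
        # the 80 photo cells (rows = cells); the study used 10 columns x 8 rows per photo.
        cell_fractions = np.random.default_rng(5).dirichlet([2, 1, 1, 1], size=80)

        # Hypothetical bulk densities (kg/m3) used to convert volumes to weights.
        bulk_density = np.array([250.0, 1600.0, 500.0, 300.0])
        pile_volume_m3 = 20.0

        vol_fraction = cell_fractions.mean(axis=0)           # pile-level volume distribution
        weights = vol_fraction * pile_volume_m3 * bulk_density
        weight_fraction = weights / weights.sum()

        for name, vf, wf in zip(components, vol_fraction, weight_fraction):
            print(f"{name:8s} volume fraction {vf:.2f}  weight fraction {wf:.2f}")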

  17. A photogrammetric methodology for estimating construction and demolition waste composition

    Energy Technology Data Exchange (ETDEWEB)

    Heck, H.H. [Florida Inst. of Technology, Dept. of Civil Engineering, Melbourne, Florida (United States); Reinhart, D.R.; Townsend, T.; Seibert, S.; Medeiros, S.; Cochran, K.; Chakrabarti, S.

    2002-06-15

    Manual sorting of construction, demolition, and renovation (C and D) waste is difficult and costly. A photogrammetric method has been developed to analyze the composition of C and D waste that eliminates the need for physical contact with the waste. The only field data collected is the weight and volume of the solid waste in the storage container and a photograph of each side of the waste pile, after it is dumped on the tipping floor. The methodology was developed and calibrated based on manual sorting studies at three different landfills in Florida, where the contents of twenty roll-off containers filled with C and D waste were sorted. The component classifications used were wood, concrete, paper products, drywall, metals, insulation, roofing, plastic, flooring, municipal solid waste, land-clearing waste, and other waste. Photographs of each side of the waste pile were taken with a digital camera and the pictures were analyzed on a computer using Photoshop software. Photoshop was used to divide the picture into eighty cells composed of ten columns and eight rows. The component distribution of each cell was estimated and results were summed to get a component distribution for the pile. Two types of distribution factors were developed that allow the component volumes and weights to be estimated. One set of distribution factors was developed to correct the volume distributions and the second set was developed to correct the weight distributions. The bulk density of each of the waste components were determined and used to convert waste volumes to weights. (author)

  18. School Processes Mediate School Compositional Effects: Model Specification and Estimation

    Science.gov (United States)

    Liu, Hongqiang; Van Damme, Jan; Gielen, Sarah; Van Den Noortgate, Wim

    2015-01-01

    School composition effects have been consistently verified, but few studies ever attempted to study how school composition affects school achievement. Based on prior research findings, we employed multilevel mediation modeling to examine whether school processes mediate the effect of school composition upon school outcomes based on the data of 28…

  19. EKF composition estimation and GMC control of a reactive distillation column

    Science.gov (United States)

    Tintavon, Sirivimon; Kittisupakorn, Paisan

    2017-08-01

    This research work proposes an extended Kalman filter (EKF) estimator to give estimates of the product composition and a generic model controller (GMC) to control the temperature of a reactive distillation column (RDC). One of the major difficulties in controlling the RDC is the large time delay in product composition measurement. Therefore, estimates of the product composition are needed; they are determined from available and reliable tray temperature measurements via the extended Kalman filter (EKF). With these estimates, the GMC controller is applied to control the RDC's temperature. The performance of the EKF estimator under GMC control is evaluated for various disturbances and set-point changes.
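
    The estimator described is an extended Kalman filter that infers an unmeasured composition from a measured tray temperature. A generic predict/update sketch is shown below; the one-state process model, the temperature map and the noise levels are placeholder assumptions, not the RDC model used in the paper.

        import numpy as np

        rng = np.random.default_rng(6)

        dt, a, x_ss = 0.1, 0.4, 0.8          # placeholder process dynamics (1st-order approach to x_ss)
        T0, b = 350.0, 25.0                  # placeholder temperature map T = T0 + b*x**2

        f = lambda x: x + dt * a * (x_ss - x)            # state transition (composition)
        F = 1.0 - dt * a                                 # its Jacobian (constant here)
        h = lambda x: T0 + b * x**2                      # measurement model (tray temperature)
        H = lambda x: 2.0 * b * x                        # its Jacobian

        Q, R = 1e-4, 0.25                                # process / measurement noise variances

        # Simulate the "true" composition and noisy temperature measurements.
        x_true, xs, Ts = 0.3, [], []
        for _ in range(200):
            x_true = f(x_true) + rng.normal(0, np.sqrt(Q))
            xs.append(x_true)
            Ts.append(h(x_true) + rng.normal(0, np.sqrt(R)))

        # EKF: predict with the model, update with the measured temperature.
        x_hat, P, est = 0.5, 1.0, []
        for T in Ts:
            x_pred = f(x_hat)
            P_pred = F * P * F + Q
            S = H(x_pred) * P_pred * H(x_pred) + R
            K = P_pred * H(x_pred) / S
            x_hat = x_pred + K * (T - h(x_pred))
            P = (1.0 - K * H(x_pred)) * P_pred
            est.append(x_hat)

        print("final true vs estimated composition:", xs[-1], est[-1])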

  20. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
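
    With forecasts expressed as earthquake rates in space-magnitude bins, the core quantity of a likelihood-based consistency test is the joint Poisson log-likelihood of the observed bin counts under the forecast rates. A minimal sketch is given below; the forecast rates and observed counts are made-up numbers, and the full RELM tests add simulation-based significance evaluation not shown here.

        import math

        def poisson_log_likelihood(rates, counts):
            """Joint log-likelihood of observed bin counts under independent Poisson
            rates, the core quantity of a likelihood-based consistency test."""
            ll = 0.0
            for lam, n in zip(rates, counts):
                ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
            return ll

        # Hypothetical forecast rates (events per bin per test period) and observed counts.
        forecast = [0.02, 0.10, 0.50, 1.30, 0.07]
        observed = [0, 0, 1, 2, 0]
        print(poisson_log_likelihood(forecast, observed))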

  1. A calibration approach to glandular tissue composition estimation in digital mammography

    International Nuclear Information System (INIS)

    Kaufhold, J.; Thomas, J.A.; Eberhard, J.W.; Galbo, C.E.; Trotter, D.E. Gonzalez

    2002-01-01

    The healthy breast is almost entirely composed of a mixture of fatty, epithelial, and stromal tissues which can be grouped into two distinctly attenuating tissue types: fatty and glandular. Further, the amount of glandular tissue is linked to breast cancer risk, so an objective quantitative analysis of glandular tissue can aid in risk estimation. Highnam and Brady have measured glandular tissue composition objectively. However, they argue that their work should only be used for 'relative' tissue measurements unless a careful calibration has been performed. In this work, we perform such a 'careful calibration' on a digital mammography system and use it to estimate breast tissue composition of patient breasts. We imaged 0%, 50%, and 100% glandular-equivalent phantoms of varying thicknesses for a number of clinically relevant x-ray techniques on a digital mammography system. From these images, we extracted mean signal and noise levels and computed calibration curves that can be used for quantitative tissue composition estimation. In this way, we calculate the percent glandular composition of a patient breast on a pixelwise basis. This tissue composition estimation method was applied to 23 digital mammograms. We estimated the quantitative impact of different error sources on the estimates of tissue composition. These error sources include compressed breast height estimation error, residual scattered radiation, quantum noise, and beam hardening. Errors in the compressed breast height estimate contribute the most error in tissue composition--on the order of ±7% for a 4 cm compressed breast height. The spatially varying scattered radiation will contribute quantitatively less error overall, but may be significant in regions near the skinline. It is calculated that for a 4 cm compressed breast height, a residual scatter signal error is mitigated by approximately sixfold in the composition estimate. The error in composition due to the quantum noise, which is the limiting
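
    The calibration idea, mapping a pixel's signal onto the curve spanned by the fat-equivalent and glandular-equivalent phantom signals measured at the same compressed height and technique, can be sketched with simple linear interpolation; the phantom signal values below are placeholders, and the study's additional corrections (scatter, beam hardening, the 50% phantom points) are not modelled.

        import numpy as np

        def percent_glandular(pixel_signal, s_fat, s_gland):
            """Linear interpolation between the 0% (all fat) and 100% glandular
            phantom signals measured at the same compressed height and technique.
            Clipped to the physical range [0, 100]."""
            frac = (pixel_signal - s_fat) / (s_gland - s_fat)
            return np.clip(100.0 * frac, 0.0, 100.0)

        # Placeholder calibration signals for one compressed height (log-detector units);
        # glandular tissue attenuates more, so its signal is assumed lower here.
        s_fat, s_gland = 7.2, 6.1

        pixels = np.array([7.2, 6.9, 6.5, 6.1])
        print(percent_glandular(pixels, s_fat, s_gland))   # -> roughly [0, 27, 64, 100] percent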

  2. Estimation of physical properties of laminated composites via the method of inverse vibration problem

    Energy Technology Data Exchange (ETDEWEB)

    Balci, Murat [Dept. of Mechanical Engineering, Bayburt University, Bayburt (Turkey)]; Gundogdu, Omer [Dept. of Mechanical Engineering, Ataturk University, Erzurum (Turkey)]

    2017-01-15

    In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. The laminated composite plate was modelled and simulated in ANSYS to obtain vibration responses for different length-to-thickness ratios. Furthermore, a numerical finite element model of the laminated composite was developed using the Kirchhoff plate theory and programmed in MATLAB for simulations. The inverse vibration problem was then solved by minimizing the difference between these two vibration responses using genetic algorithms, yielding some of the physical properties of the laminated composite. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.

  3. Estimation of physical properties of laminated composites via the method of inverse vibration problem

    International Nuclear Information System (INIS)

    Balci, Murat; Gundogdu, Omer

    2017-01-01

    In this study, estimation of some physical properties of a laminated composite plate was conducted via the inverse vibration problem. The laminated composite plate was modelled and simulated in ANSYS to obtain vibration responses for different length-to-thickness ratios. Furthermore, a numerical finite element model of the laminated composite was developed using the Kirchhoff plate theory and programmed in MATLAB for simulations. The inverse vibration problem was then solved by minimizing the difference between these two vibration responses using genetic algorithms, yielding some of the physical properties of the laminated composite. The estimated parameters are compared with the theoretical results, and a very good correspondence was observed.

  4. Radioisotopic composition of yellowcake: an estimation of stack release rates

    International Nuclear Information System (INIS)

    Momeni, M.H.; Kisieleski, W.E.; Rayno, D.R.; Sabau, C.S.

    1979-12-01

    Uranium concentrate (yellowcake) composites from four mills (Anaconda, Kerr-McGee, Highland, and Uravan) were analyzed for U-238, U-235, U-234, Th-230, Ra-226, and Pb-210. The ratio of specific activities of U-238 to U-234 in the composites suggested that secular radioactive equilibrium exists in the ore. The average activity ratios in the yellowcake were determined to be 2.7 x 10⁻³ (Th-230/U-238), 5 x 10⁻⁴ (Ra-226/U-238) and 2 x 10⁻⁴ (Pb-210/U-238). Based on earlier EPA measurements of the release rates from the stacks, the amount of yellowcake released was determined to be 0.1% of the amount processed.

  5. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...... improves precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today...
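
    A minimal sketch of the antithetic-draw device referred to above: pairing each uniform draw u with 1 - u yields draws in symmetric +/- pairs, so symmetry of the simulated quantity is preserved by construction. Pseudo-random uniforms are used here for brevity; the same pairing applies to quasi-random sequences.

        import numpy as np
        from scipy.stats import norm

        def antithetic_normal_draws(n_pairs, seed=0):
            # Each uniform u is paired with 1 - u, so z and -z appear together.
            rng = np.random.default_rng(seed)
            u = rng.uniform(size=n_pairs)
            return norm.ppf(np.concatenate([u, 1.0 - u]))

        z = antithetic_normal_draws(500)
        print(z.mean())   # zero up to floating-point error, by construction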

  6. Estimation of carbon fibre composites as ITER divertor armour

    Science.gov (United States)

    Pestchanyi, S.; Safronov, V.; Landman, I.

    2004-08-01

    Exposure of the carbon fibre composites (CFC) NB31 and NS31 by multiple plasma pulses has been performed at the plasma guns MK-200UG and QSPA. Numerical simulation for the same CFCs under ITER type I ELM typical heat load has been carried out using the code PEGASUS-3D. Comparative analysis of the numerical and experimental results allowed understanding the erosion mechanism of CFC based on the simulation results. A modification of CFC structure has been proposed in order to decrease the armour erosion rate.

  7. Estimation of carbon fibre composites as ITER divertor armour

    International Nuclear Information System (INIS)

    Pestchanyi, S.; Safronov, V.; Landman, I.

    2004-01-01

    Exposure of the carbon fibre composites (CFC) NB31 and NS31 by multiple plasma pulses has been performed at the plasma guns MK-200UG and QSPA. Numerical simulation for the same CFCs under ITER type I ELM typical heat load has been carried out using the code PEGASUS-3D. Comparative analysis of the numerical and experimental results allowed understanding the erosion mechanism of CFC based on the simulation results. A modification of CFC structure has been proposed in order to decrease the armour erosion rate

  8. METHODS OF THE APPROXIMATE ESTIMATIONS OF FATIGUE DURABILITY OF COMPOSITE AIRFRAME COMPONENT TYPICAL ELEMENTS

    Directory of Open Access Journals (Sweden)

    V. E. Strizhius

    2015-01-01

    Full Text Available Methods for the approximate estimation of the fatigue durability of typical composite airframe components, which can be recommended for use at the outline (preliminary) design stage of an airplane, are developed and presented.

  9. Computational estimation of soybean oil adulteration in Nepalese mustard seed oil based on fatty acid composition

    OpenAIRE

    Shrestha, Kshitij; De Meulenaer, Bruno

    2011-01-01

    The experiment was carried out for the computational estimation of soybean oil adulteration in mustard seed oil using a chemometric technique based on fatty acid composition. Principal component analysis and K-means clustering of fatty acid composition data showed 4 major mustard/rapeseed clusters, two of high erucic and two of low erucic mustard type. Soybean and other possible adulterants made a distinct cluster from them. The methodology for estimation of soybean oil adulteration was deve...

  10. Comparing intake estimations based on food composition data with chemical analysis in Malian women

    NARCIS (Netherlands)

    Koréissi-Dembélé, Yara; Doets, Esmee L.; Fanou-Fogny, Nadia; Hulshof, Paul J.M.; Moretti, Diego; Brouwer, Inge D.

    2017-01-01

    Objective: Food composition databases are essential for estimating nutrient intakes in food consumption surveys. The present study aimed to evaluate the Mali food composition database (TACAM) for assessing intakes of energy and selected nutrients at population level. Design: Weighed food records and

  11. Using a network-based approach and targeted maximum likelihood estimation to evaluate the effect of adding pre-exposure prophylaxis to an ongoing test-and-treat trial.

    Science.gov (United States)

    Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor

    2017-04-01

    Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger

  12. Bioelectrical impedance analysis to estimate body composition in surgical and oncological patients: a systematic review

    NARCIS (Netherlands)

    Haverkort, E. B.; Reijven, P. L. M.; Binnekade, J. M.; de van der Schueren, M. A. E.; Earthman, C. P.; Gouma, D. J.; de Haan, R. J.

    2015-01-01

    Bioelectrical impedance analysis (BIA) is a commonly used method for the evaluation of body composition. However, BIA estimations are subject to uncertainties. The aim of this systematic review was to explore the variability of empirical prediction equations used in BIA estimations and to evaluate

  13. Body composition in elderly people: effect of criterion estimates on predictive equations

    International Nuclear Information System (INIS)

    Baumgartner, R.N.; Heymsfield, S.B.; Lichtman, S.; Wang, J.; Pierson, R.N. Jr.

    1991-01-01

    The purposes of this study were to determine whether there are significant differences between two- and four-compartment model estimates of body composition, whether these differences are associated with aqueous and mineral fractions of the fat-free mass (FFM); and whether the differences are retained in equations for predicting body composition from anthropometry and bioelectric resistance. Body composition was estimated in 98 men and women aged 65-94 y by using a four-compartment model based on hydrodensitometry, ³H₂O dilution, and dual-photon absorptiometry. These estimates were significantly different from those obtained by using Siri's two-compartment model. The differences were associated significantly (P less than 0.0001) with variation in the aqueous fraction of FFM. Equations for predicting body composition from anthropometry and resistance, when calibrated against two-compartment model estimates, retained these systematic errors. Equations predicting body composition in elderly people should be calibrated against estimates from multicompartment models that consider variability in FFM composition.
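
    For reference, Siri's classical two-compartment equation mentioned above converts whole-body density into percent fat; the record's point is that such two-compartment estimates can be biased when the aqueous fraction of fat-free mass varies, as in the elderly. The example value is illustrative.

        def siri_percent_fat(body_density_g_per_ml: float) -> float:
            # Siri's two-compartment equation: percent body fat from whole-body
            # density (g/ml), e.g. measured by hydrodensitometry.
            return (4.95 / body_density_g_per_ml - 4.50) * 100.0

        # Example: a body density of 1.05 g/ml corresponds to roughly 21% fat.
        print(round(siri_percent_fat(1.05), 1))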

  14. Two-Sided Estimates of Thermo-elastic Characteristics of Dispersed Inclusion Composites

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin

    2015-01-01

    Full Text Available Composites dispersion-reinforced with inclusions of high-strength and high-modulus materials are widely used in technology, and nanostructured elements can also play the role of such inclusions. Possible applications of such composites in heat-stressed structures under heavy mechanical and thermal loads depend significantly on a set of thermomechanical characteristics, including the elastic moduli and the coefficient of linear thermal expansion. There are different approaches to constructing mathematical models from which the elastic characteristics of composites can be estimated. The relation between the thermoelastic properties of the matrix and inclusions and the temperature coefficient of linear expansion of the composite has been studied in less detail, and the reliability and possible error of the derived dependences have received insufficient attention. A dual variational formulation of the thermoelasticity problem in a non-uniform solid simulating the properties and structure of a composite with dispersed inclusions makes it possible to define two-sided limits on the possible values of the bulk modulus, shear modulus, and coefficient of linear thermal expansion of such a composite. These limits allow us to estimate the maximum possible error incurred when the half-sum of the limit values is taken as the thermoelastic characteristic of the composite. Using this approach to find the possible errors arising from one or another calculation formula improves the reliability of predicted thermoelastic characteristics for existing and prospective composites.

  15. Exploration of a digital audio processing platform using a compositional system level performance estimation framework

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan

    2009-01-01

    This paper presents the application of a compositional simulation based system-level performance estimation framework on a non-trivial industrial case study. The case study is provided by the Danish company Bang & Olufsen ICEpower a/s and focuses on the exploration of a digital mobile audio...... processing platform. A short overview of the compositional performance estimation framework used is given followed by a presentation of how it is used for performance estimation using an iterative refinement process towards the final implementation. Finally, an evaluation in terms of accuracy and speed...

  16. Can you estimate body composition in dogs from photographs?

    Science.gov (United States)

    Gant, Poppy; Holden, Shelley L; Biourge, Vincent; German, Alexander J

    2016-01-20

    A validated method for assessing the visual characteristics of body condition from photographs (vBCS) would be a useful initial screening tool for client-owned dogs. In this retrospective study, photographs taken before and after weight loss from 155 overweight and obese dogs attending a weight management referral clinic were used in designing and testing the feasibility of vBCS. Observers with a range of experience examined the photographs and estimated body condition indirectly (vBCS) using three different methods. In the first method (vBCSmeasured), the ratio of abdominal width to thoracic width (A:T) was measured, and cut-points used to determine body condition; the second method (vBCSsubjective) involved semi-quantitative examination using visual descriptors of BCS; the third (vBCSadjusted) was a combined approach whereby the A:T ratio was first determined, and the final score modified if necessary after assessing the photographs. When an experienced observer performed vBCS, there were moderate-to-good associations between body fat (measured by dual-energy X-ray absorptiometry) and the three vBCS methods (median Rs: 0.51-0.75; P < 0.05 for all). Compared with ideal weight and obese dogs, errors in assessing body condition were more common for overweight dogs (e.g. BCS 6-7/9). Body condition can therefore be estimated from photographs, but performance varies amongst observers.

  17. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were

  18. Estimation and analysis of the sensitivity of monoenergetic electron radiography of composite materials with fluctuating composition

    International Nuclear Information System (INIS)

    Rudenko, V.N.; Yunda, N.T.

    1978-01-01

    A sensitivity analysis of the electron defectoscopy method for composite materials with fluctuating composition has been carried out. Quantitative evaluations of the testing sensitivity depending on inspection conditions have been obtained, and calculations of the instrumental error are shown. Based on numerical calculations, a comparison of errors has been carried out between high-energy electron and X-ray testing. It is shown that when testing composite materials with a surface density of up to 7-10 g/cm², the advantage of the electron defectoscopy method as compared to the X-ray one is the higher sensitivity and lower instrumental error. The advantage of the electron defectoscopy method over the X-ray one as regards the sensitivity is greater when a light-atom component is predominant in the composition. A monoenergetic electron beam from a betatron with an energy of up to 30 MeV should be used for testing materials with a surface density of up to 15 g/cm².

  19. Simultaneous estimation of diet composition and calibration coefficients with fatty acid signature data

    Science.gov (United States)

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2017-01-01

    Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.

  20. The parent magma of the Nakhla (SNC) meteorite: Reconciliation of composition estimates from magmatic inclusions and element partitioning

    Science.gov (United States)

    Treiman, A. H.

    1993-01-01

    The composition of the parent magma of the Nakhla meteorite was difficult to determine because it is a cumulate rock, enriched in olivine and augite relative to a basaltic magma. A parent magma composition is estimated from electron microprobe area analyses of magmatic inclusions in olivine. This composition is consistent with an independent estimate based on the same inclusions, and with chemical equilibria with the cores of Nakhla's augites. This composition reconciles most of the previous estimates of Nakhla's magma composition, and obviates the need for complex magmatic processes. Inconsistency between this composition and those calculated previously suggests that magma flowed through and crystallized into Nakhla as it cooled.

  1. Extended likelihood inference in reliability

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.

    1978-10-01

    Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist

  2. A Note on Parameter Estimation in the Composite Weibull–Pareto Distribution

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2018-02-01

    Full Text Available Composite models have received much attention in the recent actuarial literature for describing heavy-tailed insurance loss data. One model that performs well for this kind of data is the composite Weibull–Pareto (CWL) distribution. In this note, this distribution is revisited to carry out parameter estimation via the mle and mle2 optimization functions in R. The results are compared with those obtained in a previous paper using the nlm function, in terms of analytical and graphical methods of model selection. In addition, the consistency of the parameter estimation is examined via a simulation study.
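
    The record uses R's mle and mle2 functions; a comparable numerical maximum-likelihood workflow is sketched below in Python for an ordinary (non-composite) Weibull sample, purely to illustrate the fitting step, not the composite Weibull-Pareto density itself. All values are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        def weibull_nll(params, x):
            # Negative log-likelihood of a two-parameter Weibull(shape k, scale lam).
            k, lam = params
            if k <= 0 or lam <= 0:
                return np.inf
            z = x / lam
            return -np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k)

        rng = np.random.default_rng(1)
        sample = rng.weibull(2.0, size=500) * 3.0     # true shape 2, scale 3
        fit = minimize(weibull_nll, x0=[1.0, 1.0], args=(sample,), method="Nelder-Mead")
        print(fit.x)    # should be close to (2.0, 3.0)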

  3. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks

  4. A simulation study of likelihood inference procedures in rayleigh distribution with censored data

    International Nuclear Information System (INIS)

    Baklizi, S. A.; Baker, H. M.

    2001-01-01

    Inference procedures based on the likelihood function are considered for the one-parameter Rayleigh distribution with type 1 and type 2 censored data. Using simulation techniques, the finite sample performances of the maximum likelihood estimator and the large sample likelihood interval estimation procedures based on the Wald, the Rao, and the likelihood ratio statistics are investigated. It appears that the maximum likelihood estimator is unbiased. The approximate variance estimates obtained from the asymptotic normal distribution of the maximum likelihood estimator are accurate under type 2 censored data, while they tend to be smaller than the actual variances for type 1 censored data of small size. It also appears that interval estimation based on the Wald and Rao statistics requires a much larger sample size than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy. (authors). 15 refs., 4 tabs
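
    A minimal sketch of likelihood inference for the one-parameter Rayleigh distribution with right-censored data, as studied above: the closed-form MLE of sigma squared plus a likelihood-ratio interval. The data are illustrative, not from the record.

        import numpy as np
        from scipy.stats import chi2
        from scipy.optimize import brentq

        def rayleigh_loglik(theta, times, observed):
            # theta = sigma^2; Rayleigh log-likelihood with right censoring:
            # observed failures contribute the density, censored times the survival term.
            return np.sum(observed * (np.log(times) - np.log(theta)) - times**2 / (2.0 * theta))

        def fit_rayleigh(times, observed):
            # Closed-form MLE of sigma^2 and a 95% likelihood-ratio interval.
            times, observed = np.asarray(times, float), np.asarray(observed, bool)
            theta_hat = np.sum(times**2) / (2.0 * observed.sum())
            ll_max = rayleigh_loglik(theta_hat, times, observed)
            cut = chi2.ppf(0.95, 1) / 2.0
            g = lambda th: rayleigh_loglik(th, times, observed) - (ll_max - cut)
            return theta_hat, (brentq(g, 1e-9, theta_hat), brentq(g, theta_hat, 1e6 * theta_hat))

        # Illustrative data: failure times (observed=True) and censoring times (False).
        t = np.array([1.2, 0.7, 2.1, 1.5, 0.9, 2.5, 2.5, 2.5])
        obs = np.array([True, True, True, True, True, False, False, False])
        print(fit_rayleigh(t, obs))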

  5. Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling

    Science.gov (United States)

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...

  6. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  7. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  8. Use of tritiated water for estimating body composition in grazing ewes

    International Nuclear Information System (INIS)

    Russel, A.J.F.; Foot, J.Z.; McFarlane, D.M.

    1982-01-01

    Tritiated water was used to estimate total body water, body composition and water turnover of non-pregnant, pregnant, non-lactating and lactating grazing sheep. Body composition was estimated from equilibrated and extrapolated values of tritiated water space. These methods both overestimated the total body water measured directly. Body fat could be predicted satisfactorily from tritiated water space within the physiological states of ewes, i.e. lactating, pregnant, etc., although for lactating ewes the error of prediction is greater. It appears inadvisable at this stage to use equations derived from all classes of ewes to estimate body fat in ewes of any one physiological state. Water turnover varied, with the physiological state being highest for lactating ewes. (author)

  9. A Variational Approach to the Estimate of the Permittivity of a Composite with Dispersed Inclusions

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin

    2015-01-01

    Full Text Available Composites are inhomogeneous (heterogeneous) solid materials consisting of a matrix and inclusions. The matrix in a composite is a binder between the inclusions, whose properties largely determine the applications of the composite. Selecting the characteristics of the matrix and inclusions enables us to meet the requirements for materials used in various fields of technology. Composites are widely used as structural or thermal-protection materials and as functional materials in various electrical devices, including dielectrics. One of the most important characteristics of a composite dielectric is its relative permittivity, which is determined primarily by the dielectric properties of the matrix and inclusions, as well as by the shape and volume concentration of the inclusions. For a composite with dispersed inclusions it is possible to construct adequate mathematical models that reliably predict the dependence of its permittivity on these defining parameters. In this paper, among the various approaches to constructing such models, we emphasize a variational approach that allows us not only to determine this dependence, but also to obtain guaranteed two-sided bounds on the possible values of the permittivity of the composite, which can be used to estimate the maximum error of the calculated values. A representative element of the composite structure with spherical inclusions, modeling dispersed inclusions with similar dimensions in all directions, is considered. For the representative element, we obtained an electrostatic potential distribution that is admissible for the minimized functional. The latter is part of the variational form of a mathematical model describing the dielectric properties of the composite under consideration. From the equality of the values of this functional on the obtained admissible distribution in a representative element of the
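
    For context, the simplest guaranteed two-sided estimates of this kind are the classical Wiener (series/parallel) bounds sketched below; the variational bounds of the record for spherical inclusions are generally tighter. The permittivity values are hypothetical.

        def wiener_bounds(eps_matrix, eps_inclusion, volume_fraction_inclusion):
            # Classical Wiener bounds: any effective relative permittivity of a
            # two-phase composite lies between the harmonic and arithmetic
            # volume-weighted means of the phase permittivities.
            f2 = volume_fraction_inclusion
            f1 = 1.0 - f2
            lower = 1.0 / (f1 / eps_matrix + f2 / eps_inclusion)
            upper = f1 * eps_matrix + f2 * eps_inclusion
            return lower, upper

        # Hypothetical composite: matrix eps = 3, inclusions eps = 12, 30% inclusions.
        print(wiener_bounds(3.0, 12.0, 0.3))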

  10. Statistically-Estimated Tree Composition for the Northeastern United States at Euro-American Settlement.

    Directory of Open Access Journals (Sweden)

    Christopher J Paciorek

    Full Text Available We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio and midwestern (Indiana to Minnesota. Public Land Survey point data in the midwestern region (ca. 0.8-km resolution are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.

  11. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    International Nuclear Information System (INIS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-01-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
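
    A generic sketch of the rejection form of approximate Bayesian computation referred to above, with a toy Poisson example standing in for the atom-maser model and its correlation statistics; every name and number below is illustrative.

        import numpy as np

        def abc_rejection(observed_summary, simulate, prior_sample, distance, tol,
                          n_draws=20000, seed=0):
            # Plain rejection ABC: keep prior draws whose simulated summary statistics
            # fall within `tol` of the observed summary under the chosen distance.
            rng = np.random.default_rng(seed)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sample(rng)
                s = simulate(theta, rng)
                if distance(s, observed_summary) < tol:
                    accepted.append(theta)
            return np.array(accepted)

        # Toy illustration: infer a Poisson rate from the sample mean of 50 counts.
        obs = np.random.default_rng(1).poisson(4.2, 50).mean()
        post = abc_rejection(
            observed_summary=obs,
            simulate=lambda lam, rng: rng.poisson(lam, 50).mean(),
            prior_sample=lambda rng: rng.uniform(0.0, 10.0),
            distance=lambda a, b: abs(a - b),
            tol=0.1,
        )
        print(post.mean(), post.size)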

  12. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
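
    A minimal sketch of a profile-likelihood confidence interval for a simple case (the mean of a normal sample, with the variance profiled out analytically); it illustrates the general recipe of inverting the profile deviance against a chi-squared cutoff, not the IRT-specific implementation of the record. The data are synthetic.

        import numpy as np
        from scipy.stats import chi2
        from scipy.optimize import brentq

        def profile_loglik_mu(mu, x):
            # Normal model with unknown variance: the variance is profiled out
            # analytically, leaving a log-likelihood in the mean only (constants dropped).
            return -0.5 * x.size * np.log(np.mean((x - mu) ** 2))

        def profile_ci_mu(x, level=0.95):
            mu_hat = x.mean()
            l_max = profile_loglik_mu(mu_hat, x)
            cut = chi2.ppf(level, 1) / 2.0
            g = lambda mu: profile_loglik_mu(mu, x) - (l_max - cut)
            span = 10.0 * x.std(ddof=1)
            return brentq(g, mu_hat - span, mu_hat), brentq(g, mu_hat, mu_hat + span)

        x = np.random.default_rng(7).normal(2.0, 1.5, size=40)
        print(profile_ci_mu(x))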

  13. Estimation of different data compositions for early-season crop type classification.

    Science.gov (United States)

    Hao, Pengyu; Wu, Mingquan; Niu, Zheng; Wang, Li; Zhan, Yulin

    2018-01-01

    Timely and accurate crop type distribution maps are important inputs for crop yield estimation and production forecasting, as multi-temporal images can capture phenological differences among crops. Therefore, time series remote sensing data are essential for crop type mapping, and image compositing has commonly been used to improve the quality of the image time series. However, the optimal composition period is unclear, as long composition periods (such as compositions lasting half a year) are less informative and short composition periods lead to information redundancy and missing pixels. In this study, we initially acquired daily 30 m Normalized Difference Vegetation Index (NDVI) time series by fusing MODIS, Landsat, Gaofen and Huanjing (HJ) NDVI, and then composited the NDVI time series using four strategies (daily, 8-day, 16-day, and 32-day). We used Random Forest to identify crop types and evaluated the classification performance of the NDVI time series generated from the four composition strategies in two study regions in Xinjiang, China. Results indicated that crop classification performance improved as crop separabilities and classification accuracies increased and classification uncertainties dropped during the green-up stage of the crops. When using daily NDVI time series, overall accuracies saturated at day 113 and day 116 in Bole and Luntai, and the saturated overall accuracies (OAs) were 86.13% and 91.89%, respectively. Cotton could be identified 40∼60 days and 35∼45 days earlier than the harvest in Bole and Luntai when using daily, 8-day and 16-day composition NDVI time series, since both producer's accuracies (PAs) and user's accuracies (UAs) were higher than 85%. Among the four compositions, the daily NDVI time series generated the highest classification accuracies. Although the 8-day, 16-day and 32-day compositions had similar saturated overall accuracies (around 85% in Bole and 83% in Luntai), the 8-day and 16-day compositions achieved these
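
    An illustrative sketch of the overall workflow (temporal compositing of daily NDVI followed by Random Forest classification), assuming maximum-value compositing and entirely synthetic data; it is not the fused MODIS/Landsat/Gaofen/HJ pipeline of the record.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def composite_ndvi(daily_ndvi, period=8):
            # Maximum-value compositing of a (n_samples, n_days) daily NDVI array
            # into (n_samples, n_days // period) features; one common compositing choice.
            n_days = daily_ndvi.shape[1] - daily_ndvi.shape[1] % period
            blocks = daily_ndvi[:, :n_days].reshape(daily_ndvi.shape[0], -1, period)
            return blocks.max(axis=2)

        # Hypothetical toy data: 200 pixels, 120 days of NDVI, 3 crop classes.
        rng = np.random.default_rng(0)
        ndvi = rng.uniform(0.1, 0.9, size=(200, 120))
        labels = rng.integers(0, 3, size=200)

        features = composite_ndvi(ndvi, period=8)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
        print(clf.score(features, labels))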

  14. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  15. Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.

    Science.gov (United States)

    Kobert, K; Stamatakis, A; Flouri, T

    2017-03-01

    The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
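
    A sketch of the simplest form of the idea above: collapsing identical alignment site patterns so per-site likelihood work is done once per unique pattern and weighted by its multiplicity. The record's method generalizes this to repeats within subtrees; the toy alignment below is illustrative.

        import numpy as np

        def compress_site_patterns(alignment_columns):
            # Collapse identical alignment columns; weights[i] counts how often
            # unique pattern i occurs in the original alignment.
            patterns, weights = np.unique(alignment_columns, axis=1, return_counts=True)
            return patterns, weights

        def total_log_likelihood(per_pattern_loglik, weights):
            # Sum per-pattern log-likelihoods weighted by their multiplicities.
            return float(np.dot(per_pattern_loglik, weights))

        # Toy alignment: 4 taxa (rows) by 10 sites (columns), coded 0-3 (A,C,G,T).
        aln = np.array([[0, 0, 1, 0, 2, 1, 0, 0, 2, 1],
                        [0, 0, 1, 0, 3, 1, 0, 0, 3, 1],
                        [1, 1, 1, 1, 2, 1, 1, 1, 2, 1],
                        [0, 0, 2, 0, 2, 2, 0, 0, 2, 2]])
        patterns, weights = compress_site_patterns(aln)
        print(patterns.shape[1], weights)   # number of unique site patterns and counts
        print(total_log_likelihood(np.full(patterns.shape[1], -6.0), weights))  # placeholder values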

  16. Composite Linear Models | Division of Cancer Prevention

    Science.gov (United States)

    By Stuart G. Baker The composite linear models software is a matrix approach to compute maximum likelihood estimates and asymptotic standard errors for models for incomplete multinomial data. It implements the method described in Baker SG. Composite linear models for incomplete multinomial data. Statistics in Medicine 1994;13:609-622. The software includes a library of thirty

  17. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are......, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages...... of Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  18. Comparing intake estimations based on food composition data with chemical analysis in Malian women.

    Science.gov (United States)

    Koréissi-Dembélé, Yara; Doets, Esmee L; Fanou-Fogny, Nadia; Hulshof, Paul Jm; Moretti, Diego; Brouwer, Inge D

    2017-06-01

    Food composition databases are essential for estimating nutrient intakes in food consumption surveys. The present study aimed to evaluate the Mali food composition database (TACAM) for assessing intakes of energy and selected nutrients at population level. Weighed food records and duplicate portions of all foods consumed during one day were collected. Intakes of energy, protein, fat, available carbohydrates, dietary fibre, Ca, Fe, Zn and vitamin A were assessed by: (i) estimating the nutrient intake from weighed food records based on an adjusted TACAM (a-TACAM); and (ii) chemical analysis of the duplicate portions. Agreement between the two methods was determined using the Wilcoxon signed-rank test and Bland-Altman plots. Bamako, Mali. Apparently healthy non-pregnant, non-lactating women (n 36) aged 15-36 years. Correlation coefficients between estimated and analysed values ranged from 0·38 to 0·61. At population level, mean estimated and analysed nutrient intakes differed significantly for carbohydrates (203·0 v. 243·5 g/d), Fe (9·9 v. 22·8 mg/d) and vitamin A (356 v. 246 µg retinol activity equivalents). At individual level, all estimated and analysed nutrient intakes differed significantly; the differences tended to increase with higher intakes. The a-TACAM is sufficiently acceptable for measuring average intakes of macronutrients, Ca and Zn at population level in low-intake populations, but not for carbohydrate, vitamin A and Fe intakes, and nutrient densities.
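
    A minimal sketch of the agreement analysis described above (Bland-Altman bias and limits of agreement, plus the Wilcoxon signed-rank test), using hypothetical paired iron intakes rather than the study data.

        import numpy as np
        from scipy.stats import wilcoxon

        def bland_altman(estimated, analysed):
            # Mean difference (bias) and 95% limits of agreement between two methods.
            d = np.asarray(estimated, float) - np.asarray(analysed, float)
            bias, sd = d.mean(), d.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Hypothetical paired iron intakes (mg/d): database estimate vs chemical analysis.
        estimated = np.array([9.5, 11.2, 8.7, 10.4, 9.9, 12.1])
        analysed = np.array([20.3, 25.1, 18.9, 23.4, 21.7, 26.0])
        print(bland_altman(estimated, analysed))
        print(wilcoxon(estimated, analysed))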

  19. Soft sensor based composition estimation and controller design for an ideal reactive distillation column.

    Science.gov (United States)

    Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K

    2011-01-01

    In this research work, the authors have presented the design and implementation of a recurrent neural network (RNN) based inferential state estimation scheme for an ideal reactive distillation column. Decentralized PI controllers are designed and implemented. The reactive distillation process is controlled by controlling the composition, which is estimated from the available temperature measurements using a type of RNN called a Time Delayed Neural Network (TDNN). The performance of the RNN-based state estimation scheme under both open loop and closed loop has been compared with a standard Extended Kalman Filter (EKF) and a Feed-forward Neural Network (FNN). Online training/correction was carried out for both the RNN and FNN schemes every ten minutes, whenever new un-trained measurements became available from a conventional composition analyzer. The RNN shows better state estimation capability than the other state estimation schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Seasonal species interactions minimize the impact of species turnover on the likelihood of community persistence.

    Science.gov (United States)

    Saavedra, Serguei; Rohr, Rudolf P; Fortuna, Miguel A; Selva, Nuria; Bascompte, Jordi

    2016-04-01

    Many of the observed species interactions embedded in ecological communities are not permanent, but are characterized by temporal changes that are observed along with abiotic and biotic variations. While work has been done describing and quantifying these changes, little is known about their consequences for species coexistence. Here, we investigate the extent to which changes of species composition impact the likelihood of persistence of the predator-prey community in the highly seasonal Białowieża Primeval Forest (northeast Poland), and the extent to which seasonal changes of species interactions (predator diet) modulate the expected impact. This likelihood is estimated by extending recent developments on the study of structural stability in ecological communities. We find that the observed species turnover strongly varies the likelihood of community persistence between summer and winter. Importantly, we demonstrate that the observed seasonal interaction changes minimize the variation in the likelihood of persistence associated with species turnover across the year. We find that these community dynamics can be explained as the coupling of individual species to their environment by minimizing both the variation in persistence conditions and the interaction changes between seasons. Our results provide a homeostatic explanation for seasonal species interactions and suggest that monitoring the association of interaction changes with the level of variation in community dynamics can provide a good indicator of the response of species to environmental pressures.

  1. Estimate of thermoelastic heat production from superconducting composites in pulsed poloidal coil systems

    International Nuclear Information System (INIS)

    Ballou, J.K.; Gray, W.H.

    1976-01-01

    In the design of the cryogenic system and superconducting magnets for the poloidal field system in a tokamak, it is important to have an accurate estimate of the heat produced in superconducting magnets as a result of rapidly changing magnetic fields. A computer code, PLASS (Pulsed Losses in Axisymmetric Superconducting Solenoids), was written to estimate the contributions to the heat production from superconductor hysteresis losses, superconductor coupling losses, stabilizing material eddy current losses, and structural material eddy current losses. Recently, it has been shown that thermoelastic dissipation in superconducting composites can contribute as much to heat production as the other loss mechanisms mentioned above. A modification of PLASS which takes into consideration thermoelastic dissipation in superconducting composites is discussed. A comparison between superconductor thermoelastic dissipation and the other superconductor loss mechanisms is presented in terms of the poloidal coil system of the ORNL Experimental Power Reactor design

  2. Thermophysical properties estimation of paraffin/graphite composite phase change material using an inverse method

    International Nuclear Information System (INIS)

    Lachheb, Mohamed; Karkri, Mustapha; Albouchi, Fethi; Mzali, Foued; Nasrallah, Sassi Ben

    2014-01-01

    Highlights: • Preparation of paraffin/graphite composites by uni-axial compression technique. • Measurement of thermophysical properties of paraffin/graphite using the periodic method. • Measurement of the experimental densities of paraffin/graphite composites. • Prediction of the effective thermal conductivity using analytical models. - Abstract: In this paper, two types of graphite were combined with paraffin in an attempt to improve thermal conductivity of paraffin phase change material (PCM): Synthetic graphite (Timrex SFG75) and graphite waste obtained from damaged Tubular graphite Heat Exchangers. These paraffin/graphite phase change material (PCM) composites are prepared by the cold uniaxial compression technique and the thermophysical properties were estimated using a periodic temperature method and an inverse technique. Results showed that the thermal conductivity and thermal diffusivity are greatly influenced by the graphite addition

  3. Ego involvement increases doping likelihood.

    Science.gov (United States)

    Ring, Christopher; Kavussanu, Maria

    2018-08-01

    Achievement goal theory provides a framework to help understand how individuals behave in achievement contexts, such as sport. Evidence concerning the role of motivation in the decision to use banned performance enhancing substances (i.e., doping) is equivocal on this issue. The extant literature shows that dispositional goal orientation has been weakly and inconsistently associated with doping intention and use. It is possible that goal involvement, which describes the situational motivational state, is a stronger determinant of doping intention. Accordingly, the current study used an experimental design to examine the effects of goal involvement, manipulated using direct instructions and reflective writing, on doping likelihood in hypothetical situations in college athletes. The ego-involving goal increased doping likelihood compared to no goal and a task-involving goal. The present findings provide the first evidence that ego involvement can sway the decision to use doping to improve athletic performance.

  4. Estimating the Pollution Risk of Cadmium in Soil Using a Composite Soil Environmental Quality Standard

    Science.gov (United States)

    Huang, Biao; Zhao, Yongcun

    2014-01-01

    Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km2 area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
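
    An illustrative sketch of turning simulated realizations into a standard-exceeding probability map: the probability per grid cell is simply the fraction of realizations above the location-specific threshold. The realizations and thresholds below are synthetic, not the Wuhan data.

        import numpy as np

        def exceedance_probability(realizations, thresholds):
            # Fraction of simulated realizations above the cell-specific standard.
            realizations = np.asarray(realizations, float)   # shape (n_real, n_cells)
            thresholds = np.asarray(thresholds, float)       # shape (n_cells,)
            return (realizations > thresholds).mean(axis=0)

        # Hypothetical output of 500 simulations over 4 grid cells, with thresholds
        # that differ by cell because soil pH and land use differ.
        rng = np.random.default_rng(3)
        sims = rng.lognormal(mean=-1.2, sigma=0.4, size=(500, 4))
        limits = np.array([0.30, 0.30, 0.60, 0.25])
        print(exceedance_probability(sims, limits).round(2))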

  5. LDR: A Package for Likelihood-Based Sufficient Dimension Reduction

    Directory of Open Access Journals (Sweden)

    R. Dennis Cook

    2011-03-01

    Full Text Available We introduce a new MATLAB software package that implements several recently proposed likelihood-based methods for sufficient dimension reduction. Current capabilities include estimation of reduced subspaces with a fixed dimension d, as well as estimation of d by use of likelihood-ratio testing, permutation testing and information criteria. The methods are suitable for preprocessing data for both regression and classification. Implementations of related estimators are also available. Although the software is more oriented to command-line operation, a graphical user interface is also provided for prototype computations.

  6. Estimating methane emissions from landfills based on rainfall, ambient temperature, and waste composition: The CLEEN model.

    Science.gov (United States)

    Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria

    2015-12-01

    Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates, but still requires inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%), average rainfall rates of 2, 6, and 12 mm/day, and temperatures of 20, 30, and 37°C, according to a statistical experimental design. The refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R(2)=0.75) was developed to predict first-order methane generation rate constant values k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. For 4 of the 6 cases, CLEEN model estimates were the closest to actual. Copyright © 2015 Elsevier Ltd. All rights reserved.
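
    A minimal sketch of the first-order decay structure on which such models are based. The decay constant k below is a placeholder, not the CLEEN regression value, and the ultimate yield L0 and waste stream are hypothetical.

        import numpy as np

        def first_order_methane(waste_by_year, L0, k, horizon):
            # First-order decay model: each year's waste mass M_i (Mg) generates
            # methane at rate k * L0 * M_i * exp(-k * age) after placement.
            # L0 is the ultimate methane yield (m^3 CH4 per Mg waste), k in 1/yr.
            years = np.arange(horizon)
            q = np.zeros(horizon)
            for placed_year, mass in enumerate(waste_by_year):
                age = years - placed_year
                active = age >= 0
                q[active] += k * L0 * mass * np.exp(-k * age[active])
            return q

        # Hypothetical landfill: 50,000 Mg/yr of waste for 10 years, L0 = 100 m^3/Mg,
        # and a placeholder decay constant k = 0.05 1/yr.
        waste = [50_000] * 10
        print(first_order_methane(waste, L0=100.0, k=0.05, horizon=20).round(0))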

  7. ARK: Aggregation of Reads by K-Means for Estimation of Bacterial Community Composition.

    Science.gov (United States)

    Koslicki, David; Chatterjee, Saikat; Shahrivar, Damon; Walker, Alan W; Francis, Suzanna C; Fraser, Louise J; Vehkaperä, Mikko; Lan, Yueheng; Corander, Jukka

    2015-01-01

    Estimation of bacterial community composition from high-throughput sequenced 16S rRNA gene amplicons is a key task in microbial ecology. Since the sequence data from each sample typically consist of a large number of reads and are adversely impacted by different levels of biological and technical noise, accurate analysis of such large datasets is challenging. There has been a recent surge of interest in using compressed sensing inspired and convex-optimization based methods to solve the estimation problem for bacterial community composition. These methods typically rely on summarizing the sequence data by frequencies of low-order k-mers and matching this information statistically with a taxonomically structured database. Here we show that the accuracy of the resulting community composition estimates can be substantially improved by aggregating the reads from a sample with an unsupervised machine learning approach prior to the estimation phase. The aggregation of reads is a pre-processing approach where we use a standard K-means clustering algorithm that partitions a large set of reads into subsets with reasonable computational cost to provide several vectors of first order statistics instead of only single statistical summarization in terms of k-mer frequencies. The output of the clustering is then processed further to obtain the final estimate for each sample. The resulting method is called Aggregation of Reads by K-means (ARK), and it is based on a statistical argument via mixture density formulation. ARK is found to improve the fidelity and robustness of several recently introduced methods, with only a modest increase in computational complexity. An open source, platform-independent implementation of the method in the Julia programming language is freely available at https://github.com/dkoslicki/ARK. A Matlab implementation is available at http://www.ee.kth.se/ctsoftware.
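
    A rough sketch of the ARK-style pre-processing step described above: per-read k-mer frequency vectors are clustered with K-means, and each cluster is summarized by its mean frequency vector and weight. The reads are toy examples, and the downstream composition estimation against a taxonomic database is not shown.

        import numpy as np
        from itertools import product
        from sklearn.cluster import KMeans

        def kmer_frequencies(reads, k=3):
            # k-mer frequency vector (order-k statistics) for each read.
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            index = {kmer: i for i, kmer in enumerate(kmers)}
            freqs = np.zeros((len(reads), len(kmers)))
            for r, read in enumerate(reads):
                for i in range(len(read) - k + 1):
                    j = index.get(read[i:i + k])
                    if j is not None:
                        freqs[r, j] += 1.0
                total = freqs[r].sum()
                if total > 0:
                    freqs[r] /= total
            return freqs

        def aggregate_reads(reads, n_clusters=2, k=3, seed=0):
            # ARK-style pre-processing: cluster per-read k-mer frequency vectors and
            # return one mean frequency vector per cluster plus cluster weights.
            freqs = kmer_frequencies(reads, k)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(freqs)
            centers = np.vstack([freqs[labels == c].mean(axis=0) for c in range(n_clusters)])
            weights = np.bincount(labels, minlength=n_clusters) / len(reads)
            return centers, weights

        reads = ["ACGTACGTAC", "ACGTTTACGG", "GGGGCCCCAA", "TTTTACGTAA", "CCCGGGAAAT", "ACGTACGGGG"]
        centers, weights = aggregate_reads(reads, n_clusters=2)
        print(centers.shape, weights)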

  8. Estimation of mean and median pO2 values for a composite EPR spectrum.

    Science.gov (United States)

    Ahmad, Rizwan; Vikram, Deepti S; Potter, Lee C; Kuppusamy, Periannan

    2008-06-01

    Electron paramagnetic resonance (EPR)-based oximetry is capable of quantifying oxygen content in samples. However, for a heterogeneous environment with multiple pO2 values, peak-to-peak linewidth of the composite EPR lineshape does not provide a reliable estimate of the overall pO2 in the sample. The estimate, depending on the heterogeneity, can be severely biased towards narrow components. To address this issue, we suggest a postprocessing method to recover the linewidth histogram which can be used in estimating meaningful parameters, such as the mean and median pO2 values. This information, although not as comprehensive as obtained by EPR spectral-spatial imaging, goes beyond what can be generally achieved with conventional EPR spectroscopy. Substantially shorter acquisition times, in comparison to EPR imaging, may prompt its use in clinically relevant models. For validation, simulation and EPR experiment data are presented.

  9. Estimation of luminous efficacy of daylight and illuminance for composite climate

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, Jamil M.; Tiwari, G.N. [Center for Energy Studies, Indian Institute of Technology, Hauz Khas, New Delhi-16 (India)

    2010-07-01

    Daylighting is one of the basic components of passive solar building design, and its estimation is essential. In India, as in many regions of the world, few measured illuminance data are available. The Indian climate is generally clear, with overcast conditions prevailing through the months of July to September, which provides good potential for daylighting in buildings. Therefore, an analytical model that would encompass the weather conditions of New Delhi was selected. Hourly exterior horizontal and sloped-surface daylight availability was estimated for New Delhi using daylight modeling techniques based on solar radiation data. A model to estimate interior illuminance was investigated and validated using experimental hourly inside illuminance data from an existing skylight-integrated vault-roof mud house in the composite climate of New Delhi. The interior illuminance model was found to be in good agreement with the experimental values of interior illuminance.

  10. Subtracting and Fitting Histograms using Profile Likelihood

    CERN Document Server

    D'Almeida, F M L

    2008-01-01

    It is known that many interesting signals expected at the LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initially low luminosity. This work presents a method of subtracting histograms based on the profile likelihood function, for the case in which the background has previously been estimated from Monte Carlo events and the statistics are low. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% confidence level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show good performance and avoid the problem of negative values when subtracting histograms.
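
    A minimal sketch of a per-bin profile likelihood of the kind discussed above, for the common setup where the background in a bin is estimated from Monte Carlo with limited statistics (MC count m, MC-to-data scale factor tau). The Poisson formulation, grid search, and the numbers in the example are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def neg_log_likelihood(s, b, n, m, tau):
    # n ~ Poisson(s + b) in data, m ~ Poisson(tau * b) in the background Monte Carlo
    return -(poisson.logpmf(n, s + b) + poisson.logpmf(m, tau * b))

def profile_nll(s, n, m, tau):
    # Profile out the background nuisance parameter b for a fixed signal s
    upper = max(n, m / tau) + 10.0 * np.sqrt(m + 1.0) / tau
    res = minimize_scalar(lambda b: neg_log_likelihood(s, b, n, m, tau),
                          bounds=(1e-9, upper), method="bounded")
    return res.fun

def signal_estimate_and_interval(n, m, tau, ds=0.05):
    s_grid = np.arange(0.0, n + 10.0 * np.sqrt(n + 1.0), ds)
    nll = np.array([profile_nll(s, n, m, tau) for s in s_grid])
    s_hat = s_grid[np.argmin(nll)]
    inside = s_grid[nll - nll.min() <= 0.5]   # ~68.3% interval from Delta(NLL) <= 1/2
    return s_hat, (inside.min(), inside.max())

# Illustrative numbers: 25 data counts, 40 MC background counts, MC scale factor 2
print(signal_estimate_and_interval(n=25, m=40, tau=2.0))
```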

  11. Manipulation of Muscle Creatine and Glycogen Changes Dual X-ray Absorptiometry Estimates of Body Composition.

    Science.gov (United States)

    Bone, Julia L; Ross, Megan L; Tomcik, Kristyen A; Jeacocke, Nikki A; Hopkins, Will G; Burke, Louise M

    2017-05-01

    Standardizing a dual x-ray absorptiometry (DXA) protocol is thought to provide a reliable measurement of body composition. We investigated the effects of manipulating muscle glycogen and creatine content independently and additively on DXA estimates of lean mass. Eighteen well-trained male cyclists undertook a parallel group application of creatine loading (n = 9) (20 g·d⁻¹ for 5 d of loading; 3 g·d⁻¹ for maintenance) or placebo (n = 9) with crossover application of glycogen loading (12 vs. 6 g·kg⁻¹ BM per day for 48 h) as part of a larger study involving a glycogen-depleting exercise protocol. Body composition, total body water, muscle glycogen and creatine content were assessed via DXA, bioelectrical impedance spectroscopy and standard biopsy techniques. Changes in the mean were assessed using the following effect-size scale: >0.2 small, >0.6 moderate, >1.2 large, and compared with the threshold for the smallest worthwhile effect of the treatment. Glycogen loading, both with and without creatine loading, resulted in substantial increases in estimates of lean body mass (mean ± SD; 3.0% ± 0.7% and 2.0% ± 0.9%) and leg lean mass (3.1% ± 1.8% and 2.6% ± 1.0%), respectively. A substantial decrease in leg lean mass was observed after the glycogen-depleting condition (-1.4% ± 1.6%). Total body water showed substantial increases after glycogen loading (2.3% ± 2.3%), creatine loading (1.4% ± 1.9%) and the combined treatment (2.3% ± 1.1%). Changes in muscle metabolites and water content alter DXA estimates of lean mass during periods in which minimal change in muscle protein mass is likely. This information needs to be considered in interpreting the results of DXA-derived estimates of body composition in athletes.

  12. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  13. Estimation of the annual production and composition of C&D Debris in Galicia (Spain).

    Science.gov (United States)

    Martínez Lage, Isabel; Martínez Abella, Fernando; Herrero, Cristina Vázquez; Ordóñez, Juan Luis Pérez

    2010-04-01

    One of the key aspects that must be taken into consideration within the framework of Sustainable Construction is the management of Construction and Demolition (C&D) Debris. As for other types of waste, specific handling procedures are required to manage C&D Debris; these include reduction, reuse, recycling, and, if all other possibilities fail, recovery or disposal. For public planning strategies aimed at the management of C&D Debris to be effective, it is first necessary to have specific knowledge of the type of waste materials generated in a particular region. After verifying that the methods available to determine the production and composition of C&D Debris are limited, this paper presents a procedure to ascertain the production and composition of C&D Debris in any region. The procedure utilizes data on the surface areas of newly constructed buildings, renovations and demolitions, which are estimated from available data for recent years, as well as information on the quantity of debris generated per surface area in any type of construction site, which is obtained from recently executed constructions or from the ground plans of older buildings. The method proposed here has been applied to Galicia, one of Spain's autonomous communities, for which the quantity and composition of C&D Debris have been estimated for the horizon year 2011. Copyright 2009 Elsevier Ltd. All rights reserved.
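
    The estimation logic described above (surface area per activity type multiplied by a debris-generation rate per unit area, combined with per-activity composition breakdowns) reduces to a simple weighted sum. The sketch below illustrates it; every area, rate, and fraction is a made-up placeholder, not a value reported for Galicia.

```python
# Each activity type carries an annual built surface area, a debris-generation rate,
# and a material breakdown; totals follow by summing mass * fraction over activities.
# All numbers below are placeholders for illustration only.
activities = {
    "new_construction": {"area_m2": 1_200_000, "rate_kg_per_m2": 120,
                         "composition": {"concrete": 0.55, "ceramics": 0.30, "other": 0.15}},
    "renovation":       {"area_m2": 800_000, "rate_kg_per_m2": 60,
                         "composition": {"concrete": 0.35, "ceramics": 0.40, "other": 0.25}},
    "demolition":       {"area_m2": 300_000, "rate_kg_per_m2": 1_000,
                         "composition": {"concrete": 0.70, "ceramics": 0.20, "other": 0.10}},
}

totals = {}
for data in activities.values():
    mass = data["area_m2"] * data["rate_kg_per_m2"]   # total debris mass for this activity (kg)
    for material, fraction in data["composition"].items():
        totals[material] = totals.get(material, 0.0) + mass * fraction

for material, kg in totals.items():
    print(f"{material}: {kg / 1e6:.1f} kt")
```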

  14. Theoretical Estimation of Thermal Effects in Drilling of Woven Carbon Fiber Composite

    Directory of Open Access Journals (Sweden)

    José Díaz-Álvarez

    2014-06-01

    Full Text Available Carbon Fiber Reinforced Polymer (CFRP) composites are extensively used in structural applications due to their attractive properties. Although the components are usually made near net shape, machining processes are needed to achieve dimensional tolerance and assembly requirements. Drilling is a common operation required for further mechanical joining of the components. CFRPs are vulnerable to processing-induced damage, mainly delamination, fiber pull-out, and thermal degradation, with drilling-induced defects being one of the main causes of component rejection during manufacturing. Despite the importance of analyzing the thermal phenomena involved in the machining of composites, only a few authors have focused their attention on this problem, most of them using an experimental approach. The temperature of the workpiece can affect the surface quality of the component, and its measurement during processing is difficult. Estimation of the amount of heat generated during drilling is important; however, numerical modeling of drilling processes involves a high computational cost. This paper presents a combined approach to thermal analysis of composite drilling, using both an analytical estimation of the heat generated during drilling and numerical modeling of heat propagation. Promising results for indirect detection of the risk of thermal damage, through the measurement of thrust force and cutting torque, are obtained.

  15. Estimation of carcass composition using rib dissection of calf-fed Holstein steers supplemented zilpaterol hydrochloride.

    Science.gov (United States)

    McEvers, T J; May, N D; Reed, J A; Walter, L J; Hutcheson, J P; Lawrence, T E

    2018-04-14

    A serial harvest was conducted every 28 d from 254 to 534 d on feed (DOF) to quantify changes in growth and composition of calf-fed Holstein steers (n = 115, initial body weight (BW) = 449.2 ± 19.9 kg). One-half were supplemented with the β-2 adrenergic agonist zilpaterol hydrochloride (ZH; 8.33 mg/kg 100% dry matter (DM) basis) during the final 20 d followed by a 3-d withdrawal prior to harvest; the remainder was fed a non-ZH control (CON) ration. Five steers were randomly selected and harvested after 226 DOF which served as a reference point for modeling purposes. Fabricated carcass soft tissue was ground, mixed, and subsampled for proximate analysis. Moreover, following the traditional method of rib dissection which includes the 9th, 10th, and 11th rib contained within the IMPS 103 primal, the relationship of carcass chemical composition to 9-10-11 rib composition was evaluated. Carcasses in this investigation had more (P carcasses and rib dissections. Using regression procedures, models were constructed to describe the relationship of rib dissection (RD) composition including separable lean (RDSL), separable fat (RDSF), separable bone (RDSB), ether extract (RDEE), protein (RDP), moisture (RDM), and ash (RDA) with carcass composition. Carcass lean (CL), carcass fat (CF), and carcass bone (CB) were correlated (P carcass, carcass ether extract (CEE), carcass protein (CP), carcass moisture (CM), and carcass ash (CA) were correlated (P ≤ 0.01) with simple r values of 0.75, 0.31, 0.66, and 0.37, respectively. Equations to predict carcass fatness from rib dissection variables and ZH supplementation status were only able to account for 50 and 56%, of the variability of CF and CEE, respectively. Overall, the relationships quantified and equations developed in this investigation do not support use of 9/10/11 rib dissection for estimation of carcass composition of calf-fed Holstein steers.

  16. Estimation of urinary stone composition by automated processing of CT images.

    Science.gov (United States)

    Chevreau, Grégoire; Troccaz, Jocelyne; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre

    2009-10-01

    The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. Standard, low-dose and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of the voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid 65%, struvite 19%, cystine 78%, carbapatite 33.5%, calcium oxalate dihydrate 57%, calcium oxalate monohydrate 66.5%, and brushite 75%. Low-dose acquisition did not lower the performance (P < 0.05). This entirely automated approach eliminates manual intervention on the images by the radiologist while providing identical performance, including for low-dose protocols.

  17. Estimation of grey seal (Halichoerus grypus) diet composition in the Baltic Sea

    Directory of Open Access Journals (Sweden)

    Karl Lundström

    2007-01-01

    Full Text Available We examined the digestive tract contents from 145 grey seals (Halichoerus grypus) collected between 2001 and 2004 in the Baltic Sea. We compensated for biases introduced by erosion of otoliths, both by using additional hard-part structures other than otoliths and by using species-specific size and numerical correction factors. In the absence of numerical correction factors based on feeding experiments for some species, we used correction factors based on a relationship between otolith recovery rate and otolith width. A total of 24 prey taxa were identified but only a few species contributed substantially to the diet. The estimated diet composition was, independently of the prey number estimation method and diet composition estimation model used, dominated by herring (Clupea harengus), both by numbers and biomass. In addition to herring, common whitefish (Coregonus lavaretus) and sprat (Sprattus sprattus) were important prey, but cyprinids (Cyprinidae), eelpout (Zoarces viviparus), flounder (Platichtys flesus) and salmon (Salmo salar) also contributed significantly. Our results indicated dietary differences between grey seals of different age as well as between seals from the northern (Gulf of Bothnia) and the southern (Baltic Proper) Baltic Sea.

  18. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
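
    The idea of retaining parameter uncertainty can be illustrated with a minimal sketch: posterior draws of the model parameters are propagated through LR = p(E|H1)/p(E|H2), and the resulting distribution is summarised by its median and a credible interval. The normal likelihoods, posterior draws, and numbers below are illustrative assumptions, not the paper's forensic height model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

e = 181.0   # observed evidence, e.g. an estimated body height in cm (illustrative)
# Posterior draws of the mean under each hypothesis (assumed available from earlier inference)
mu_h1 = rng.normal(180.0, 1.5, size=10_000)
mu_h2 = rng.normal(172.0, 1.5, size=10_000)
sigma = 4.0  # assumed known measurement spread

# One likelihood-ratio value per posterior draw, i.e. a posterior distribution for the LR
lr_draws = norm.pdf(e, mu_h1, sigma) / norm.pdf(e, mu_h2, sigma)

median_lr = np.median(lr_draws)
lo, hi = np.percentile(lr_draws, [2.5, 97.5])
print(f"median LR = {median_lr:.2f}, 95% credible interval = ({lo:.2f}, {hi:.2f})")
```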

  19. Maximum likelihood of phylogenetic networks.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

  20. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

    One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and can be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The analysis showed that the best likelihood model was the one which included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and from refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this

  1. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    Zadora, G.; Neocleous, T.

    2009-01-01

    One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and can be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The analysis showed that the best likelihood model was the one which included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and from refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
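
    The two-level likelihood ratio idea in the two records above can be illustrated with a deliberately simplified, single-feature sketch: a kernel density over training object means models the between-object level, and a normal density models the within-object level. The feature, bandwidth, and toy data are illustrative assumptions, not the paper's multivariate SEM-EDX and dRI model.

```python
import numpy as np
from scipy.stats import norm

def category_density(y, object_means, sigma_within, bandwidth):
    # p(y | category): Gaussian kernel density over training object means,
    # convolved with the within-object normal spread
    spread = np.sqrt(sigma_within**2 + bandwidth**2)
    return np.mean(norm.pdf(y, loc=object_means, scale=spread))

def likelihood_ratio(y, means_h1, means_h2, sigma_within, bandwidth):
    return (category_density(y, means_h1, sigma_within, bandwidth) /
            category_density(y, means_h2, sigma_within, bandwidth))

# Toy example: a dRI-like feature for "window" vs "container" training objects
rng = np.random.default_rng(1)
window_means = rng.normal(-4.0, 0.3, size=50)
container_means = rng.normal(-3.2, 0.4, size=50)
print(likelihood_ratio(-3.9, window_means, container_means,
                       sigma_within=0.1, bandwidth=0.15))
```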

  2. Environmental impact estimation of municipal solidwaste treatment based on their composition and properties

    Directory of Open Access Journals (Sweden)

    Il'inykh Galina Viktorovna

    2014-02-01

    Full Text Available Municipal solid waste (MSW) is a significant environmental and sanitary problem for urban areas. Different, often alternative, measures are considered in order to reduce the environmental impact of the MSW management system, so an adequate technique for the comparative assessment of their environmental efficiency is needed. The problem is that waste composition and the content of hazardous and organic matter are often ignored when the environmental impacts of an MSW management system are calculated. Therefore, an algorithm for estimating the environmental impact of municipal solid waste treatment based on waste composition and properties is a question of considerable importance. The main difficulty in performing an environmental impact calculation that accounts for MSW composition is the evaluation of the emissions per unit of waste. Waste component content and the biodegradable carbon content of every component are taken into account as the basic waste features for emission estimation. Methane generation potential is calculated as a function of biodegradable carbon content. Environmental impacts of waste treatment at a manual sorting plant in Yekaterinburg are given as an example. A waste composition analysis was carried out there in 2012. Material flow analysis allowed the mass balance of the process to be clarified. About 10% of the incoming waste mass leaves the waste management system as recyclables and accounts for the reduction in environmental impacts. 1.24% of biodegradable carbon does not reach landfills, which means that the production of about ten cubic metres of biogas per ton of incoming MSW is prevented. Converted into monetary terms, this amounts to 47.1 rubles per ton of MSW, or about 4.7 million rubles annually.

  3. Identifications of Carcass Characteristic for Estimating the Composition of Beef Carcass

    OpenAIRE

    Hafid, H; Gurnadi, R.E; Priyanto, R; Saefuddin, A

    2010-01-01

    The research aimed to identify carcass characteristics that can be used for estimating the composition of beef carcass. A total of 165 Brahman crossbred cattle were used in this research. Carcass characteristics were weight of a half cold carcass (WC), ranging from 96 to 151 kg; loin eye area (LEA), ranging from 22.09 to 304.8 mm2; 12th rib fat thickness (FT12), ranging from 0.80 to 2.90 mm; meat, ranging from 53.55 to 90.10 kg; and carcass fat, ranging from 5.54 to 39.72 kg. Result showed that a half weight cold carcass...

  4. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  5. Estimation of Supraglacial Dust and Debris Geochemical Composition via Satellite Reflectance and Emissivity

    Science.gov (United States)

    Casey, Kimberly Ann; Kaab, Andreas

    2012-01-01

    We demonstrate spectral estimation of supraglacial dust, debris, ash and tephra geochemical composition from glaciers and ice fields in Iceland, Nepal, New Zealand and Switzerland. Surface glacier material was collected and analyzed via X-ray fluorescence spectroscopy (XRF) and X-ray diffraction (XRD) for geochemical composition and mineralogy. In situ data was used as ground truth for comparison with satellite derived geochemical results. Supraglacial debris spectral response patterns and emissivity-derived silica weight percent are presented. Qualitative spectral response patterns agreed well with XRF elemental abundances. Quantitative emissivity estimates of supraglacial SiO2 in continental areas were 67% (Switzerland) and 68% (Nepal), while volcanic supraglacial SiO2 averages were 58% (Iceland) and 56% (New Zealand), yielding general agreement. Ablation season supraglacial temperature variation due to differing dust and debris type and coverage was also investigated, with surface debris temperatures ranging from 5.9 to 26.6 °C in the study regions. Applications of the supraglacial geochemical reflective and emissive characterization methods include glacier areal extent mapping, debris source identification, glacier kinematics and glacier energy balance considerations.

  6. Estimation of Supraglacial Dust and Debris Geochemical Composition via Satellite Reflectance and Emissivity

    Directory of Open Access Journals (Sweden)

    Kimberly Casey

    2012-09-01

    Full Text Available We demonstrate spectral estimation of supraglacial dust, debris, ash and tephra geochemical composition from glaciers and ice fields in Iceland, Nepal, New Zealand and Switzerland. Surface glacier material was collected and analyzed via X-ray fluorescence spectroscopy (XRF) and X-ray diffraction (XRD) for geochemical composition and mineralogy. In situ data was used as ground truth for comparison with satellite derived geochemical results. Supraglacial debris spectral response patterns and emissivity-derived silica weight percent are presented. Qualitative spectral response patterns agreed well with XRF elemental abundances. Quantitative emissivity estimates of supraglacial SiO2 in continental areas were 67% (Switzerland) and 68% (Nepal), while volcanic supraglacial SiO2 averages were 58% (Iceland) and 56% (New Zealand), yielding general agreement. Ablation season supraglacial temperature variation due to differing dust and debris type and coverage was also investigated, with surface debris temperatures ranging from 5.9 to 26.6 °C in the study regions. Applications of the supraglacial geochemical reflective and emissive characterization methods include glacier areal extent mapping, debris source identification, glacier kinematics and glacier energy balance considerations.

  7. Body composition of lactating and dry Holstein cows estimated by deuterium dilution

    International Nuclear Information System (INIS)

    Martin, R.A.; Ehle, F.R.

    1986-01-01

    In three experiments patterns of water turnover and body composition estimated by deuterium oxide were studied in Holstein cows. In the first experiment, four lactating cows were infused with deuterium oxide, and blood samples were taken during 4-d collection. Milking was stopped; cows were reinfused with deuterium oxide and resampled. Slopes of deuterium oxide dilution curves indicated lactating cows turned water over more rapidly than nonlactating cows. In the second experiment with the same four cows, during 4-d collection, deuterium oxide concentrations in milk, urine, and feces showed dilution patterns similar to deuterium oxide in blood. Sampling milk may be an alternative to sampling blood. In the third experiment, 36 Holstein cows were fed 55, 65, or 75% alfalfa, smooth bromegrass, or equal parts of each forage as total mixed rations; remaining portions of rations were a grain mixture. Body composition was estimated at -1, 1, 2, 3, 4, and 5 mo postpartum. Empty body water, protein, mineral, fat, and fat percentage decreased from prepartum to postpartum. First calf heifers contained less empty body water, protein, and mineral than older cows. Cows fed diets with 55% forage had more body fat than those fed diets with 75% forage. Cows fed alfalfa-based diets had more gastrointestinal fill regardless of grain than cows fed diets that contained alfalfa and smooth bromegrass. Gastrointestinal fill of cows increased from prepartum to 5 mo postpartum

  8. Estimation of lean and fat composition of pork ham using image processing measurements

    Science.gov (United States)

    Jia, Jiancheng; Schinckel, Allan P.; Forrest, John C.

    1995-01-01

    This paper presents a method of estimating the lean and fat composition in pork ham from cross-sectional area measurements using image processing technology. The relationship between the quantity of ham lean and fat mass with the ham lean and fat areas was studied. The prediction equations for pork ham composition based on the ham cross-sectional area measurements were developed. The results show that ham lean weight was related to the ham lean area (r = 0.75, P lean weight was highly related to the product of ham total weight times percentage ham lean area (r = 0.96, P product of ham total weight times percentage ham fat area (r = 0.88, P lean weight was trimmed wholesale ham weight and percentage ham fat area with a coefficient of determination of 92%. The best combination of independent variables for estimating ham fat weight was trimmed wholesale ham weight and percentage ham fat area with a coefficient of determination of 78%. Prediction equations with either two or three independent variables did not significantly increase the accuracy of prediction. The results of this study indicate that the weight of ham lean and fat could be predicted from ham cross-sectional area measurements using image analysis in combination with wholesale ham weight.

  9. Effective Mechanical Property Estimation of Composite Solid Propellants Based on VCFEM

    Directory of Open Access Journals (Sweden)

    Liu-Lei Shen

    2018-01-01

    Full Text Available A solid rocket motor is one of the critical components of solid missiles, and its life and reliability mostly depend on the mechanical behavior of a composite solid propellant (CSP). Effective mechanical properties are critical material constants to analyze the structural integrity of propellant grain. They are estimated by a numerical method that combines the Voronoi cell finite element method (VCFEM) and the homogenization method in the present paper. The correctness of this combined method has been validated by comparing with a standard finite element method and conventional theoretical models. The effective modulus and the effective Poisson's ratio of a CSP varying with volume fraction and component material properties are estimated. The result indicates that the variations of the volume fraction of inclusions and the properties of the matrix have obvious influences on the effective mechanical properties of a CSP. The microscopic numerical analysis method proposed in this paper can also be used to provide references for the design and the analysis of other large volume fraction composite materials.

  10. Crustal composition in the Hidaka Metamorphic Belt estimated from seismic velocity by laboratory measurements

    Science.gov (United States)

    Yamauchi, K.; Ishikawa, M.; Sato, H.; Iwasaki, T.; Toyoshima, T.

    2015-12-01

    To understand the dynamics of the lithosphere in subduction systems, knowledge of rock composition is important. However, the rock composition of the overriding plate is still poorly understood. An effective method to estimate the rock composition of the lithosphere is to compare elastic wave velocities measured under high pressure and temperature conditions with seismic velocities obtained from active-source experiments and earthquake observations. Due to an arc-arc collision in central Hokkaido, middle to lower crust is exposed along the Hidaka Metamorphic Belt (HMB), providing exceptional opportunities to study the crustal composition of an island arc. Across the HMB, a P-wave velocity model has been constructed by refraction/wide-angle reflection seismic profiling (Iwasaki et al., 2004). Furthermore, based on the interpretation of the crustal structure (Ito, 2000), we can follow a continuous path from the surface to the middle-lower crust. We collected representative rock samples from the HMB and measured ultrasonic P-wave (Vp) and S-wave (Vs) velocities under pressures up to 1.0 GPa in a temperature range from 25 to 400 °C. For example, the Vp values measured at 25 °C and 0.5 GPa are 5.88 km/s for the granite (74.29 wt.% SiO2), 6.02-6.34 km/s for the tonalites (66.31-68.92 wt.% SiO2), 6.34 km/s for the gneiss (64.69 wt.% SiO2), 6.41-7.05 km/s for the amphibolites (50.06-51.13 wt.% SiO2), and 7.42 km/s for the mafic granulite (50.94 wt.% SiO2). The Vp of the tonalites showed a correlation with SiO2 content (wt.%). Comparing with the velocity profiles across the HMB (Iwasaki et al., 2004), we estimate that the lower to middle crust consists of amphibolite and tonalite, and the estimated acoustic impedance contrast between them suggests the existence of a clear reflective boundary, which accords well with the obtained seismic reflection profile (Iwasaki et al., 2014). The same tendency is obtained when comparing the measured Vp/Vs ratios with the Vp/Vs ratio structure model

  11. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  12. Updated folate data in the Dutch Food Composition Database and implications for intake estimates

    Directory of Open Access Journals (Sweden)

    Susanne Westenbrink

    2012-04-01

    Full Text Available Background and objective: Nutrient values are influenced by the analytical method used. Food folate measured by high performance liquid chromatography (HPLC) or by microbiological assay (MA) yields different results, with in general higher results from MA than from HPLC. This leads to the question of how to deal with different analytical methods in compiling standardised and internationally comparable food composition databases? A recent inventory on folate in European food composition databases indicated that currently MA is more widely used than HPLC. Since older Dutch values are produced by HPLC and newer values by MA, analytical methods and procedures for compiling folate data in the Dutch Food Composition Database (NEVO) were reconsidered and folate values were updated. This article describes the impact of this revision of folate values in the NEVO database as well as the expected impact on the folate intake assessment in the Dutch National Food Consumption Survey (DNFCS). Design: The folate values were revised by replacing HPLC with MA values from recent Dutch analyses. Previously, MA folate values taken from foreign food composition tables had been recalculated to the HPLC level, assuming a 27% lower value from HPLC analyses. These recalculated values were replaced by the original MA values. Dutch HPLC and MA values were compared to each other. Folate intake was assessed for a subgroup within the DNFCS to estimate the impact of the update. Results: In the updated NEVO database nearly all folate values were produced by MA or derived from MA values, which resulted in an average increase of 24%. The median habitual folate intake in young children was increased by 11–15% using the updated folate values. Conclusion: The current approach for folate in NEVO resulted in more transparency in data production and documentation and higher comparability among European databases. Results of food consumption surveys are expected to show higher folate intakes

  13. In vivo body composition estimation in nongravid and reproducing first-litter sows with deuterium oxide

    International Nuclear Information System (INIS)

    Shields, R.G. Jr.; Mahan, D.C.; Byers, F.M.

    1984-01-01

    An experiment was conducted with 64 first-litter sows to evaluate the efficacy of a D2O dilution procedure for measuring in vivo body composition during the reproduction cycle. Eight gilts were each infused at breeding, 57 and 105 d postcoitum and at 5 and 25 d postpartum, with equivalent numbers of nongravid controls infused at corresponding periods except at 5 d postpartum. Results from D2O dilution were compared with body water estimates obtained from chemical analysis. An early-equilibrating D2O pool (before 15 min) was similar quantitatively to empty body (ingesta free) water in nongravid and lactating animals, but not in pregnant sows. Because of inconsistent D2O equilibration patterns in gravid sows, the early pool was considered to have equilibrated with part but not all of the water in the conceptus products. Total body D2O space measurement obtained from data following equilibration of D2O in the entire body (1 to 2 h) overestimated total body water (including gastrointestinal water) by approximately 19%. Coefficients of determination for equations relating total body D2O space to empty body and maternal body water were 0.96 and 0.88, respectively, in gestating sows and 0.67 and 0.74, respectively, for lactating sows, while coefficients of variation were below 6% in all cases. Prediction equations were developed to estimate empty and maternal body components (protein, fat and ash) from body weight and D2O space. Accuracy of protein and ash weight prediction is lowest with this procedure because it involves the composite error of estimation of the other body components

  14. Estimating body weight and body composition of chickens by using noninvasive measurements.

    Science.gov (United States)

    Latshaw, J D; Bishop, B L

    2001-07-01

    The major objective of this research was to develop equations to estimate BW and body composition using measurements taken with inexpensive instruments. We used five groups of chickens that were created with different genetic stocks and feeding programs. Four of the five groups were from broiler genetic stock, and one was from sex-linked heavy layers. The goal was to sample six males from each group when the group weight was 1.20, 1.75, and 2.30 kg. Each male was weighed and measured for back length, pelvis width, circumference, breast width, keel length, and abdominal skinfold thickness. A cloth tape measure, calipers, and skinfold calipers were used for measurement. Chickens were scanned for total body electrical conductivity (TOBEC) before being euthanized and frozen. Six females were selected at weights similar to those for males and were measured in the same way. Each whole chicken was ground, and a portion of ground material of each was used to measure water, fat, ash, and energy content. Multiple linear regression was used to estimate BW from body measurements. The best single measurement was pelvis width, with an R2 = 0.67. Inclusion of three body measurements in an equation resulted in R2 = 0.78 and the following equation: BW (g) = -930.0 + 68.5 (breast, cm) + 48.5 (circumference, cm) + 62.8 (pelvis, cm). The best single measurement to estimate body fat was abdominal skinfold thickness, expressed as a natural logarithm. Inclusion of weight and skinfold thickness resulted in R2 = 0.63 for body fat according to the following equation: fat (%) = 24.83 + 6.75 (skinfold, ln cm) - 3.87 (wt, kg). Inclusion of the result of TOBEC and the effect of sex improved the R2 to 0.78 for body fat. Regression analysis was used to develop additional equations, based on fat, to estimate water and energy contents of the body. The body water content (%) = 72.1 - 0.60 (body fat, %), and body energy (kcal/g) = 1.097 + 0.080 (body fat, %). The results of the present study
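
    The prediction equations quoted in this record can be applied directly; the short sketch below simply codes them up with input units as stated in the abstract (measurements in cm, weight in kg, skinfold thickness entered on the natural-log scale). The example input values are illustrative.

```python
import math

def body_weight_g(breast_cm, circumference_cm, pelvis_cm):
    # BW (g) = -930.0 + 68.5*(breast, cm) + 48.5*(circumference, cm) + 62.8*(pelvis, cm)
    return -930.0 + 68.5 * breast_cm + 48.5 * circumference_cm + 62.8 * pelvis_cm

def body_fat_pct(skinfold_cm, body_weight_kg):
    # fat (%) = 24.83 + 6.75*(skinfold, ln cm) - 3.87*(weight, kg)
    return 24.83 + 6.75 * math.log(skinfold_cm) - 3.87 * body_weight_kg

def body_water_pct(fat_pct):
    # body water (%) = 72.1 - 0.60*(body fat, %)
    return 72.1 - 0.60 * fat_pct

def body_energy_kcal_per_g(fat_pct):
    # body energy (kcal/g) = 1.097 + 0.080*(body fat, %)
    return 1.097 + 0.080 * fat_pct

# Illustrative input values for one bird
bw = body_weight_g(breast_cm=10.5, circumference_cm=28.0, pelvis_cm=7.5)
fat = body_fat_pct(skinfold_cm=0.4, body_weight_kg=bw / 1000.0)
print(bw, fat, body_water_pct(fat), body_energy_kcal_per_g(fat))
```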

  15. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  16. Estimation of the duodenal flow of microbial nitrogen in ruminants based on the chemical composition of forages: a literature review

    NARCIS (Netherlands)

    Gosselink, J.M.J.; Poncet, C.; Dulphy, J.P.; Cone, J.W.

    2003-01-01

    The objective of this study was to evaluate the estimation of the duodenal flow of microbial nitrogen (N) in ruminants fed forage only, per kilogram of dry matter (DM) intake, which is the yield of microbial protein (YMP). The estimation was based on the chemical composition of forages. A data file

  17. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  18. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  19. Background estimation in short-wave region during determination of total sample composition by x-ray fluorescence method

    International Nuclear Information System (INIS)

    Simakov, V.A.; Kordyukov, S.V.; Petrov, E.N.

    1988-01-01

    A method of background estimation in the short-wave spectral region during determination of total sample composition by the X-ray fluorescence method is described. Thirteen types of different rocks with considerable variations in base composition and Zr, Nb, Th, U contents below 7×10⁻³% are investigated. The suggested method of background accounting provides a smaller statistical error of the background estimation than a direct isolated measurement, and the reliability of its determination in the short-wave region is independent of the sample base. The possibilities of the suggested method are assessed for artificial mixtures whose main-component content corresponds to technological concentrates of niobium, zirconium and tantalum

  20. Performances of the likelihood-ratio classifier based on different data modelings

    NARCIS (Netherlands)

    Chen, C.; Veldhuis, Raymond N.J.

    2008-01-01

    The classical likelihood ratio classifier easily collapses in many biometric applications especially with independent training-test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from

  1. Comparing Fatigue Life Estimations of Composite Wind Turbine Blades using different Fatigue Analysis Tools

    DEFF Research Database (Denmark)

    Ardila, Oscar Gerardo Castro; Lennie, Matthew; Branner, Kim

    2015-01-01

    In this paper, fatigue lifetime prediction of the NREL 5MW reference wind turbine is presented. The fatigue response of materials used in selected blade cross sections was obtained by applying macroscopic fatigue approaches and assuming uniaxial stress states. Power production and parked load cases... suggested by the IEC 61400-1 standard were studied employing different load time intervals and by using two novel fatigue tools called ALBdeS and BECAS+F. The aeroelastic loads were defined through aeroelastic simulations performed with both the FAST and HAWC2 tools. The stress spectra at each layer were... calculated employing laminated composite theory and beam cross section methods. The Palmgren-Miner linear damage rule was used to calculate the accumulated damage. The theoretical results produced by both fatigue tools showed a prominent effect of the analysed design load conditions on the estimated lifetime...

  2. A Central Composite Face-Centered Design for Parameters Estimation of PEM Fuel Cell Electrochemical Model

    Directory of Open Access Journals (Sweden)

    Khaled MAMMAR

    2013-11-01

    Full Text Available In this paper, a new approach based on experimental design methodology (DoE) is used to estimate the optimal values of the unknown parameters of a proton exchange membrane fuel cell (PEMFC) electrochemical model. The proposed approach combines the central composite face-centered (CCF) design with a numerical PEMFC electrochemical model. Simulation results obtained using the electrochemical model help to predict the cell voltage in terms of inlet partial pressures of hydrogen and oxygen, stack temperature, and operating current. The model and the CCF design methodology are used for parametric analysis of the electrochemical model. Thus it is possible to evaluate the relative importance of each parameter to the simulation accuracy. Moreover, this methodology is able to define the exact values of the parameters from the manufacturer data. It was tested for the BCS 500-W stack PEM generator, a stack rated at 500 W, manufactured by the American company BCS Technologies FC.
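
    As an illustration of the design side of this approach, the sketch below generates a face-centered central composite (CCF) design in coded units (axial distance alpha = 1) and maps it onto physical factor ranges. The factor list and ranges are illustrative assumptions, not the operating conditions of the paper.

```python
from itertools import product
import numpy as np

def ccf_design(n_factors, n_center=3):
    """Face-centered central composite design in coded units: corners, face centres, centre runs."""
    factorial = np.array(list(product([-1.0, 1.0], repeat=n_factors)))    # 2^k corner points
    axial = np.vstack([s * np.eye(n_factors)[i]
                       for i in range(n_factors) for s in (1.0, -1.0)])   # 2k face centres
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, axial, center])

def decode(coded, lows, highs):
    """Map coded levels in [-1, 1] onto physical factor ranges."""
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    return lows + (coded + 1.0) / 2.0 * (highs - lows)

# Hypothetical factors: H2 partial pressure (atm), O2 partial pressure (atm),
# stack temperature (K), operating current (A)
design = ccf_design(4)
runs = decode(design, lows=[0.5, 0.5, 323.0, 5.0], highs=[2.0, 2.0, 353.0, 25.0])
print(design.shape, runs[:3])
```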

  3. Estimation of computed tomography dose in various phantom shapes and compositions

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Lae [Dept. of Radiological Science, Yonsei University, Seoul (Korea, Republic of)

    2017-03-15

    The purpose of this study was to investigate the CTDI (computed tomography dose index at the center) for various phantom shapes, sizes, and compositions by using GATE (Geant4 Application for Tomographic Emission) simulations. GATE simulations were performed for various phantom shapes (cylinder, elliptical, and hexagonal prism PMMA phantoms) and phantom compositions (water, PMMA, polyethylene, polyoxymethylene) with various diameters (1-50 cm) at various kVp and mAs levels. The CTDI100center values of the cylinder, elliptical, and hexagonal prism phantoms at 120 kVp and 200 mAs were 11.1, 13.4, and 12.2 mGy, respectively. The volumes are the same, but the CTDI100center values differ depending on the type of phantom. The water, PMMA, and polyoxymethylene phantom CTDI100center values were relatively lower as the material density increased. However, in the case of polyethylene, the CTDI100center value was higher than that of PMMA at diameters exceeding 15 cm (CTDI100center: 35.0 mGy), and at diameters greater than 30 cm (CTDI100center: 17.7 mGy) it exceeded that of water. Until now, only limited phantom types have been used to evaluate CT doses. In this study, CTDI100center values were estimated by GATE simulation according to the material and shape of the phantom. CT dosimetry can be estimated more accurately by using various materials and phantom shapes close to the human body.

  4. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  5. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  6. Estimation of PWR spent fuel composition using SCALE and SWAT code systems

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Hee Sung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Kenya, Suyama; Hiroshi, Okuno [Japan Atomic Energy Research Institute, Tokyo (Japan)

    2001-05-01

    Isotopic composition calculations were performed for 26 spent fuel samples from the Obrigheim PWR and 55 spent fuel samples from 7 PWRs using SCALE4.4 SAS2H with 27-, 44- and 238-group cross-section libraries and SWAT with a 107-group cross-section library. For convenience, the ratio of the measured to calculated value was used as a parameter. The four kinds of calculation results were compared with the measured data. For many nuclides important for burnup credit criticality safety evaluation, the four methods applied in this study generally showed good agreement with measurements. More detailed observations showed the following results. Ratios below unity were found for Pu-239 and Pu-241 for 16 selected samples out of the 26 samples from the Obrigheim reactor. Ratios above unity were found for Am-241 for both the 16 and the 55 samples, and for Sm-149 for the 55 samples. For the 26 samples, SWAT generally gave larger ratios than SAS2H, with some exceptions. Based on the measured-to-calculated ratios for the 71 samples of a combined set containing the 16 selected samples and the 55 samples, correction factors to be multiplied with the calculated isotopic compositions were generated for a conservative estimate of the neutron multiplication factor of a system containing PWR spent fuel, taking burnup credit into account.

  7. Non-destructive estimates of soil carbonic anhydrase activity and associated soil water oxygen isotope composition

    Science.gov (United States)

    Jones, Sam P.; Ogée, Jérôme; Sauze, Joana; Wohl, Steven; Saavedra, Noelia; Fernández-Prado, Noelia; Maire, Juliette; Launois, Thomas; Bosc, Alexandre; Wingate, Lisa

    2017-12-01

    The contribution of photosynthesis and soil respiration to net land-atmosphere carbon dioxide (CO2) exchange can be estimated based on the differential influence of leaves and soils on budgets of the oxygen isotope composition (δ18O) of atmospheric CO2. To do so, the activity of carbonic anhydrases (CAs), a group of enzymes that catalyse the hydration of CO2 in soils and plants, needs to be understood. Measurements of soil CA activity typically involve the inversion of models describing the δ18O of CO2 fluxes to solve for the apparent, potentially catalysed, rate of CO2 hydration. This requires information about the δ18O of CO2 in isotopic equilibrium with soil water, typically obtained from destructive, depth-resolved sampling and extraction of soil water. In doing so, an assumption is made about the soil water pool that CO2 interacts with, which may bias estimates of CA activity if incorrect. Furthermore, this can represent a significant challenge in data collection given the potential for spatial and temporal variability in the δ18O of soil water and limited a priori information with respect to the appropriate sampling resolution and depth. We investigated whether we could circumvent this requirement by inferring the rate of CO2 hydration and the δ18O of soil water from the relationship between the δ18O of CO2 fluxes and the δ18O of CO2 at the soil surface measured at different ambient CO2 conditions. This approach was tested through laboratory incubations of air-dried soils that were re-wetted with three waters of different δ18O. Gas exchange measurements were made on these soils to estimate the rate of hydration and the δ18O of soil water, followed by soil water extraction to allow for comparison. Estimated rates of CO2 hydration were 6.8-14.6 times greater than the theoretical uncatalysed rate of hydration, indicating that CA were active in these soils. Importantly, these estimates were not significantly different among water treatments, suggesting

  8. Effect of the choice of food composition table on nutrient estimates: a comparison between the British and American (Chilean) tables.

    Science.gov (United States)

    Garcia, V; Rona, R J; Chinn, S

    2004-06-01

    To determine the level of agreement between the American (Chilean) and British food composition tables in estimating intakes of macronutrients and antioxidants. Information based on a food-frequency questionnaire with emphasis on antioxidants was collected from 95 Chileans aged 24-28 years. Nutritional composition was analysed using the British table of food composition and the American table of food composition modified by Chilean food items. Mean differences and limits of agreement (LOAs) of estimated intake were assessed. Mean differences between the two tables of food composition ranged from 5.3% to 8.9% higher estimates when using the American (Chilean) table for macronutrients. For micronutrients, a bias towards a higher mean was observed for vitamin E, iron and magnesium when the American (Chilean) table was used, but the opposite was observed for vitamin A and selenium. The intra-class correlation coefficient (ICC) ranged from 0.86 (95% confidence interval (CI) 0.81-0.91) to 0.998 (95% CI 0.995-1.00), indicating high to excellent agreement. LOAs for macronutrients and vitamins A and C were satisfactory, as they were sufficiently narrow. There was more uncertainty for other micronutrients. The American table gives relative overestimates of macronutrients in comparison to the British table, but the relative biases for micronutrients are inconsistent. Estimates of agreement between the two food composition tables provide reassurance that results are interchangeable for the majority of nutrients.
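    A minimal sketch of how the mean difference and Bland-Altman 95% limits of agreement reported above can be computed for paired nutrient estimates; the data and variable names below are hypothetical, not taken from the study.

```python
# Sketch: mean difference and Bland-Altman 95% limits of agreement between
# nutrient intakes estimated from two food composition tables.
# The data and variable names are hypothetical, not taken from the study.
import numpy as np

def limits_of_agreement(x, y):
    """Mean difference and 95% limits of agreement between paired estimates."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(0)
energy_american = rng.normal(2200, 300, size=95)            # kcal/day, hypothetical
energy_british = 0.93 * energy_american + rng.normal(0, 60, size=95)

bias, loa = limits_of_agreement(energy_american, energy_british)
print(f"mean difference {bias:.0f} kcal, 95% LOA {loa[0]:.0f} to {loa[1]:.0f} kcal")
```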

  9. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available Empirical likelihood is a very popular method that has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media come to dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained from all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite-sample properties of the estimators. A real-life example is also presented to illustrate the new methodology.

  10. Flow measurement and thrust estimation of a vibrating ionic polymer metal composite

    International Nuclear Information System (INIS)

    Chae, Woojin; Cha, Youngsu; Peterson, Sean D; Porfiri, Maurizio

    2015-01-01

    Ionic polymer metal composites (IPMCs) are an emerging class of soft active materials that are finding growing application as underwater propulsors for miniature biomimetic swimmers. Understanding the hydrodynamics generated by an IPMC vibrating under water is central to the design of such biomimetic swimmers. In this paper, we propose the use of time-resolved particle image velocimetry to detail the fluid kinematics and kinetics in the vicinity of an IPMC vibrating along its fundamental structural mode. The reconstructed pressure field is ultimately used to estimate the thrust produced by the IPMC. The vibration frequency is systematically varied to elucidate the role of the Reynolds number on the flow physics and the thrust production. Experimental results indicate the formation and shedding of vortical structures from the IPMC tip during its vibration. Vorticity shedding is sustained by the pressure gradients along each side of the IPMC, which are most severe in the vicinity of the tip. The mean thrust is found to robustly increase with the Reynolds number, closely following a power law that has been derived from direct three-dimensional numerical simulations. A reduced order distributed model is proposed to describe IPMC underwater vibration and estimate thrust production, offering insight into the physics of underwater propulsion and aiding in the design of IPMC-based propulsors. (paper)

  11. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
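    A minimal sketch of such a likelihood ratio test under a Laplace error assumption: group-specific scale parameters (alternative) are tested against a common scale (null), with the scale MLE taken as the mean absolute deviation from the group median. The grouping convention and the chi-squared reference distribution are standard assumptions of the sketch, not details taken from the paper.

```python
# Sketch: likelihood ratio test of a common Laplace scale across groups
# (group medians used as locations); the chi-squared reference is a standard
# large-sample convention assumed here.
import numpy as np
from scipy import stats

def laplace_loglik(x, mu, b):
    x = np.asarray(x, dtype=float)
    return -x.size * np.log(2.0 * b) - np.abs(x - mu).sum() / b

def laplace_lr_test(groups):
    """Return (LR statistic, p-value) for H0: equal Laplace scales in all groups."""
    residuals = [np.asarray(g, dtype=float) - np.median(g) for g in groups]
    pooled = np.concatenate(residuals)
    b0 = np.abs(pooled).mean()                                # common-scale MLE under H0
    ll_null = sum(laplace_loglik(r, 0.0, b0) for r in residuals)
    ll_alt = sum(laplace_loglik(r, 0.0, np.abs(r).mean()) for r in residuals)
    stat = 2.0 * (ll_alt - ll_null)
    return stat, stats.chi2.sf(stat, df=len(groups) - 1)

rng = np.random.default_rng(1)
group1 = rng.laplace(0.0, 1.0, 200)
group2 = rng.laplace(0.0, 2.0, 200)           # larger scale: heteroscedastic case
print(laplace_lr_test([group1, group2]))
```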

  12. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
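    The sketch below illustrates the general maximum likelihood identification idea that MXLKID implements, not the LRLTRAN/CDC7600 code itself: a nonlinear dynamic model is simulated, a Gaussian likelihood of the noisy measurements is formed, and a numerical optimiser maximises it over the unknown parameters. The model, parameter names and noise level are invented for illustration.

```python
# Sketch of maximum likelihood parameter identification for a nonlinear
# dynamic system (not the MXLKID/LRLTRAN implementation).
import numpy as np
from scipy.optimize import minimize

def simulate(theta, n):
    a, b = theta
    x = np.zeros(n)
    for k in range(n - 1):
        # simple nonlinear state recursion driven by a known input (hypothetical model)
        x[k + 1] = a * np.tanh(x[k]) + b * np.sin(0.2 * k)
    return x

def negative_log_likelihood(params, y):
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)                   # keeps sigma positive
    resid = y - simulate((a, b), y.size)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2 + y.size * np.log(sigma)

rng = np.random.default_rng(2)
y = simulate((0.8, 1.0), 200) + rng.normal(0.0, 0.05, 200)   # noisy measurements

fit = minimize(negative_log_likelihood, x0=[0.5, 0.5, np.log(0.1)],
               args=(y,), method="Nelder-Mead")
print("estimated a, b, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```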

  13. Estimate of compressive strength of an unidirectional composite lamina using cross-ply and angle-ply laminates

    OpenAIRE

    Scafè, M.; Raiteri, G.; Brentari, A.; Dlacic, R.; Troiani, E.; Falaschetti, M. P.; Besseghini, E.

    2014-01-01

    In this work, the compressive strength of a unidirectional lamina of a carbon/epoxy composite material has been estimated using cross-ply and angle-ply laminates. Over the years various methods have been developed to deduce the compressive properties of composite materials reinforced with long fibres. Each of these methods is characterized by a specific way of applying load to the specimen. The method chosen to perform the compression tests is the Wyoming Combined Loading Compr...

  14. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    Science.gov (United States)

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  15. Outlier Detection in Nonlinear Regression with the Likelihood Displacement Statistical Method

    Directory of Open Access Journals (Sweden)

    Siti Tabi'atul Hasanah

    2012-11-01

    Full Text Available An outlier is an observation that differs markedly (is extreme) compared with the other observations, or data that do not follow the general pattern of the model. Outliers sometimes provide information that cannot be provided by the other data, which is why they should not simply be eliminated; an outlier can also be an influential observation. There are many methods that can be used to detect outliers. Previous studies addressed outlier detection in linear regression; here, outlier detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the likelihood displacement (LD) statistic, a method that detects outliers by removing the data suspected of being outliers. The parameters are estimated by the maximum likelihood method, giving the maximum likelihood estimates. Applying the LD method yields the observations suspected of being outliers. The accuracy of the LD method in detecting outliers is then shown by comparing the MSE obtained with LD to the MSE from the regression in general. The test statistic used is Λ; the initial hypothesis is rejected when the observation is confirmed to be an outlier.
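    A minimal sketch of the likelihood displacement idea: refit the model with each observation removed and measure how far the full-data log-likelihood drops when the case-deleted estimates are used. For simplicity the sketch uses ordinary linear regression with Gaussian errors rather than the multiplicative nonlinear model of the paper.

```python
# Sketch of the likelihood displacement (LD) statistic for outlier detection,
# illustrated with ordinary Gaussian linear regression.
import numpy as np

def gaussian_loglik(y, X, beta, sigma2):
    resid = y - X @ beta
    return -0.5 * y.size * np.log(2.0 * np.pi * sigma2) - 0.5 * resid @ resid / sigma2

def likelihood_displacement(y, X):
    n = y.size
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2_full = np.mean((y - X @ beta_full) ** 2)           # ML variance estimate
    ll_full = gaussian_loglik(y, X, beta_full, sigma2_full)
    ld = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        sigma2_i = np.mean((y[keep] - X[keep] @ beta_i) ** 2)
        # LD_i: drop in full-data log-likelihood when case i's estimates are used
        ld[i] = 2.0 * (ll_full - gaussian_loglik(y, X, beta_i, sigma2_i))
    return ld

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 50)
y[10] += 5.0                                                  # planted outlier
X = np.column_stack([np.ones_like(x), x])
print("largest LD at index:", int(np.argmax(likelihood_displacement(y, X))))
```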

  16. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  17. Between-day reliability of a method for non-invasive estimation of muscle composition.

    Science.gov (United States)

    Simunič, Boštjan

    2012-08-01

    Tensiomyography is a method for valid and non-invasive estimation of skeletal muscle fibre type composition. The validity of selected temporal tensiomyographic measures has been well established recently; there is, however, no evidence regarding the method's between-day reliability. The aim of this paper is therefore to establish the between-day repeatability of tensiomyographic measures in three skeletal muscles. For three consecutive days, 10 healthy male volunteers (mean±SD: age 24.6 ± 3.0 years; height 177.9 ± 3.9 cm; weight 72.4 ± 5.2 kg) were examined in a supine position. Four temporal measures (delay, contraction, sustain, and half-relaxation time) and maximal amplitude were extracted from the displacement-time tensiomyogram. A reliability analysis was performed with calculations of bias, random error, coefficient of variation (CV), standard error of measurement, and intra-class correlation coefficient (ICC) with a 95% confidence interval. An analysis of ICC demonstrated excellent agreement (ICCs were over 0.94 in 14 out of 15 tested parameters). However, lower CV was observed in half-relaxation time, presumably because of the specifics of the parameter definition itself. These data indicate that for the three muscles tested, tensiomyographic measurements were reproducible across consecutive test days. Furthermore, we identified the most probable origin of the lowest reliability detected in half-relaxation time. Copyright © 2012 Elsevier Ltd. All rights reserved.
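    The reliability statistics named in this record can be computed from a subjects-by-days matrix with standard formulas; the sketch below uses ICC(3,1) from a two-way ANOVA, the square root of the error mean square as one common SEM definition, and the mean within-subject CV. These conventions are assumptions of the sketch and may differ in detail from the study's analysis.

```python
# Sketch: ICC(3,1), SEM and mean within-subject CV from a subjects-by-days
# matrix; the data below are simulated, not the study's measurements.
import numpy as np

def reliability(data):
    """data: (n_subjects, k_days) matrix of one tensiomyographic parameter."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_days = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_total - ss_subj - ss_days) / ((n - 1) * (k - 1))
    icc = (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)   # ICC(3,1), consistency
    sem = np.sqrt(ms_err)                                     # one common SEM definition
    cv = 100.0 * data.std(axis=1, ddof=1).mean() / grand      # mean within-subject CV (%)
    return icc, sem, cv

rng = np.random.default_rng(4)
subject_level = rng.normal(25.0, 3.0, size=(10, 1))           # e.g. contraction time, ms
three_days = subject_level + rng.normal(0.0, 0.6, size=(10, 3))
print("ICC, SEM, CV%:", reliability(three_days))
```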

  18. Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.

    NARCIS (Netherlands)

    Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.

    1996-01-01

    Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate

  19. Methodology to Estimate the Quantity, Composition, and Management of Construction and Demolition Debris in the United States

    Science.gov (United States)

    This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estima...

  20. Online gas composition estimation in solid oxide fuel cell systems with anode off-gas recycle configuration

    Science.gov (United States)

    Dolenc, B.; Vrečko, D.; Juričić, Ð.; Pohjoranta, A.; Pianese, C.

    2017-03-01

    Degradation and poisoning of solid oxide fuel cell (SOFC) stacks are continuously shortening the lifespan of SOFC systems. Poisoning mechanisms, such as carbon deposition, form a coating layer, hence rapidly decreasing the efficiency of the fuel cells. Gas composition of inlet gases is known to have great impact on the rate of coke formation. Therefore, monitoring of these variables can be of great benefit for overall management of SOFCs. Although measuring the gas composition of the gas stream is feasible, it is too costly for commercial applications. This paper proposes three distinct approaches for the design of gas composition estimators of an SOFC system in anode off-gas recycle configuration which are (i.) accurate, and (ii.) easy to implement on a programmable logic controller. Firstly, a classical approach is briefly revisited and problems related to implementation complexity are discussed. Secondly, the model is simplified and adapted for easy implementation. Further, an alternative data-driven approach for gas composition estimation is developed. Finally, a hybrid estimator employing experimental data and 1st-principles is proposed. Despite the structural simplicity of the estimators, the experimental validation shows a high precision for all of the approaches. Experimental validation is performed on a 10 kW SOFC system.

  1. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also

  2. Dietary compositions and their seasonal shifts in Japanese resident birds, estimated from the analysis of volunteer monitoring data.

    Directory of Open Access Journals (Sweden)

    Tetsuro Yoshikawa

    Full Text Available Determining the composition of a bird's diet and its seasonal shifts are fundamental for understanding the ecology and ecological functions of a species. Various methods have been used to estimate the dietary compositions of birds, which have their own advantages and disadvantages. In this study, we examined the possibility of using long-term volunteer monitoring data as the source of dietary information for 15 resident bird species in Kanagawa Prefecture, Japan. The data were collected from field observations reported by volunteers of regional naturalist groups. Based on these monitoring data, we calculated the monthly dietary composition of each bird species directly, and we also estimated unidentified items within the reported foraging episodes using Bayesian models that contained additional information regarding foraging locations. Next, to examine the validity of the estimated dietary compositions, we compared them with the dietary information for focal birds based on stomach analysis methods, collected from past literatures. The dietary trends estimated from the monitoring data were largely consistent with the general food habits determined from the previous studies of focal birds. Thus, the estimates based on the volunteer monitoring data successfully detected noticeable seasonal shifts in many of the birds from plant materials to animal diets during spring-summer. Comparisons with stomach analysis data supported the qualitative validity of the monitoring-based dietary information and the effectiveness of the Bayesian models for improving the estimates. This comparison suggests that one advantage of using monitoring data is its ability to detect dietary items such as fleshy fruits, flower nectar, and vertebrates. These results emphasize the potential importance of observation data collecting and mining by citizens, especially free descriptive observation data, for use in bird ecology studies.

  3. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in the "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
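    A minimal sketch of evaluating a Gaussian copula log-likelihood for autocorrelated model errors: marginals are handled through the empirical CDF (the study uses a semiparametric estimate) and dependence through a correlation matrix, here given a simple AR(1) structure as an assumption for illustration only.

```python
# Sketch: Gaussian copula log-likelihood for autocorrelated model errors;
# the AR(1) correlation structure and the data are illustrative assumptions.
import numpy as np
from scipy import stats

def gaussian_copula_loglik(errors, rho):
    """Gaussian copula log-density at the empirical uniform scores of the errors."""
    n = errors.shape[0]
    u = (stats.rankdata(errors) - 0.5) / n                    # empirical CDF, kept in (0, 1)
    z = stats.norm.ppf(u)                                     # normal scores
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    R = rho ** lags                                           # AR(1)-structured correlation
    _, logdet = np.linalg.slogdet(R)
    quad = z @ (np.linalg.solve(R, z) - z)                    # z'(R^-1 - I)z
    return -0.5 * (logdet + quad)

rng = np.random.default_rng(5)
errors = np.convolve(rng.normal(size=300), [1.0, 0.6], mode="same")  # autocorrelated errors
print("copula log-likelihood at rho = 0.5:", gaussian_copula_loglik(errors, 0.5))
```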

  4. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of them are constrained and correlated.

  5. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  6. A comparison of maximum entropy and maximum likelihood estimation

    NARCIS (Netherlands)

    Oude Lansink, A.G.J.M.

    1999-01-01

    Data on entrepreneurship on Dutch arable farms were analysed with two estimation approaches, which were compared with each other in terms of predictive accuracy and price elasticities

  7. Evaluation Methodologies for Estimating the Likelihood of Program Implementation Failure

    Science.gov (United States)

    Durand, Roger; Decker, Phillip J.; Kirkman, Dorothy M.

    2014-01-01

    Despite our best efforts as evaluators, program implementation failures abound. A wide variety of valuable methodologies have been adopted to explain and evaluate the "why" of these failures. Yet, typically these methodologies have been employed concurrently (e.g., project monitoring) or to the post-hoc assessment of program activities.…

  8. Debris Likelihood, based on GhostNet, NASA Aqua MODIS, and GOES Imager, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Debris Likelihood Index (Estimated) is calculated from GhostNet, NASA Aqua MODIS Chl a and NOAA GOES Imager SST data. THIS IS AN EXPERIMENTAL PRODUCT: intended...

  9. A biclustering algorithm for binary matrices based on penalized Bernoulli likelihood

    KAUST Repository

    Lee, Seokho; Huang, Jianhua Z.

    2013-01-01

    We propose a new biclustering method for binary data matrices using the maximum penalized Bernoulli likelihood estimation. Our method applies a multi-layer model defined on the logits of the success probabilities, where each layer represents a

  10. Semi-parametric estimation of random effects in a logistic regression model using conditional inference

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2016-01-01

    This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...

  11. Composites

    International Nuclear Information System (INIS)

    Kasen, M.B.

    1983-01-01

    This chapter discusses the roles of composite laminates and aggregates in cryogenic technology. Filamentary-reinforced composites are emphasized because they are the most widely used composite materials. Topics considered include composite systems and terminology, design and fabrication, composite failure, high-pressure reinforced plastic laminates, low-pressure reinforced plastics, reinforced metals, selectively reinforced structures, the effect of cryogenic temperatures, woven-fabric and random-mat composites, uniaxial fiber-reinforced composites, composite joints in cryogenic structures, joining techniques at room temperature, radiation effects, testing laminates at cryogenic temperatures, static and cyclic tensile testing, static and cyclic compression testing, interlaminar shear testing, secondary property tests, and concrete aggregates. It is suggested that cryogenic composite technology would benefit from the development of a fracture mechanics model for predicting the fitness-for-purpose of polymer-matrix composite structures

  12. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  13. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  14. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when methods variances are considered to be known an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  15. A new estimator for sensitivity analysis of model output: An application to the e-business readiness composite indicator

    International Nuclear Information System (INIS)

    Tarantola, Stefano; Nardo, Michela; Saisana, Michaela; Gatelli, Debora

    2006-01-01

    In this paper we propose and test a generalisation of the method originally proposed by Sobol', and recently extended by Saltelli, to estimate the first-order and total effect sensitivity indices. Exploiting the symmetries and the dualities of the formulas, we obtain additional estimates of first-order and total indices at no extra computational cost. We test the technique on a case study involving the construction of a composite indicator of e-business readiness, which is part of the initiative 'e-Readiness of European enterprises' of the European Commission 'e-Europe 2005' action plan. The method is used to assess the contribution of uncertainties in (a) the weights of the component indicators and (b) the imputation of missing data on the composite indicator values for several European countries
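    The sketch below shows standard Monte Carlo estimators for first-order and total-effect sensitivity indices in the Sobol'/Saltelli family; the paper's contribution, reusing sample symmetries to obtain additional estimates at no extra model runs, is not reproduced. The toy model is a weighted sum standing in for a composite indicator.

```python
# Sketch: standard estimators of first-order (Saltelli 2010) and total-effect
# (Jansen) sensitivity indices; toy weighted-sum model, invented weights.
import numpy as np

def sobol_indices(model, d, n=10000, seed=0):
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    first, total = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                                # column i taken from B
        fABi = model(ABi)
        first[i] = np.mean(fB * (fABi - fA)) / var         # first-order index
        total[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-effect index
    return first, total

weights = np.array([0.5, 0.3, 0.2])                        # stand-in composite indicator
first, total = sobol_indices(lambda X: X @ weights, d=3)
print(np.round(first, 2), np.round(total, 2))
```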

  16. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Full Text Available Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R2 of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  17. A new estimator for sensitivity analysis of model output: An application to the e-business readiness composite indicator

    Energy Technology Data Exchange (ETDEWEB)

    Tarantola, Stefano [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]. E-mail: stefano.tarantola@jrc.it; Nardo, Michela [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]; Saisana, Michaela [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]; Gatelli, Debora [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Applied Statistics Group, TP 361, Via E. Fermi, 1, Ispra (VA), 21020 (Italy)]

    2006-10-15

    In this paper we propose and test a generalisation of the method originally proposed by Sobol', and recently extended by Saltelli, to estimate the first-order and total effect sensitivity indices. Exploiting the symmetries and the dualities of the formulas, we obtain additional estimates of first-order and total indices at no extra computational cost. We test the technique on a case study involving the construction of a composite indicator of e-business readiness, which is part of the initiative 'e-Readiness of European enterprises' of the European Commission 'e-Europe 2005' action plan. The method is used to assess the contribution of uncertainties in (a) the weights of the component indicators and (b) the imputation of missing data on the composite indicator values for several European countries.

  18. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

    Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of

  19. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures pixel intensities. In fetal ultrasound images, the preservation of edges and local fine detail is particularly important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter therefore has to be devised that efficiently suppresses speckle noise while preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and by using differently shaped quadrilateral kernels to estimate the noise-free pixel from its neighbourhood. The performance of several filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profiles. The proposed filter surpasses the conventional filters.
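    As a point of reference, the sketch below applies a plain Rayleigh maximum-likelihood estimate over a square neighbourhood (the Rayleigh scale MLE from the window samples); the statistical tuning parameters and quadrilateral kernels that constitute the proposed generalisation are not reproduced, and the toy image is invented.

```python
# Reference sketch: plain Rayleigh maximum-likelihood neighbourhood filter with
# a square kernel; not the proposed quadrilateral-kernel generalisation.
import numpy as np
from scipy.ndimage import generic_filter

def rayleigh_mle(window):
    # MLE of the Rayleigh scale parameter from the neighbourhood samples
    return np.sqrt(np.mean(window ** 2) / 2.0)

def despeckle(image, size=5):
    return generic_filter(image.astype(float), rayleigh_mle, size=size)

rng = np.random.default_rng(6)
clean = np.tile(np.linspace(50.0, 200.0, 128), (128, 1))      # toy "ultrasound" image
speckle = rng.rayleigh(scale=1.0, size=clean.shape) / np.sqrt(np.pi / 2.0)  # unit-mean noise
print(despeckle(clean * speckle, size=5).shape)
```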

  20. Physical constraints on the likelihood of life on exoplanets

    Science.gov (United States)

    Lingam, Manasvi; Loeb, Abraham

    2018-04-01

    One of the most fundamental questions in exoplanetology is to determine whether a given planet is habitable. We estimate the relative likelihood of a planet's propensity towards habitability by considering key physical characteristics such as the role of temperature on ecological and evolutionary processes, and atmospheric losses via hydrodynamic escape and stellar wind erosion. From our analysis, we demonstrate that Earth-sized exoplanets in the habitable zone around M-dwarfs seemingly display much lower prospects of being habitable relative to Earth, owing to the higher incident ultraviolet fluxes and closer distances to the host star. We illustrate our results by specifically computing the likelihood (of supporting life) for the recently discovered exoplanets, Proxima b and TRAPPIST-1e, which we find to be several orders of magnitude smaller than that of Earth.

  1. How to apply the optimal estimation method to your lidar measurements for improved retrievals of temperature and composition

    Science.gov (United States)

    Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.

    2018-04-01

    The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget, plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show how to use the OEM for temperature and composition retrievals with Rayleigh-scatter, Raman-scatter and DIAL lidars.
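    For readers unfamiliar with the OEM, the sketch below shows a linear Rodgers-type update combining a measurement with an a priori state, returning the retrieved state, its covariance and the averaging kernel. Lidar forward models are nonlinear, so in practice this step is iterated (Gauss-Newton); the matrices and numbers here are invented.

```python
# Sketch: one linear optimal-estimation update, y = K x + noise with prior x_a.
import numpy as np

def oem_linear(y, K, x_a, S_a, S_e):
    """Return retrieved state, retrieval covariance and averaging kernel."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    G = S_hat @ K.T @ S_e_inv                       # gain matrix
    x_hat = x_a + G @ (y - K @ x_a)
    return x_hat, S_hat, G @ K                      # averaging kernel A = G K

rng = np.random.default_rng(7)
K = rng.random((20, 5))                             # toy forward-model Jacobian
x_true = np.array([210.0, 220.0, 235.0, 250.0, 265.0])   # e.g. a temperature profile, K
y = K @ x_true + rng.normal(0.0, 1.0, 20)
x_hat, S_hat, A = oem_linear(y, K, x_a=np.full(5, 240.0),
                             S_a=400.0 * np.eye(5), S_e=np.eye(20))
print(np.round(x_hat, 1))
```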

  2. Estimation of 1RM for knee extension based on the maximal isometric muscle strength and body composition

    OpenAIRE

    Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo

    2017-01-01

    [Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furth...

  3. Estimation of 1RM for knee extension based on the maximal isometric muscle strength and body composition.

    Science.gov (United States)

    Kanada, Yoshikiyo; Sakurai, Hiroaki; Sugiura, Yoshito; Arai, Tomoaki; Koyama, Soichiro; Tanabe, Shigeo

    2017-11-01

    [Purpose] To create a regression formula in order to estimate 1RM for knee extensors, based on the maximal isometric muscle strength measured using a hand-held dynamometer and data regarding the body composition. [Subjects and Methods] Measurement was performed in 21 healthy males in their twenties to thirties. Single regression analysis was performed, with measurement values representing 1RM and the maximal isometric muscle strength as dependent and independent variables, respectively. Furthermore, multiple regression analysis was performed, with data regarding the body composition incorporated as another independent variable, in addition to the maximal isometric muscle strength. [Results] Through single regression analysis with the maximal isometric muscle strength as an independent variable, the following regression formula was created: 1RM (kg)=0.714 + 0.783 × maximal isometric muscle strength (kgf). On multiple regression analysis, only the total muscle mass was extracted. [Conclusion] A highly accurate regression formula to estimate 1RM was created based on both the maximal isometric muscle strength and body composition. Using a hand-held dynamometer and body composition analyzer, it was possible to measure these items in a short time, and obtain clinically useful results.
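    The single-regression formula reported above can be applied directly, as in the sketch below; the multiple-regression variant with total muscle mass is not reproduced because its coefficients are not given in this record.

```python
# Direct use of the reported single-regression formula (1RM in kg from maximal
# isometric knee-extension strength in kgf measured with a hand-held dynamometer).
def estimate_1rm(isometric_strength_kgf: float) -> float:
    return 0.714 + 0.783 * isometric_strength_kgf

print(f"{estimate_1rm(30.0):.1f} kg")   # 30 kgf isometric strength -> about 24.2 kg
```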

  4. Estimating the Initial Crack Size in a Particulate Composite Material: An Analytical and Experimental Approach

    National Research Council Canada - National Science Library

    Liu, C

    2001-01-01

    The objectives in this report are to: determine the inherent critical initial crack size in a particulate composite material, determine the statistical distribution function of the inherent critical crack size, normal distribution, two...

  5. Thermodynamic properties calculation of the flue gas based on its composition estimation for coal-fired power plants

    International Nuclear Information System (INIS)

    Xu, Liang; Yuan, Jingqi

    2015-01-01

    Thermodynamic properties of the working fluid and the flue gas play an important role in the thermodynamic calculation for the boiler design and the operational optimization in power plants. In this study, a generic approach for online calculation of the thermodynamic properties of the flue gas, based on estimation of its composition, is proposed. It covers the full operation scope of the flue gas, including the two-phase state when the temperature becomes lower than the dew point. The composition of the flue gas is online estimated based on the routinely offline assays of the coal samples and the online measured oxygen mole fraction in the flue gas. The relative error of the proposed approach is found to be less than 1% when the standard data set of the dry and humid air and the typical flue gas is used for validation. Also, the sensitivity analysis of the individual component and the influence of the measurement error of the oxygen mole fraction on the thermodynamic properties of the flue gas are presented. - Highlights: • Flue gas thermodynamic properties in coal-fired power plants are online calculated. • Flue gas composition is online estimated using the measured oxygen mole fraction. • The proposed approach covers full operation scope, including two-phase flue gas. • Component sensitivity to the thermodynamic properties of flue gas is presented.

  6. Deformation of log-likelihood loss function for multiclass boosting.

    Science.gov (United States)

    Kanamori, Takafumi

    2010-09-01

    The purpose of this paper is to study loss functions in multiclass classification. In classification problems, the decision function is estimated by minimizing an empirical loss function, and then, the output label is predicted by using the estimated decision function. We propose a class of loss functions which is obtained by a deformation of the log-likelihood loss function. There are four main reasons why we focus on the deformed log-likelihood loss function: (1) this is a class of loss functions which has not been deeply investigated so far, (2) in terms of computation, a boosting algorithm with a pseudo-loss is available to minimize the proposed loss function, (3) the proposed loss functions provide a clear correspondence between the decision functions and conditional probabilities of output labels, (4) the proposed loss functions satisfy the statistical consistency of the classification error rate which is a desirable property in classification problems. Based on (3), we show that the deformed log-likelihood loss provides a model of mislabeling which is useful as a statistical model of medical diagnostics. We also propose a robust loss function against outliers in multiclass classification based on our approach. The robust loss function is a natural extension of the existing robust loss function for binary classification. A model of mislabeling and a robust loss function are useful to cope with noisy data. Some numerical studies are presented to show the robustness of the proposed loss function. A mathematical characterization of the deformed log-likelihood loss function is also presented. Copyright 2010 Elsevier Ltd. All rights reserved.
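    A minimal sketch of the idea of deforming the multiclass log-likelihood loss: the natural logarithm applied to the predicted class probability is replaced by a deformed (here, q-) logarithm. The specific q-logarithm is only one concrete example of a deformation and is not the family proposed in the paper.

```python
# Sketch: multiclass loss obtained by deforming the log-likelihood loss,
# using the q-logarithm as one illustrative deformation.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def log_q(x, q):
    """Deformed logarithm; q -> 1 recovers the natural logarithm."""
    return np.log(x) if q == 1.0 else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def deformed_loss(scores, labels, q=0.7):
    p_true = softmax(scores)[np.arange(len(labels)), labels]
    return -log_q(p_true, q).mean()

scores = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
labels = np.array([0, 2])
print(deformed_loss(scores, labels, q=1.0), deformed_loss(scores, labels, q=0.7))
```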

  7. Composite Estimation for Single-Index Models with Responses Subject to Detection Limits

    KAUST Repository

    Tang, Yanlin; Wang, Huixia Judy; Liang, Hua

    2017-01-01

    We propose a semiparametric estimator for single-index models with censored responses due to detection limits. In the presence of left censoring, the mean function cannot be identified without any parametric distributional assumptions, but the quantile function is still identifiable at upper quantile levels. To avoid parametric distributional assumption, we propose to fit censored quantile regression and combine information across quantile levels to estimate the unknown smooth link function and the index parameter. Under some regularity conditions, we show that the estimated link function achieves the non-parametric optimal convergence rate, and the estimated index parameter is asymptotically normal. The simulation study shows that the proposed estimator is competitive with the omniscient least squares estimator based on the latent uncensored responses for data with normal errors but much more efficient for heavy-tailed data under light and moderate censoring. The practical value of the proposed method is demonstrated through the analysis of a human immunodeficiency virus antibody data set.

  8. Composite Estimation for Single-Index Models with Responses Subject to Detection Limits

    KAUST Repository

    Tang, Yanlin

    2017-11-03

    We propose a semiparametric estimator for single-index models with censored responses due to detection limits. In the presence of left censoring, the mean function cannot be identified without any parametric distributional assumptions, but the quantile function is still identifiable at upper quantile levels. To avoid parametric distributional assumption, we propose to fit censored quantile regression and combine information across quantile levels to estimate the unknown smooth link function and the index parameter. Under some regularity conditions, we show that the estimated link function achieves the non-parametric optimal convergence rate, and the estimated index parameter is asymptotically normal. The simulation study shows that the proposed estimator is competitive with the omniscient least squares estimator based on the latent uncensored responses for data with normal errors but much more efficient for heavy-tailed data under light and moderate censoring. The practical value of the proposed method is demonstrated through the analysis of a human immunodeficiency virus antibody data set.

  9. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....

  10. Composition

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2011-01-01

    Strategies are open compositions to be realised by improvising musicians. See more about my composition practise in the entry "Composition - General Introduction". Caution: streaming the sound files will in some cases only provide a few minutes' sample. Please DOWNLOAD them to hear them in full...

  11. Composition

    DEFF Research Database (Denmark)

    2014-01-01

    Memory Pieces are open compositions to be realised solo by an improvising musicians. See more about my composition practise in the entry "Composition - General Introduction". Caution: streaming the sound files will in some cases only provide a few minutes' sample. Please DOWNLOAD them to hear them...

  12. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified......-based maximum likelihood inference and the effect of specifying different reference Poisson models....

  13. A single frequency component-based re-estimated MUSIC algorithm for impact localization on complex composite structures

    International Nuclear Information System (INIS)

    Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng

    2015-01-01

    The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit a wide-band performance, giving rise to the difficulty in obtaining the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircrafts further enhances this performance, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impact on complex composite structures with an obviously improved accuracy. (paper)
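    The sketch below illustrates the underlying narrowband MUSIC scanning step for a uniform linear sensor array, consistent with using a single extracted frequency component and an assumed phase velocity (wavelength); the paper's re-estimation step for the anisotropic composite structure is not reproduced, and the array geometry and numbers are invented.

```python
# Sketch: narrowband MUSIC direction scanning with a uniform linear array,
# using a single frequency component and an assumed wavelength.
import numpy as np

def music_spectrum(X, n_sources, spacing, wavelength, angles_deg):
    """X: (n_sensors, n_snapshots) complex narrowband data matrix."""
    R = X @ X.conj().T / X.shape[1]                      # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :-n_sources]                          # noise subspace (small eigenvalues)
    k = 2.0 * np.pi / wavelength
    sensors = np.arange(X.shape[0]) * spacing
    p = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-1j * k * sensors * np.sin(theta))    # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.asarray(p)

rng = np.random.default_rng(8)
wavelength, spacing, true_angle = 0.02, 0.01, 25.0       # metres / degrees, hypothetical
a_true = np.exp(-1j * 2 * np.pi / wavelength * np.arange(8) * spacing
                * np.sin(np.deg2rad(true_angle)))
snapshots = np.outer(a_true, rng.normal(size=200)) + 0.05 * (
    rng.normal(size=(8, 200)) + 1j * rng.normal(size=(8, 200)))
angles = np.arange(-90, 91)
print("estimated impact direction:", angles[np.argmax(music_spectrum(
    snapshots, n_sources=1, spacing=spacing, wavelength=wavelength, angles_deg=angles))])
```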

  14. Validation of equations using anthropometric and bioelectrical impedance for estimating body composition of the elderly

    Directory of Open Access Journals (Sweden)

    Cassiano Ricardo Rech

    2006-08-01

    Full Text Available The increase of the elderly population has enhanced the need for studying aging-related issues. In this context, the analysis of morphological alterations occurring with age has been discussed thoroughly. Evidence indicates that there is little information on valid methods for estimating body composition of senior citizens in Brazil. Therefore, the objective of this study was to cross-validate equations using either anthropometric or bioelectrical impedance (BIA) data for estimation of body fat (%BF) and of fat-free mass (FFM) in a sample of older individuals from Florianópolis-SC, with dual energy x-ray absorptiometry (DEXA) as the criterion measurement. The group was composed of 180 subjects (60 men and 120 women) who participated in four community groups for the elderly and were systematically randomly selected by a telephone interview, with age ranging from 60 to 81 years. The variables stature, body mass, body circumferences, skinfold thickness, reactance and resistance were measured in the morning at the Sports Center of the Federal University of Santa Catarina. The DEXA evaluation was performed in the afternoon at the Diagnostic Imaging Center in Florianópolis-SC. Twenty anthropometric and 8 BIA equations were analyzed for cross-validation. For those equations that estimate body density, the equation of Siri (1961) and the adapted equation by Deurenberg et al. (1989) were used for conversion into %BF. The analyses were performed with the statistical package SPSS, version 11.5, with the level of significance set at 5%. The cross-validation criteria suggested by Lohman (1992) and the graphic dispersion analyses in relation to the mean, as proposed by Bland and Altman (1986), were used. The group presented values for the body mass index (BMI) between 18.4 kg.m-2 and 39.3 kg.m-2. The mean %BF was 23.1% (sd=5.8) for men and 37.3% (sd=6.9) in women, varying from 6% to 51.4%. There were no differences among the estimates of the equations
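    The Siri (1961) conversion from whole-body density to percent body fat mentioned above is a single formula and can be applied directly, as in the sketch below; the Deurenberg et al. adaptation is not reproduced here.

```python
# The Siri (1961) conversion from whole-body density to percent body fat.
def siri_percent_fat(body_density_g_per_ml: float) -> float:
    return (4.95 / body_density_g_per_ml - 4.50) * 100.0

print(f"{siri_percent_fat(1.050):.1f} %BF")   # density 1.050 g/ml -> about 21.4 %BF
```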

  15. Factors controlling carbon isotopic composition of land snail shells estimated from lab culturing experiment

    Science.gov (United States)

    Zhang, Naizhong; Yamada, Keita; Yoshida, Naohiro

    2014-05-01

    Carbon isotopic composition (δ13C) of land snail shell carbonate is widely applied in reconstructing the C3/C4 vegetation distribution of paleo-environments, which is considered to reflect variations of some environmental parameters [1][2][3]. Land snail shell carbon has three potential sources: diet, atmospheric CO2 and ingested carbonate (limestone) [4]. However, their relative contributions to shell carbonate have not yet been well understood [4][5][6][7][8]. More research, especially inter-laboratory culturing experiments, is necessary before this tool can be applied in paleo-environment reconstruction. The land snail species Acusta despecta sieboldiana was collected at Yokohama, Japan, and cultured under suitable conditions to lay eggs. The second generation was raised from eggs to adults over roughly 6-12 months at temperatures of 20°, 25° and 30°, respectively. All of the snails at 25° and 30°, and most of those at 20°, were fed cabbage (a C3 plant) during their life span, while the others were fed corn (a C4 plant). To investigate the effect of ingested carbonate, some of them were fed Ca3(PO4)2 powder while others were fed CaCO3 powder. δ13C of shells was analyzed by isotope ratio mass spectrometry (Thermo Finnigan MAT 253); δ13C of food and snail tissue was measured by cavity ring-down spectroscopy (Picarro G1121-i). At the same time, δ13C of eggshell and newborn snails was analyzed by continuous-flow isotope ratio mass spectrometry (GasBench II). We confirmed that diet, atmospheric CO2 and ingested limestone can all be important sources controlling shell δ13C values, and that temperature can also affect shell carbonate δ13C values. A simple but credible framework was proposed to discuss the mechanism of how each possible source and environmental parameter could affect shell carbonate δ13C values, based on previous works [4][6][8] and this study. According to this framework and some reasonable assumptions, we have estimated the

  16. The estimate of permittivity of anisotropic composites with lamellar inclusions by the self-assessment method

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin

    2015-01-01

    Full Text Available Composites are widely used as structural or thermal protection materials, and also as functional materials and dielectrics in a large number of different electrical devices. One of the most important characteristics of such a composite is its relative permittivity, which depends primarily on the dielectric properties of the inclusions and the matrix as well as on the shape and volume content of the inclusions. In this paper, a mathematical model of the interaction of the electrostatic fields in an isotropic plate and in the surrounding homogeneous anisotropic medium is constructed. This model describes the dielectric properties of the composite with such inclusions. The case in which all lamellar inclusions have the same orientation is considered; it leads to a special type of anisotropy in which the dielectric properties of the composite are transversely isotropic with respect to the direction perpendicular to the inclusions. The shape of the inclusions is represented as an oblate ellipsoid of revolution (spheroid). Transforming the differential equation describing the distribution of the electric potential in the transversely isotropic medium surrounding a spheroidal inclusion into the Laplace equation, with a subsequent transition from the initial spheroid to the given ellipsoid of revolution, allows the self-assessment method to be applied for determining the dielectric properties of the composite. This method sets to zero the averaged perturbation of the electrostatic field in the inclusions and the matrix particles relative to the unperturbed field in the environment. The constructed mathematical model allows us to determine the electrostatic field disturbance in the inclusions and the matrix particles relative to the unperturbed field given in the environment at distances from the inclusions and the matrix particles much larger than their characteristic dimensions. By averaging the perturbation of the electrostatic field in all the

  17. Statistical modelling of survival data with random effects h-likelihood approach

    CERN Document Server

    Ha, Il Do; Lee, Youngjo

    2017-01-01

    This book provides a groundbreaking introduction to the likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with an R package, “frailtyHL” in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...

  18. Efficient Bit-to-Symbol Likelihood Mappings

    Science.gov (United States)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings, which represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. A recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
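
    As a rough illustration of the kind of mapping the record describes (not the patented algorithm itself), the sketch below computes per-bit log-likelihood ratios from per-symbol likelihoods for a Gray-labelled 4-PAM constellation, using either exact log-sum-exp marginalisation or the cheaper max-log approximation; the constellation, bit labels and AWGN noise model are assumptions made for the example.

      # Symbol-to-bit likelihood mapping for a Gray-labelled 4-PAM constellation
      # (illustrative sketch, not the algorithm from the record).
      import numpy as np

      symbols = np.array([-3.0, -1.0, 1.0, 3.0])             # 4-PAM amplitudes
      labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])    # Gray bit labels per symbol

      def bit_llrs(y, noise_var, max_log=False):
          # log-likelihood of each symbol under an AWGN channel
          log_lik = -(y - symbols) ** 2 / (2.0 * noise_var)
          llrs = []
          for k in range(labels.shape[1]):
              l0 = log_lik[labels[:, k] == 0]
              l1 = log_lik[labels[:, k] == 1]
              if max_log:                  # max-log approximation
                  llrs.append(l0.max() - l1.max())
              else:                        # exact marginalisation
                  llrs.append(np.logaddexp.reduce(l0) - np.logaddexp.reduce(l1))
          return np.array(llrs)

      print(bit_llrs(y=0.8, noise_var=0.5))
      print(bit_llrs(y=0.8, noise_var=0.5, max_log=True))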

  19. Likelihood-ratio-based biometric verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    2002-01-01

    This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.

  20. Likelihood Ratio-Based Biometric Verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal.

  1. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine, E-mail: adam.bouland@aya.yale.edu, E-mail: richard.easther@yale.edu, E-mail: krosenfeld@cfa.harvard.edu [Department of Physics, Yale University, New Haven CT 06520 (United States)

    2011-05-01

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
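
    A minimal sketch of the interpolation idea behind InterpMC (not the actual CosmoMC patch): fit a low-order polynomial to log-likelihood values gathered early in a chain, then use the cheap surrogate in place of the expensive evaluation. The toy likelihood and polynomial degree are assumptions for illustration only.

      # Surrogate log-likelihood via polynomial interpolation (illustrative sketch).
      import numpy as np

      def expensive_loglike(theta):
          # stand-in for a costly likelihood; assumed smooth in theta
          return -0.5 * ((theta - 1.3) / 0.4) ** 2

      train_theta = np.linspace(0.0, 3.0, 25)                 # early chain samples
      train_logl = np.array([expensive_loglike(t) for t in train_theta])

      coeffs = np.polyfit(train_theta, train_logl, deg=4)     # fit the surrogate
      surrogate = np.poly1d(coeffs)

      theta_new = 1.7
      print(surrogate(theta_new), expensive_loglike(theta_new))  # should be close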

  2. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    International Nuclear Information System (INIS)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine

    2011-01-01

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user

  3. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)

  4. Evaluation of Criticality of Self-Heating of Polymer Composites by Estimating the Heat Dissipation Rate

    Science.gov (United States)

    Katunin, A.

    2018-03-01

    The critical self-heating temperature at which the structural degradation of polymer composites under cyclic loading begins is evaluated by analyzing the heat dissipation rate. The method proposed is an effective tool for evaluating the degradation degree of such structures.

  5. In vivo estimation of body composition in cattle with tritium and urea

    African Journals Online (AJOL)

    evaluate the tritium and urea dilution techniques for accurate prediction of body composition. Approximately 1,1 - 1,4 g urea/W0,75 ... live animal and the carcass, and to evaluate their accuracy in comparison to those derived from ... to be infused, was carefully weighed into 20 and 50 ml sterilized disposable syringes which.

  6. Multi-sensor data fusion for estimating forest species composition and abundance in northern Minnesota

    Science.gov (United States)

    Peter P. Wolter; Phillip A. Townsend

    2011-01-01

    The magnitude, duration, and frequency of forest disturbance caused by the spruce budworm and forest tent caterpillar in northern Minnesota and neighboring Ontario, Canada have increased over the last century due to a shift in forest species composition linked to historical fire suppression, forest management, and pesticide application that has fostered increased...

  7. A Comparison of Methods for the Estimation of Body Composition in Highly Trained Wheelchair Games Players

    NARCIS (Netherlands)

    Goosey-Tolfrey, V.; Keil, M.; Brooke-Wavell, K.; de Groot, S.

    The purpose of this study was to assess the agreement in body composition measurements of wheelchair athletes using skinfolds, bio-impedance analysis (BIA) and air displacement plethysmography (ADP) relative to dual-energy X-ray absorptiometry (DXA). A secondary objective was to develop new skinfold

  8. Estimation of pyrethroid pesticide intake using regression modeling of food groups based on composite dietary samples

    Data.gov (United States)

    U.S. Environmental Protection Agency — Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression...

  9. Age-specific distributions from coarse-count data: An epidemiological and demographic application of a penalized composite link model

    DEFF Research Database (Denmark)

    Rizzi, Silvia

    as realizations of a Poisson process. The latent unobserved distribution with higher resolution is assumed to be smooth and can be estimated from the composite data via maximum likelihood. In the second study the penalized composite link model for ungrouping is compared to alternative well known ungrouping...

  10. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  11. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive......' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3

  12. The application of digital imaging techniques in the in vivo estimation of the body composition of pigs: a review

    International Nuclear Information System (INIS)

    Szabo, C.; Babinszky, L.; Verstegen, M.W.A.; Vangen, O.; Jansman, A.J.M.; Kanis, E.

    1999-01-01

    Calorimetry and comparative slaughter measurement are techniques widely used to measure the chemical body composition of pigs, while dissection is the standard method to determine the physical (tissue) composition of the body. The disadvantage of calorimetry is the small number of observations possible, while the disadvantage of comparative slaughter and dissection is that examinations can be made only once on the same pig. Non-invasive imaging techniques, such as real-time ultrasound, computer tomography (CT) and magnetic resonance imaging (MRI), could constitute a valuable tool for estimating body composition in series on living animals. The aim of this paper was to compare these methods. Ultrasound equipment entails a relatively low cost and great mobility, but provides less information and lower accuracy about whole body composition compared to CT and MRI. For this reason the ultrasound technique will most probably remain a field method in the future. Computer tomography and MRI with standardized and verified application methods could provide a tool to substitute whole body analysis and physical dissection. The disadvantages of CT and MRI techniques are their expense and lack of portability, and for these reasons it is most likely that in future such techniques will be applied only in research and breeding programs

  13. A Combined Self-Consistent Method to Estimate the Effective Properties of Polypropylene/Calcium Carbonate Composites

    Directory of Open Access Journals (Sweden)

    Zhongqiang Xiong

    2018-01-01

    Full Text Available In this work, to avoid the difficulty of applying such methods to the irregular filler shapes encountered in experiments, the self-consistent and differential self-consistent methods were combined to obtain a decoupled equation. The combined method yields a tensor γ, independent of filler content, that provides an important connection between high and low filler contents. On the one hand, this constant parameter can be calculated by Eshelby's inclusion theory or the Mori–Tanaka method to predict effective properties of composites consistent with their hypotheses. On the other hand, the parameter can be calculated from a few experimental results to estimate the effective properties of prepared composites at other filler contents. In addition, an evaluation index σf′ of the interaction strength between matrix and fillers is proposed based on experiments. In the experiments, a hyper-dispersant was synthesized to prepare polypropylene/calcium carbonate (PP/CaCO3) composites with good dispersion at filler contents up to 70 wt %, with a dispersant dosage of only 5 wt % of the CaCO3 content. Based on several verifications, it is hoped that the combined self-consistent method is valid for other two-phase composites in experiments, with the same application procedure as in this work.

  14. Estimation of regional building-related C&D debris generation and composition: case study for Florida, US.

    Science.gov (United States)

    Cochran, Kimberly; Townsend, Timothy; Reinhart, Debra; Heck, Howell

    2007-01-01

    A methodology for accounting for the generation and composition of building-related construction and demolition (C&D) debris at a regional level was explored. Six specific categories of debris were examined: residential construction, nonresidential construction, residential demolition, nonresidential demolition, residential renovation, and nonresidential renovation. Debris produced from each activity was calculated as the product of the total area of activity and the waste generated per unit area of activity. Similarly, composition was estimated as the product of the total area of activity and the amount of each waste component generated per unit area. The area of activity was calculated using statistical data, and individual site studies were used to assess the average amount of waste generated per unit area. The application of the methodology was illustrated using Florida, US: approximately 3,750,000 metric tons of building-related C&D debris were estimated to have been generated in Florida in 2000. Of that amount, concrete represented 56%, wood 13%, drywall 11%, miscellaneous debris 8%, asphalt roofing materials 7%, metal 3%, cardboard 1%, and plastic 1%. This model differs from others because it accommodates regional construction styles and available data. The resulting generation amount per capita is less than the US estimate, attributable to the high construction and low demolition activity seen in Florida.
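
    The accounting scheme reduces to a simple product-and-sum, which the toy calculation below illustrates; all areas and per-area waste factors are invented for the example and are not the Florida figures.

      # Debris generation as (activity area) x (waste per unit area), summed over
      # activity categories. Numbers are placeholders, not the study's data.
      activity_area_m2 = {
          "residential_construction": 2.0e6,
          "nonresidential_construction": 1.5e6,
          "residential_demolition": 0.4e6,
      }
      waste_kg_per_m2 = {
          "residential_construction": 20.0,
          "nonresidential_construction": 25.0,
          "residential_demolition": 500.0,
      }

      total_tons = sum(activity_area_m2[a] * waste_kg_per_m2[a]
                       for a in activity_area_m2) / 1000.0
      print(f"estimated generation: {total_tons:,.0f} metric tons")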

  15. The use of thermovision technique to estimate the properties of highly filled polyolefins composites with calcium carbonate

    Energy Technology Data Exchange (ETDEWEB)

    Jakubowska, Paulina; Klozinski, Arkadiusz [Poznan University of Technology, Institute of Technology and Chemical Engineering, Polymer Division Pl. M. Sklodowskiej-Curie 2, 60-965 Poznan, Poland, Paulina.Jakubowska@put.poznan.pl (Poland)

    2015-05-22

    The aim of this work was to determine the possibility of using the thermovision technique for estimating the thermal properties of ternary highly filled composites (PE-MD/iPP/CaCO3) and polymer blends (PE-MD/iPP) during mechanical measurements. The ternary, polyolefin-based composites contained the following amounts of calcium carbonate: 48, 56, and 64 wt %. All materials were subjected to tensile cyclic loads (×1, ×5, ×10, ×20, ×50, ×100, ×500, ×1000). Simultaneously, a fully radiometric recording was made using a TESTO infrared camera. After the fatigue process, all samples were subjected to a static tensile test and the maximum temperature at break was also recorded. The temperature values were analyzed as a function of the cyclic loads and the filler content. The changes in the Young's modulus values were also investigated.

  16. Thermal degradation kinetics and lifetime estimation for polycarbonate/polymethylphenylsilsesquioxane composite

    Institute of Scientific and Technical Information of China (English)

    Jiangbo WANG; Zhong XIN

    2009-01-01

    The thermal degradation behaviors of polycarbonate/polymethylphenylsilsesquioxane (FRPC) composites were investigated by thermogravimetric analysis (TGA) under isothermal conditions in a nitrogen atmosphere. An isothermal kinetics equation was used to describe the thermal degradation process. The results showed that the activation energy (E) of isothermal degradation was a rapidly increasing function of conversion (α) for polycarbonate (PC) but a strongly decreasing function of conversion for FRPC. Under isothermal conditions, the addition of polymethylphenylsilsesquioxane (PMPSQ) retarded the thermal degradation and enhanced the thermal stability of PC during the early and middle stages of thermal degradation. The results also indicated a possible difference in the nucleation, nucleus growth, and gas diffusion mechanisms of thermal degradation between PC and FRPC. Meanwhile, the addition of PMPSQ influenced the lifetime of PC, but the composite still met the demands of manufacturing and application.

  17. Estimating the composition of gas hydrate using 3D seismic data from Penghu Canyon, offshore Taiwan

    Directory of Open Access Journals (Sweden)

    Sourav Kumar Sahoo

    2018-01-01

    Full Text Available Direct measurement of gas composition by drilling a few hundred meters below the seafloor can be costly, and a remote sensing method may be preferable. Hydrate occurrence is shown seismically by a bottom-simulating reflection (BSR), which is generally indicative of the base of the hydrate stability zone. With a good temperature profile from the seafloor to the depth of the BSR, a near-correct hydrate phase diagram can be calculated, which can be directly related to the hydrate composition. However, in areas with strong seafloor topographic anomalies, the temperature profile is usually poorly defined, with scattered data. Here we used a remote method to reduce such scatter. We derived the gas composition of hydrate in the stability zone and reduced the scatter by considering depth-dependent geothermal conductivity and topographic corrections. Using 3D seismic data from the Penghu canyon, offshore SW Taiwan, we corrected for topographic focusing through 3D numerical thermal modeling. A temperature profile was fitted with a depth-dependent geothermal gradient, taking into account the increase of thermal conductivity with depth. Using a pore-water salinity of 2%, we constructed a gas hydrate phase model composed of 99% methane and 1% ethane to derive a temperature-depth profile consistent with the seafloor temperature from in-situ measurements and with geochemical analyses of the pore fluids. The high methane content suggests a predominantly biogenic source. The derived regional geothermal gradient is 40°C km-1. This method can be applied to other comparable marine environments to better constrain the composition of gas hydrate from the BSR in seismic data, in the absence of direct sampling.

  18. Development of bioelectrical impedance analysis-based equations for estimation of body composition in postpartum rural Bangladeshi women.

    Science.gov (United States)

    Shaikh, Saijuddin; Schulze, Kerry J; Kurpad, Anura; Ali, Hasmot; Shamim, Abu Ahmed; Mehra, Sucheta; Wu, Lee S-F; Rashid, Mahbubar; Labrique, Alain B; Christian, Parul; West, Keith P

    2013-02-28

    Equations for predicting body composition from bioelectrical impedance analysis (BIA) parameters are age-, sex- and population-specific. Currently there are no equations applicable to women of reproductive age in rural South Asia. Hence, we developed equations for estimating total body water (TBW), fat-free mass (FFM) and fat mass in rural Bangladeshi women using BIA, with ²H₂O dilution as the criterion method. Women of reproductive age, participating in a community-based placebo-controlled trial of vitamin A or β-carotene supplementation, were enrolled at 19·7 (SD 9·3) weeks postpartum in a study to measure body composition by ²H₂O dilution and impedance at 50 kHz using multi-frequency BIA (n 147), and resistance at 50 kHz using single-frequency BIA (n 82). TBW (kg) by ²H₂O dilution was used to derive prediction equations for body composition from BIA measures. The prediction equation was applied to resistance measures obtained at 13 weeks postpartum in a larger population of postpartum women (n 1020). TBW, FFM and fat were 22·6 (SD 2·7), 30·9 (SD 3·7) and 10·2 (SD 3·8) kg by ²H₂O dilution. Height²/impedance or height²/resistance and weight provided the best estimate of TBW, with adjusted R² 0·78 and 0·76, and with paired absolute differences in TBW of 0·02 (SD 1·33) and 0·00 (SD 1·28) kg, respectively, between BIA and ²H₂O. In the larger sample, values for TBW, FFM and fat were 23·8, 32·5 and 10·3 kg, respectively. BIA can be an important tool for assessing body composition in women of reproductive age in rural South Asia where poor maternal nutrition is common.
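
    The derivation of such prediction equations amounts to an ordinary regression of criterion TBW on the impedance index (height²/resistance) and body weight. The sketch below shows the idea on synthetic data; the coefficients it produces are not the published Bangladeshi equations.

      # Deriving a BIA prediction equation by least squares (synthetic data only).
      import numpy as np

      rng = np.random.default_rng(0)
      n = 80
      height_cm = rng.normal(150, 5, n)
      resistance = rng.normal(600, 50, n)
      weight_kg = rng.normal(45, 6, n)
      impedance_index = height_cm ** 2 / resistance
      # "criterion" TBW, standing in for the 2H2O dilution measurement
      tbw_true = 0.35 * impedance_index + 0.12 * weight_kg + 3.0 + rng.normal(0, 1.0, n)

      X = np.column_stack([impedance_index, weight_kg, np.ones(n)])
      beta, *_ = np.linalg.lstsq(X, tbw_true, rcond=None)
      print("TBW =", beta[0], "* Ht^2/R +", beta[1], "* weight +", beta[2])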

  19. Bayesian estimation of predator diet composition from fatty acids and stable isotopes

    Directory of Open Access Journals (Sweden)

    Philipp Neubauer

    2015-04-01

    Full Text Available Quantitative analysis of stable isotopes (SI and, more recently, fatty acid profiles (FAP are useful and complementary tools for estimating the relative contribution of different prey items in the diet of a predator. The combination of these two approaches, however, has thus far been limited and qualitative. We propose a mixing model for FAP that follows the Bayesian machinery employed in state-of-the-art mixing models for SI. This framework provides both point estimates and probability distributions for individual and population level diet proportions. Where fat content and conversion coefficients are available, they can be used to improve diet estimates. This model can be explicitly integrated with analogous models for SI to increase resolution and clarify predator–prey relationships. We apply our model to simulated data and an experimental dataset that allows us to illustrate modeling strategies and demonstrate model performance. Our methods are provided as an open source software package for the statistical computing environment R.

  20. Unbiased estimation of human body composition by the Cavalieri method using magnetic resonance imaging

    International Nuclear Information System (INIS)

    Roberts, N.; Reid, N.M.K.; Brodie, D.A.; Bourne, M.; Edwards, R.H.T.; Cruz-Orive, L.M.

    1993-01-01

    The classical methods for estimating the volume of human body compartments in vivo (e.g. skin-fold thickness for fat, radioisotope counting for different compartments, etc.) are generally indirect and rely on essentially empirical relationships; hence they are biased to unknown degrees. The advent of modern non-invasive scanning techniques, such as X-ray computed tomography (CT) and magnetic resonance imaging (MRI), is now widening the scope of volume quantification, especially in combination with stereological methods. Apart from its superior soft tissue contrast, MRI enjoys the distinct advantage of not using ionizing radiation. With proper landmarking and control of the scanner couch, an adult male volunteer was scanned exhaustively into parallel systematic MR "sections". Four compartments were defined, namely bone, muscle, organs and fat (which included the skin), and their corresponding volumes were easily and efficiently estimated by the Cavalieri method: the total section area of a compartment times the section interval estimates the volume of the compartment without bias. Formulae and nomograms are given to predict the errors and to optimize the design. To estimate an individual's muscle volume with a 5% coefficient of error, 10 sections and less than 10 min of point counting (to estimate the relevant section areas) are required. Bone and fat require about twice as much work. To estimate the mean muscle volume of a population with the same error contribution, from a random sample of six subjects, the workload per subject can be divided by √6, namely 4 min per subject. For a given number of sections, planimetry would be as accurate but far more time consuming than point counting. (author)
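
    The Cavalieri estimator itself is a one-line calculation: the summed cross-sectional areas of a compartment multiplied by the spacing between systematic parallel sections. The example below uses invented section areas, not the volunteer's data.

      # Cavalieri volume estimate: section interval times total section area.
      section_interval_cm = 4.0                      # spacing between MR sections
      muscle_areas_cm2 = [110.0, 135.0, 150.0, 148.0, 130.0, 95.0, 60.0]

      volume_cm3 = section_interval_cm * sum(muscle_areas_cm2)
      print(f"estimated muscle volume: {volume_cm3:.0f} cm^3")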

  1. Validity of Standing Posture Eight-electrode Bioelectrical Impedance to Estimate Body Composition in Taiwanese Elderly

    Directory of Open Access Journals (Sweden)

    Ling-Chun Lee

    2014-09-01

    Conclusion: The results of this study showed that the impedance index and LST in the whole body, upper limbs, and lower limbs derived from DXA findings were highly correlated. The LST and BF% estimated by BIA8 in whole body and various body segments were highly correlated with the corresponding DXA results; however, BC-418 overestimates the participants' appendicular LST and underestimates whole body BF%. Therefore, caution is needed when interpreting the results of appendicular LST and whole body BF% estimated for elderly adults.

  2. Estimation of percolating water dynamics through the vadose zone of the Postojna cave on the basis of isotope composition

    Directory of Open Access Journals (Sweden)

    Janja Kogovšek

    2007-12-01

    Full Text Available Within the scope of monitoring water percolation through the 100-m thick vadose zone in the area of Postojnska jama, continuous measurements of precipitation were carried out on the surface, and continuous measurements of water flow and physical and chemical parameters of selected water trickles were performed under the surface. Occasional samples of percolating waters were taken for the analysis of the water oxygen isotope composition. An exponential model of groundwater flow was elaborated, by means of which the retention time of water in individual trickles was estimated. Modelled retention times of groundwater range from 2.5 months to over one year.
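
    For orientation, a commonly used lumped-parameter form of the exponential model infers the mean transit time from the damping of the seasonal isotope amplitude between precipitation and the trickle; the sketch below assumes that formulation and illustrative amplitudes, not the Postojna measurements.

      # Exponential (well-mixed reservoir) model: seasonal amplitude damping
      # amp_out / amp_in = 1 / sqrt(1 + (omega * tau)^2). Illustrative numbers only.
      import math

      amp_in = 2.5                         # seasonal d18O amplitude of precipitation (permil)
      amp_out = 0.4                        # seasonal d18O amplitude of the trickle (permil)
      omega = 2.0 * math.pi / 365.0        # annual angular frequency (1/day)

      tau_days = math.sqrt((amp_in / amp_out) ** 2 - 1.0) / omega
      print(f"mean transit time: {tau_days:.0f} days ({tau_days / 30.4:.1f} months)")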

  3. Estimation of body composition depends on applied device in patients undergoing major abdominal surgery

    NARCIS (Netherlands)

    Haverkort, Elizabeth B.; Binnekade, Jan M.; de van der Schueren, Marian A. E.; Gouma, Dirk J.; de Haan, Rob J.

    2015-01-01

    Bioelectrical impedance analysis (BIA) is a method used to estimate body compartments such as fat-free mass (FFM) and fat mass (FM). Two BIA devices, a single-frequency BIA (SF-BIA) device and a bioimpedance spectroscopy (BIS) approach, were compared to evaluate their reliability and to study

  4. Estimation of Body Composition Depends on Applied Device in Patients Undergoing Major Abdominal Surgery

    NARCIS (Netherlands)

    Haverkort, E.B.; Binnekade, J.M.; van Bokhorst-de van der Schueren, M.A.E.; Gouma, D.J.; de Haan, R.J.

    2015-01-01

    Background: Bioelectrical impedance analysis (BIA) is a method used to estimate body compartments such as fat-free mass (FFM) and fat mass (FM). Two BIA devices, a single-frequency BIA (SF-BIA) device and a bioimpedance spectroscopy (BIS) approach, were compared to evaluate their reliability and to

  5. Study Of Isotopic Technical Application To Estimate Origin Of Nitrogen Composition Of Groundwater In Hanoi Area

    International Nuclear Information System (INIS)

    Trinh Van Giap; Dinh Bich Lieu; Dang Anh Minh; Vo Thi Anh; Bui Dac Dung; Nguyen Thi Hong Thinh; Nguyen Manh Hung; Nguyen Van Hoan; Nguyen Van Hai

    2007-01-01

    Groundwater in Hanoi area as well as some other areas in Bac-Bo Delta is being contaminated by heavy metals and nitrogen compounds, especially arsenic and ammonium. The origin of nitrogen compounds in groundwater in Hanoi area is estimated in order to exploit and manage sustainable groundwater served for production and live. (author)

  6. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    Science.gov (United States)

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
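
    The non-compartmental summary measures being compared (AUC, Cmax and time above a threshold) can be computed directly from a concentration-time profile, as in the sketch below; the profile and threshold are invented for illustration.

      # Non-compartmental exposure summaries from a concentration-time profile.
      import numpy as np

      t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])     # time, h
      c = np.array([0.0, 4.2, 6.1, 5.0, 3.1, 1.2, 0.4])      # concentration, mg/L
      threshold = 2.0                                          # mg/L

      dt = np.diff(t)
      auc = np.sum(0.5 * (c[1:] + c[:-1]) * dt)               # trapezoidal AUC, mg*h/L
      cmax = c.max()
      # crude time-above-threshold: count intervals whose endpoints both exceed it
      tat = np.sum(dt[(c[1:] >= threshold) & (c[:-1] >= threshold)])
      print(auc, cmax, tat)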

  7. Genetic Parameters for carcass composition and pork quality estimated in a commercial production chain

    NARCIS (Netherlands)

    Wijk, van H.J.; Arts, D.J.G.; Matthews, J.O.; Webster, M.; Ducro, B.J.; Knol, E.F.

    2005-01-01

    Breeding goals in pigs are subject to change and are directed much more toward retail carcass yield and meat quality because of the high economic value of these traits. The objective of this study was to estimate genetic parameters of growth, carcass, and meat quality traits. Carcass components

  8. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  9. Review of Elaboration Likelihood Model of persuasion

    OpenAIRE

    藤原, 武弘; 神山, 貴弥

    1989-01-01

    This article mainly introduces the Elaboration Likelihood Model (ELM), proposed by Petty & Cacioppo, which is a general attitude change theory. The ELM postulates two routes to persuasion: the central and the peripheral route. Attitude change via the central route is viewed as resulting from a diligent consideration of the issue-relevant information presented. On the other hand, attitude change via the peripheral route is viewed as resulting from peripheral cues in the persuasion context. Secondly we compare these tw...

  10. Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition

    Science.gov (United States)

    Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.

    2016-12-01

    Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
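
    As a simplified stand-in for the MESMA workflow (which varies the endmember set per pixel), the sketch below solves a single-endmember-set linear unmixing for GV, NPV and substrate fractions with non-negativity and an approximate sum-to-one constraint; the spectra are synthetic, not AVIRIS endmembers.

      # Simple spectral mixture analysis with non-negative least squares.
      import numpy as np
      from scipy.optimize import nnls

      gv  = np.array([0.03, 0.05, 0.04, 0.45, 0.50, 0.30])   # green vegetation
      npv = np.array([0.15, 0.20, 0.25, 0.30, 0.35, 0.40])   # non-photosynthetic veg.
      sub = np.array([0.10, 0.14, 0.18, 0.22, 0.26, 0.30])   # substrate
      E = np.column_stack([gv, npv, sub])                    # endmember matrix

      true_frac = np.array([0.6, 0.3, 0.1])
      pixel = E @ true_frac + 0.005 * np.random.default_rng(1).normal(size=6)

      w = 10.0                                               # weight on sum-to-one row
      A = np.vstack([E, w * np.ones((1, 3))])
      b = np.concatenate([pixel, [w]])
      fractions, _ = nnls(A, b)                              # non-negative least squares
      print(fractions)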

  11. Unbinned likelihood analysis of EGRET observations

    International Nuclear Information System (INIS)

    Digel, Seth W.

    2000-01-01

    We present a newly-developed likelihood analysis method for EGRET data that defines the likelihood function without binning the photon data or averaging the instrumental response functions. The standard likelihood analysis applied to EGRET data requires the photons to be binned spatially and in energy, and the point-spread functions to be averaged over energy and inclination angle. The full-width half maximum of the point-spread function increases by about 40% from on-axis to 30° inclination, and depending on the binning in energy can vary by more than that in a single energy bin. The new unbinned method avoids the loss of information that binning and averaging cause and can properly analyze regions where EGRET viewing periods overlap and photons with different inclination angles would otherwise be combined in the same bin. In the poster, we describe the unbinned analysis method and compare its sensitivity with binned analysis for detecting point sources in EGRET data
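
    The essence of an unbinned (extended) likelihood is that every photon enters individually through the model rate density. The sketch below shows this for a toy one-dimensional source-plus-flat-background model, which is an assumption for illustration and not the EGRET response model.

      # Unbinned extended log-likelihood: -(expected counts) + sum of log rate at
      # each photon position. Toy 1D model, illustrative only.
      import numpy as np

      def unbinned_loglike(photons, s, b, src_pos=0.0, psf_sigma=0.3, width=10.0):
          # s expected source counts (Gaussian PSF), b expected background counts
          rate = (s * np.exp(-0.5 * ((photons - src_pos) / psf_sigma) ** 2)
                  / (psf_sigma * np.sqrt(2.0 * np.pi))
                  + b / width)
          return -(s + b) + np.sum(np.log(rate))

      rng = np.random.default_rng(2)
      data = np.concatenate([rng.normal(0.0, 0.3, 40),        # source photons
                             rng.uniform(-5.0, 5.0, 200)])    # background photons
      print(unbinned_loglike(data, s=40.0, b=200.0))
      print(unbinned_loglike(data, s=0.0, b=240.0))   # no-source hypothesis scores lower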

  12. Estimation of percentage body fat by dual-energy x-ray absorptiometry: evaluation by in vivo human elemental composition

    International Nuclear Information System (INIS)

    Wang Zimian; Pierson, Richard N; Heymsfield, Steven B; Chen Zhao; Zhu Shankuan

    2010-01-01

    Dual-energy x-ray absorptiometry (DXA) is widely applied for estimating body fat. The percentage of body mass as fat (%fat) is predicted from a DXA-estimated R_ST value, defined as the ratio of soft tissue attenuation at two photon energies (e.g., 40 keV and 70 keV). Theoretically, the R_ST concept depends on the mass of each major element in the human body. The DXA R_ST values, however, have never been fully evaluated against measured human elemental composition. The present investigation evaluated the DXA R_ST value against the total body mass of 11 major elements and the DXA %fat against the five-component (5C) model, respectively. Six elements (i.e. C, N, Na, P, Cl and Ca) were measured by in vivo neutron activation analysis, and potassium (K) by whole-body 40K counting in 27 healthy adults. Models were developed for predicting the total body mass of four additional elements (i.e. H, O, Mg and S). The elemental content of soft tissue, after correction for bone mineral elements, was used to predict the R_ST values. The DXA R_ST values were strongly associated with the R_ST values predicted from elemental content (r = 0.976), although the R_ST predicted from elemental content systematically exceeded the DXA-measured R_ST (mean ± SD, 1.389 ± 0.024 versus 1.341 ± 0.024). DXA-estimated %fat was strongly associated with 5C %fat (24.4 ± 12.0% versus 24.9 ± 11.1%, r = 0.983). R_ST was thus evaluated against in vivo elemental composition, and the present study supports the underlying physical concept and accuracy of the DXA method for estimating %fat.

  13. Estimation of Curvature Changes for Steel-Concrete Composite Bridge Using Fiber Bragg Grating Sensors

    OpenAIRE

    Kang, Donghoon; Chung, Wonseok

    2013-01-01

    This study is focused on the verification of the key idea of a newly developed steel-concrete composite bridge. The key idea of the proposed bridge is to reduce the design moment by applying vertical prestressing force to steel girders, so that a moment distribution of a continuous span bridge is formed in a simple span bridge. For the verification of the key technology, curvature changes of the bridge should be monitored sequentially at every construction stage. A pair of multiplexed FBG sen...

  14. Spectral estimation of soil properties in siberian tundra soils and relations with plant species composition

    DEFF Research Database (Denmark)

    Bartholomeus, Harm; Schaepman-Strub, Gabriela; Blok, Daan

    2012-01-01

    yields a good prediction model for K and a moderate model for pH. Using these models, soil properties are determined for a larger number of samples, and soil properties are related to plant species composition. This analysis shows that variation of soil properties is large within vegetation classes......Predicted global warming will be most pronounced in the Arctic and will severely affect permafrost environments. Due to its large spatial extent and large stocks of soil organic carbon, changes to organic matter decomposition rates and associated carbon fluxes in Arctic permafrost soils...

  15. Composition

    DEFF Research Database (Denmark)

    Bergstrøm-Nielsen, Carl

    2014-01-01

    Cue Rondo is an open composition to be realised by improvising musicians. See more about my composition practise in the entry "Composition - General Introduction". Caution: streaming the sound/video files will in some cases only provide a few minutes' sample, or the visuals will not appear at all....... Please DOWNLOAD them to see/hear them in full length! This work is licensed under a Creative Commons "by-nc" License. You may for non-commercial purposes use and distribute it, performance instructions as well as specially designated recordings, as long as the author is mentioned. Please see http...

  16. Parameter Estimation

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Heitzig, Martina; Cameron, Ian

    2011-01-01

    of optimisation techniques coupled with dynamic solution of the underlying model. Linear and nonlinear approaches to parameter estimation are investigated. There is also the application of maximum likelihood principles in the estimation of parameters, as well as the use of orthogonal collocation to generate a set......In this chapter the importance of parameter estimation in model development is illustrated through various applications related to reaction systems. In particular, rate constants in a reaction system are obtained through parameter estimation methods. These approaches often require the application...... of algebraic equations as the basis for parameter estimation. These approaches are illustrated using estimations of kinetic constants from reaction system models....

  17. Making the most of the imaging we have: using head MRI to estimate body composition

    International Nuclear Information System (INIS)

    Lack, C.M.; Lesser, G.J.; Umesi, U.N.; Bowns, J.; Chen, M.Y.; Case, D.; Hightower, R.C.; Johnson, A.J.

    2016-01-01

    Aim: To investigate the use of clinical head magnetic resonance imaging (MRI) in determining body composition and to evaluate how well it correlates with established measures based on abdominal computed tomography (CT). Materials and methods: Ninety-nine consecutive patients were identified who had undergone both brain MRI and abdominal CT within a 2-week span. Volumes of fat and muscle in the extracranial head were measured utilising several techniques by both abdominal CT and head MRI. Results: MRI-based total fat volumes in the head correlated with CT-based measurements of fat in the abdomen using both single-section (r=0.64, p<0.01) and multisection (r=0.60, p<0.01) techniques. No significant correlation was found between muscle volumes in the abdomen and head. Conclusion: Based on the present results, head MRI-based measures may provide a useful surrogate for CT measurements of abdominal fat, particularly in patients with neurological cancers, as head MRI (and not abdominal CT) is routinely and repeatedly obtained for the purpose of clinical care for these patients. - Highlights: • We compared body composition using brain MRI with previously proven abdominal CT. • Fat and muscle volumes of the extracranial compartment can be measured by MRI. • Muscle volume in the face does not correlate with abdominal muscle volume. • Fat volume in the face can be used as a surrogate for abdominal fat volume.

  18. Estimating the Robustness of Composite CBA and MCDA Assessments by Variation of Criteria Importance Order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    2011-01-01

    described is based on the fact that when using MCA as a decision-support tool, questions often arise about the weighting (or prioritising) of the included criteria. This part of the MCA is seen as the most subjective part and could give reasons for discussion among the decision makers or stakeholders......This paper discusses the concept of using rank variation concerning the stakeholder prioritising of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis....... Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives (Hobbs and Meier 2000). Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness

  19. Estimating the robustness of composite CBA & MCA assessments by variation of criteria importance order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    is based on the fact that when using MCA as a decision-support tool, questions often arise about the weighting (or prioritising) of the included criteria. This part of the MCA is seen as the most subjective part and could give reasons for discussion among the decision makers or stakeholders. Furthermore......This paper discusses the concept of using rank variation concerning the stake-holder prioritising of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis described......, the relative weights can make a large difference in the resulting assessment of alternatives [1]. Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness of weights used in additive utility...

  20. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    Science.gov (United States)

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts, and some of them are "true zeros" indicating that the drug-adverse event pairs cannot occur; these zero counts are distinguished from the other zero counts, which are modeled zero counts and simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
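
    A stripped-down sketch of the ingredients (not the authors' signal-detection statistic): fit a zero-inflated Poisson to a single vector of counts by EM and compare it with a plain Poisson fit through a likelihood ratio. The simulated counts and the single-vector setting are assumptions made for the example.

      # Zero-inflated Poisson fit by EM, plus a likelihood ratio against a plain
      # Poisson fit. Illustrative only.
      import numpy as np
      from scipy.stats import poisson

      def zip_loglike(y, pi, lam):
          zero = y == 0
          ll = np.where(zero,
                        np.log(pi + (1 - pi) * np.exp(-lam)),
                        np.log(1 - pi) + poisson.logpmf(y, lam))
          return ll.sum()

      def fit_zip_em(y, iters=200):
          pi, lam = 0.5, max(y.mean(), 0.1)
          for _ in range(iters):
              # E-step: probability that each observed zero is a structural zero
              z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
              # M-step
              pi = z.mean()
              lam = ((1 - z) * y).sum() / (1 - z).sum()
          return pi, lam

      rng = np.random.default_rng(3)
      y = rng.poisson(2.0, 300)
      y[rng.random(300) < 0.4] = 0                     # inject excess (structural) zeros

      pi_hat, lam_hat = fit_zip_em(y)
      lrt = 2 * (zip_loglike(y, pi_hat, lam_hat) - poisson.logpmf(y, y.mean()).sum())
      print(pi_hat, lam_hat, lrt)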

  1. Methane oxidation by termite mounds estimated by the carbon isotopic composition of methane

    Science.gov (United States)

    Sugimoto, Atsuko; Inoue, Tetsushi; Kirtibutr, Nit; Abe, Takuya

    1998-12-01

    Emission rates and carbon isotope ratios of CH4 emitted by termite workers and of CH4 emitted from their mounds were observed in a dry evergreen forest in Thailand to estimate the proportion of CH4 oxidized during emission through the mound. The δ13C of CH4 emitted from a termite mound (-70.9 to -82.4‰) was higher than that of CH4 emitted by workers in the mound (-85.4 to -97.1‰). Using a fractionation factor (α = 0.987) for the oxidation of CH4, obtained in an incubation experiment, an emission factor defined as (CH4 emitted from a termite mound/CH4 produced by termites) was calculated. The emission factor obtained for each termite mound was nearly zero for Macrotermes (fungus-growing termites), whose nests have a thick soil wall, and for subterranean termites, and 0.17 to 0.47 for Termitinae (small-mound-making termites). Global CH4 emission by termites was estimated on the basis of the CH4 emission rates of workers and termite biomass, together with the emission factors. The calculated result was 1.5 to 7.4 Tg/y (0.3 to 1.3% of the total source), which is considerably smaller than the estimate by the IPCC [1994].
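
    A back-of-the-envelope version of the isotope mass balance: treating oxidation in the mound wall as a closed-system Rayleigh process (an assumption; the study's own model may differ), the fraction of CH4 surviving oxidation follows from the 13C enrichment of the emitted gas and the fractionation factor.

      # Rayleigh-style estimate of the fraction of CH4 surviving oxidation.
      # Input delta values are illustrative, not the observed data.
      import math

      alpha = 0.987                     # fractionation factor for CH4 oxidation
      eps = (alpha - 1.0) * 1000.0      # enrichment factor in permil (about -13)

      delta_produced = -90.0            # d13C of CH4 produced by workers (permil)
      delta_emitted = -78.0             # d13C of CH4 leaving the mound (permil)

      fraction_remaining = math.exp((delta_emitted - delta_produced) / eps)
      print(f"approximate emission factor: {fraction_remaining:.2f}")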

  2. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.

  3. Estimate of compressive strength of an unidirectional composite lamina using cross-ply and angle-ply laminates

    Directory of Open Access Journals (Sweden)

    M. Scafè

    2014-07-01

    Full Text Available In this work the compressive strength of a unidirectional lamina of a carbon/epoxy composite material has been estimated using cross-ply and angle-ply laminates. Over the years various methods have been developed to deduce compressive properties of composite materials reinforced with long fibres. Each of these methods is characterized by a specific way of applying load to the specimen. The method chosen to perform the compression tests is the Wyoming Combined Loading Compression (CLC) Test Method, described in ASTM D 6641 / D 6641M-09. This method presents many advantages, especially the way load is applied to the specimen (end loading combined with shear loading), the reproducibility of measurements and the fairly simple experimental equipment. Six different laminates were tested in compression. They were made from the same unidirectional prepreg, but with different stacking sequences: two cross-ply [0/90]ns, two angle-ply [0/90/±45]ns and two unidirectional laminates [0]ns and [90]ns. The compressive strength of the unidirectional laminate at 0° was estimated by an indirect analytical method, developed from classical lamination theory, which uses a multiplicative parameter known as the Back-out Factor (BF). The BF is determined by using the experimental values obtained from the compression tests.

  4. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
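
    In the Gaussian/vector-autoregressive case the result can be checked in a few lines: the transfer entropy from Y to X is half the log ratio of residual variances of the restricted (own past only) and full (own past plus Y's past) regressions, and 2N times that estimate is the log-likelihood ratio statistic. The simulated coupling and one-lag model below are assumptions for illustration.

      # Transfer entropy as a log-likelihood ratio for Gaussian AR processes.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 5000
      y = rng.normal(size=n)
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

      target = x[1:]                                           # x[t]
      own_past = np.column_stack([x[:-1], np.ones(n - 1)])     # restricted model
      full_past = np.column_stack([x[:-1], y[:-1], np.ones(n - 1)])  # full model

      def resid_var(design, target):
          beta, *_ = np.linalg.lstsq(design, target, rcond=None)
          r = target - design @ beta
          return np.mean(r ** 2)

      te_hat = 0.5 * np.log(resid_var(own_past, target) / resid_var(full_past, target))
      lrt_stat = 2 * (n - 1) * te_hat    # asymptotically chi-squared under no transfer
      print(te_hat, lrt_stat)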

  5. Estimation of Curvature Changes for Steel-Concrete Composite Bridge Using Fiber Bragg Grating Sensors

    Directory of Open Access Journals (Sweden)

    Donghoon Kang

    2013-01-01

    Full Text Available This study is focused on the verification of the key idea of a newly developed steel-concrete composite bridge. The key idea of the proposed bridge is to reduce the design moment by applying vertical prestressing force to steel girders, so that a moment distribution of a continuous span bridge is formed in a simple span bridge. For the verification of the key technology, curvature changes of the bridge should be monitored sequentially at every construction stage. A pair of multiplexed FBG sensor arrays is proposed in order to measure curvature changes in this study. They are embedded in a full-scale test bridge and measured local strains, which are finally converted to curvatures. From the result of curvature changes, it is successfully ensured that the key idea of the proposed bridge, expected theoretically, is viable.
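
    The strain-to-curvature conversion mentioned in the record is, under a plane-sections assumption, a simple difference of two strain readings divided by their vertical separation; the values below are placeholders, not measurements from the test bridge.

      # Curvature from two FBG strain readings at known vertical separation:
      # kappa = (eps_bottom - eps_top) / h. Placeholder values only.
      eps_top = -120e-6        # strain at the upper FBG
      eps_bottom = 340e-6      # strain at the lower FBG
      h = 0.9                  # vertical distance between the sensors, m

      curvature = (eps_bottom - eps_top) / h   # 1/m
      print(f"curvature: {curvature:.6e} 1/m")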

  6. A new method for non-invasive estimation of human muscle fiber type composition.

    Directory of Open Access Journals (Sweden)

    Audrey Baguet

    Full Text Available BACKGROUND: It has been established that excellence in sports with short and long exercise duration requires a high proportion of fast-twitch (FT, or type-II) fibers and slow-twitch (ST, or type-I) fibers, respectively. Until today, the muscle biopsy method is still accepted as the gold standard to measure muscle fiber type composition. Because of its invasive nature and high sampling variance, it would be useful to develop a non-invasive alternative. METHODOLOGY: Eighty-three control subjects, 15 talented young track-and-field athletes, 51 elite athletes and 14 ex-athletes volunteered to participate in the current study. The carnosine content of all 163 subjects was measured in the gastrocnemius muscle by proton magnetic resonance spectroscopy (1H-MRS). Muscle biopsies for fiber typing were taken from 12 untrained males. PRINCIPAL FINDINGS: A significant positive correlation was found between muscle carnosine, measured by 1H-MRS, and the percentage area occupied by type II fibers. Explosive athletes had ∼30% higher carnosine levels compared to a reference population, whereas it was ∼20% lower than normal in typical endurance athletes. Similar results were found in young talents and ex-athletes. When active elite runners were ranked according to their best running distance, a negative sigmoidal curve was found between the logarithm of running distance and muscle carnosine. CONCLUSIONS: Muscle carnosine content shows a good reflection of the disciplines of elite track-and-field athletes and is able to distinguish between individual track running distances. The differences between endurance and sprint muscle types are also observed in young talents and former athletes, suggesting this characteristic is genetically determined and can be applied in early talent identification. This quick method provides a valid alternative for the muscle biopsy method. In addition, this technique may also contribute to the diagnosis and monitoring of many conditions and

  7. Body composition during weight loss in obese patients estimated by dual energy X-ray absorptiometry and by total body potassium

    DEFF Research Database (Denmark)

    Hendel, H W; Gotfredsen, A; Andersen, T

    1996-01-01

    for FFM were strong (r = 0.92 and 0.93). Bland and Altman plots showed limits of agreement of +/-9 kg before and after weight loss; DXA underestimated FFM in women and overestimated FFM in men. DXA accounted for 80% of the lost body weight. The composition of the lost body mass did not differ from...... that estimated by TBK (7.6% FFM and 92.4% FM by TBK; 11% FFM and 89% FM by DXA). CONCLUSION: DXA estimates accurately the body composition and the composition of weight loss in groups of obese subjects. However, the scan table may be too small for patients weighing more than 95 kg....

  8. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation is used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  9. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-01-07

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation is used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  10. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  11. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Ros, A.; Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm x 42 mm x 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.

  12. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42 mm x 42 mm x 10 mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
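
    The idea of maximum-likelihood positioning from the measured centroid and width of the light distribution can be sketched as follows. A simple inverse-square solid-angle light model on a one-dimensional photodetector and Gaussian measurement noise are assumed purely for illustration; the geometry and noise levels are not those of the cited detector.

```python
# Grid-search ML estimate of (lateral position, depth of interaction) from the measured
# centroid and standard deviation of the scintillation-light distribution.
import numpy as np

pads = np.linspace(-21.0, 21.0, 8)       # photodetector pad centres (mm), assumed
crystal_thickness = 10.0                 # mm

def light_model(x0, z0):
    """Expected centroid and standard deviation of the light sampled on the pads
    for a photo-conversion at lateral position x0 and depth z0 above the detector."""
    w = z0 / ((pads - x0) ** 2 + z0 ** 2) ** 1.5   # ~ solid angle subtended by each pad
    w = w / w.sum()
    centroid = np.sum(w * pads)
    sigma = np.sqrt(np.sum(w * (pads - centroid) ** 2))
    return centroid, sigma

def ml_position(c_meas, s_meas, noise_c=0.5, noise_s=0.3):
    """Maximize a Gaussian likelihood of the measured centroid and width over a grid."""
    xs = np.linspace(-20, 20, 201)
    zs = np.linspace(1.0, crystal_thickness, 50)
    best, best_ll = None, -np.inf
    for x0 in xs:
        for z0 in zs:
            c_mod, s_mod = light_model(x0, z0)
            ll = -0.5 * ((c_meas - c_mod) / noise_c) ** 2 \
                 - 0.5 * ((s_meas - s_mod) / noise_s) ** 2
            if ll > best_ll:
                best_ll, best = ll, (x0, z0)
    return best

# Simulate one event and recover its position
true_x, true_z = 7.5, 4.0
c, s = light_model(true_x, true_z)
rng = np.random.default_rng(1)
x_hat, z_hat = ml_position(c + rng.normal(0, 0.5), s + rng.normal(0, 0.3))
print(f"true (x, depth) = ({true_x}, {true_z}), ML estimate = ({x_hat:.1f}, {z_hat:.1f})")
```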

  13. Estimation of differences in trace element composition of Bulgarian summer fruits using ICP-MS

    Directory of Open Access Journals (Sweden)

    G. Toncheva

    2016-06-01

    Full Text Available Abstract. The content of potentially essential and toxic elements (chromium, manganese, iron, copper, nickel, cadmium and arsenic) in Bulgarian fruits such as aronia, morello, cherry, raspberry, nectarine peach, apple type „akane” and pear type „early gold” was investigated. Using ICP-MS we found that raspberry has the highest content of iron (4635.9 ± 53.2 μg kg⁻¹), manganese (5690.9 ± 31.7 μg kg⁻¹) and chromium (150.2 ± 2.5 μg kg⁻¹), while the richest in copper is the nectarine (887.5 ± 31.19 μg kg⁻¹). The content of the toxic elements (nickel, cadmium and arsenic) is significantly below the permissible standards. One-way ANOVA and the subsequent Duncan's test were used to characterize the fruits and to estimate the significance of the chemical elements. The test for multidirectional comparisons indicated that for five of the seven investigated elements (iron, copper, nickel, cadmium, and arsenic) the fruits are statistically distinguishable. According to hierarchical cluster analysis the fruits group into one cluster.

  14. Estimation of percentage body fat by dual-energy x-ray absorptiometry: evaluation by in vivo human elemental composition

    Energy Technology Data Exchange (ETDEWEB)

    Wang Zimian; Pierson, Richard N [Obesity Research Center, St Luke' s-Roosevelt Hospital, Columbia University, College of Physicians and Surgeons, New York, NY (United States); Heymsfield, Steven B [Clinical Research, Metabolism, Merck Research Laboratories, Rahway, NJ (United States); Chen Zhao [Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ (United States); Zhu Shankuan, E-mail: zw28@columbia.ed [Obesity and Body Composition Research Center, Zhejiang University, School of Public Health, Hangzhou (China)

    2010-05-07

    Dual-energy x-ray absorptiometry (DXA) is widely applied for estimating body fat. The percentage of body mass as fat (%fat) is predicted from a DXA-estimated R_ST value defined as the ratio of soft tissue attenuation at two photon energies (e.g., 40 keV and 70 keV). Theoretically, the R_ST concept depends on the mass of each major element in the human body. The DXA R_ST values, however, have never been fully evaluated by measured human elemental composition. The present investigation evaluated the DXA R_ST value by the total body mass of 11 major elements and the DXA %fat by the five-component (5C) model, respectively. Six elements (i.e. C, N, Na, P, Cl and Ca) were measured by in vivo neutron activation analysis, and potassium (K) by whole-body 40K counting in 27 healthy adults. Models were developed for predicting the total body mass of four additional elements (i.e. H, O, Mg and S). The elemental content of soft tissue, after correction for bone mineral elements, was used to predict the R_ST values. The DXA R_ST values were strongly associated with the R_ST values predicted from elemental content (r = 0.976, P < 0.001), although there was a tendency for the elemental-predicted R_ST to systematically exceed the DXA-measured R_ST (mean ± SD, 1.389 ± 0.024 versus 1.341 ± 0.024). DXA-estimated %fat was strongly associated with 5C %fat (24.4 ± 12.0% versus 24.9 ± 11.1%, r = 0.983, P < 0.001). DXA R_ST is evaluated by in vivo elemental composition, and the present study supports the underlying physical concept and accuracy of the DXA method for estimating %fat.

  15. Life estimation and analysis of dielectric strength, hydrocarbon backbone and oxidation of high voltage multi stressed EPDM composites

    Science.gov (United States)

    Khattak, Abraiz; Amin, Muhammad; Iqbal, Muhammad; Abbas, Naveed

    2018-02-01

    Micro- and nanocomposites of ethylene propylene diene monomer (EPDM) have recently been studied for different characteristics. A study of life estimation and of the effects of multiple stresses on dielectric strength, backbone scission and oxidation is also vital for the endorsement of these composites for high-voltage insulation and other outdoor applications. In order to achieve these goals, unfilled EPDM and its micro- and nanocomposites were prepared at 23 phr microsilica and 6 phr nanosilica loadings, respectively. The prepared samples were energized at 2.5 kV AC and subjected for a long time to heat, ultraviolet radiation, acid rain, humidity and salt fog in an accelerated manner in the laboratory. Dielectric strength, leakage current and the intensities of the saturated backbone and carbonyl groups were measured periodically. Loss in dielectric strength, increase in leakage current, and backbone degradation and oxidation were observed in all samples. These effects were smallest in the EPDM nanocomposite. The nanocomposite sample also demonstrated the longest shelf life.

  16. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-11-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  17. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.

    2017-01-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
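
    The quantity HLIBCov accelerates is the joint Gaussian log-likelihood under a Matérn covariance. A dense reference sketch of that computation is shown below for a small synthetic data set, using the closed-form Matérn with smoothness 3/2 to avoid Bessel functions; the parameter values are illustrative and the dense Cholesky is exactly what the $\mathcal{H}$-matrix machinery replaces at large $n$.

```python
# Dense reference evaluation of the Gaussian log-likelihood with a Matern(3/2) covariance.
import numpy as np

def matern32(dists, variance, length):
    r = np.sqrt(3.0) * dists / length
    return variance * (1.0 + r) * np.exp(-r)

def gaussian_loglik(params, locs, z):
    variance, length, nugget = params
    d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
    C = matern32(d, variance, length) + nugget * np.eye(len(z))
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (logdet + alpha @ alpha + len(z) * np.log(2.0 * np.pi))

# Synthetic data on a small random set of locations (assumed true parameters)
rng = np.random.default_rng(0)
n = 400
locs = rng.uniform(0, 1, size=(n, 2))
d = np.linalg.norm(locs[:, None, :] - locs[None, :, :], axis=-1)
C_true = matern32(d, 1.0, 0.2) + 1e-4 * np.eye(n)
z = np.linalg.cholesky(C_true) @ rng.normal(size=n)

# Crude profile over the correlation length; variance and nugget held fixed for brevity
lengths = np.linspace(0.05, 0.5, 10)
lls = [gaussian_loglik((1.0, ell, 1e-4), locs, z) for ell in lengths]
print("ML correlation length (grid):", lengths[int(np.argmax(lls))])
```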

  18. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
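
    The projected-gradient idea can be sketched on the smallest possible instance: a single qubit reconstructed from Pauli measurement frequencies, with each gradient step followed by a projection back onto the set of density matrices. The plain (non-accelerated) iteration, the measurement frequencies and the step size below are illustrative assumptions, not the algorithm or data of the cited work.

```python
# Projected-gradient maximum-likelihood reconstruction of a single-qubit state.
import numpy as np

# Pauli measurement effects {(I +/- sigma_k)/2} for k = x, y, z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
effects = [(I2 + s) / 2 for s in (sx, sy, sz)] + [(I2 - s) / 2 for s in (sx, sy, sz)]

# Observed relative frequencies for the six outcomes (assumed data; each +/- pair sums to 1/3)
freqs = np.array([0.80, 0.45, 0.65, 0.20, 0.55, 0.35]) / 3.0

def project_to_density_matrix(H):
    """Project a Hermitian matrix onto {rho >= 0, tr rho = 1} by projecting its
    eigenvalues onto the probability simplex."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u + (1.0 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[k] - 1.0) / (k + 1)
    w_proj = np.maximum(w - tau, 0.0)
    return (V * w_proj) @ V.conj().T

def neg_loglik_grad(rho):
    """Gradient of -sum_k f_k log tr(E_k rho) with respect to rho."""
    grad = np.zeros((2, 2), dtype=complex)
    for f, E in zip(freqs, effects):
        grad -= f * E / np.real(np.trace(E @ rho))
    return grad

rho = I2 / 2            # start from the maximally mixed state
step = 0.2              # fixed step size, assumed
for _ in range(500):
    rho = project_to_density_matrix(rho - step * neg_loglik_grad(rho))

print("Reconstructed state:\n", np.round(rho, 3))
```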

  19. Safe semi-supervised learning based on weighted likelihood.

    Science.gov (United States)

    Kawakita, Masanori; Takeuchi, Jun'ichi

    2014-05-01

    We are interested in developing a safe semi-supervised learning that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′ → ∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′

  20. Non-invasive optical estimate of tissue composition to differentiate malignant from benign breast lesions: A pilot study

    Science.gov (United States)

    Taroni, Paola; Paganoni, Anna Maria; Ieva, Francesca; Pifferi, Antonio; Quarto, Giovanna; Abbate, Francesca; Cassano, Enrico; Cubeddu, Rinaldo

    2017-01-01

    Several techniques are being investigated as a complement to screening mammography, to reduce its false-positive rate, but results are still insufficient to draw conclusions. This initial study explores time domain diffuse optical imaging as an adjunct method to classify non-invasively malignant vs benign breast lesions. We estimated differences in tissue composition (oxy- and deoxyhemoglobin, lipid, water, collagen) and absorption properties between lesion and average healthy tissue in the same breast applying a perturbative approach to optical images collected at 7 red-near infrared wavelengths (635-1060 nm) from subjects bearing breast lesions. The Discrete AdaBoost procedure, a machine-learning algorithm, was then exploited to classify lesions based on optically derived information (either tissue composition or absorption) and risk factors obtained from patient’s anamnesis (age, body mass index, familiarity, parity, use of oral contraceptives, and use of Tamoxifen). Collagen content, in particular, turned out to be the most important parameter for discrimination. Based on the initial results of this study the proposed method deserves further investigation.

  1. Sampling errors associated with soil composites used to estimate mean Ra-226 concentrations at an UMTRA remedial-action site

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Baker, K.R.; Nelson, R.A.; Miller, R.H.; Miller, M.L.

    1987-07-01

    The decision whether to take additional remedial action (removal of soil) from regions contaminated by uranium mill tailings involves collecting 20 plugs of soil from each 10-m by 10-m plot in the region and analyzing a 500-g portion of the mixed soil for 226Ra. A soil sampling study was conducted in the windblown mill-tailings flood plain area at Shiprock, New Mexico, to evaluate whether reducing the number of soil plugs to 9 would have any appreciable impact on remedial-action decisions. The results of the Shiprock study are described and used in this paper to develop a simple model of the standard deviation of 226Ra measurements on composite samples formed from 21 or fewer plugs. This model is used to predict, as a function of the number of soil plugs per composite, the percent accuracy with which the mean 226Ra concentration in surface soil can be estimated, and the probability of making incorrect remedial action decisions on the basis of statistical tests. 8 refs., 15 figs., 9 tabs
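
    A sketch of the kind of model described above is given below: the standard deviation of a composite measurement as a function of the number of plugs, and the resulting accuracy of the estimated plot mean. The variance components and the assumed mean concentration are illustrative placeholders, not the Shiprock values.

```python
# Standard deviation of a Ra-226 measurement on a composite formed from n soil plugs.
from math import sqrt

sigma_plug = 6.0      # pCi/g, plug-to-plug spatial variability within a plot (assumed)
sigma_meas = 1.5      # pCi/g, analytical error on a 500-g aliquot (assumed)

def composite_sd(n_plugs):
    """Averaging n plugs reduces the spatial component by 1/n; the analytical error of the
    single aliquot measured afterwards is unaffected."""
    return sqrt(sigma_plug**2 / n_plugs + sigma_meas**2)

def percent_accuracy(n_plugs, mean=15.0, z=1.96):
    """Half-width of a ~95% interval for the plot mean from one composite,
    expressed as a percentage of an assumed mean concentration."""
    return 100.0 * z * composite_sd(n_plugs) / mean

for n in (21, 9, 5):
    print(f"{n:2d} plugs: sd = {composite_sd(n):4.2f} pCi/g, "
          f"accuracy ~ +/-{percent_accuracy(n):4.1f}% of the mean")
```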

  2. Variational approach for spatial point process intensity estimation

    DEFF Research Database (Denmark)

    Coeurjolly, Jean-Francois; Møller, Jesper

    is assumed to be of log-linear form β+θ⊤z(u) where z is a spatial covariate function and the focus is on estimating θ. The variational estimator is very simple to implement and quicker than alternative estimation procedures. We establish its strong consistency and asymptotic normality. We also discuss its...... finite-sample properties in comparison with the maximum first order composite likelihood estimator when considering various inhomogeneous spatial point process models and dimensions as well as settings where z is completely or only partially known.

  3. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four different models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy and a DSGE-VAR model. The performance of the models is assessed using historical data for the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracy of the individual models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
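
    The predictive-likelihood weighting scheme mentioned above can be sketched in a few lines: each model's weight is proportional to the exponential of its accumulated log predictive likelihood over an evaluation window, and the combined forecast is the corresponding mixture. The log-likelihood values and Gaussian forecast parameters below are purely illustrative, not output of the BVAR/DSGE models in the study.

```python
# Combining density forecasts with predictive-likelihood weights.
import numpy as np
from scipy.stats import norm

# Past one-step-ahead predictive log-likelihoods of 4 models over an evaluation window (assumed)
past_loglik = np.array([
    [-1.10, -0.95, -1.20, -1.05],   # model 1 (e.g. BVAR variant A)
    [-1.00, -0.90, -1.15, -1.10],   # model 2 (e.g. BVAR variant B)
    [-1.30, -1.25, -1.40, -1.35],   # model 3 (e.g. DSGE)
    [-1.05, -1.00, -1.18, -1.08],   # model 4 (e.g. DSGE-VAR)
])

# Weights proportional to exp(accumulated log predictive likelihood), computed stably
score = past_loglik.sum(axis=1)
w = np.exp(score - score.max())
w /= w.sum()
print("combination weights:", np.round(w, 3))

# Current-period Gaussian density forecasts of, say, GDP growth per model (assumed)
means = np.array([2.1, 1.8, 2.5, 2.0])
sds = np.array([0.6, 0.5, 0.8, 0.6])

# The combined forecast is a mixture of the individual densities
print("combined point forecast:", np.round(np.sum(w * means), 2))
print("combined P(growth < 0):", np.round(np.sum(w * norm.cdf(0.0, means, sds)), 3))
```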

  4. Teaching Confirmatory Factor Analysis to Non-Statisticians: A Case Study for Estimating Composite Reliability of Psychometric Instruments

    Science.gov (United States)

    Gajewski, Byron J.; Jiang, Yu; Yeh, Hung-Wen; Engelman, Kimberly; Teel, Cynthia; Choi, Won S.; Greiner, K. Allen; Daley, Christine Makosky

    2013-01-01

    Texts and software that we are currently using for teaching multivariate analysis to non-statisticians lack coverage of confirmatory factor analysis (CFA). The purpose of this paper is to provide educators with a complement to these resources that includes CFA and its computation. We focus on how to use CFA to estimate a “composite reliability” of a psychometric instrument. This paper provides guidance for introducing, via a case study, the non-statistician to CFA. As a complement to our instruction about the more traditional SPSS, we successfully piloted the software R for estimating CFA on nine non-statisticians. This approach can be used with healthcare graduate students taking a multivariate course, as well as modified for community stakeholders of our Center for American Indian Community Health (e.g. community advisory boards, summer interns, & research team members). The placement of CFA at the end of the class is strategic and gives us an opportunity to do some innovative teaching: (1) build ideas for understanding the case study using previous course work (such as ANOVA); (2) incorporate multi-dimensional scaling (that students already learned) into the selection of a factor structure (new concept); (3) use interactive data from the students (active learning); (4) review matrix algebra and its importance to psychometric evaluation; (5) show students how to do the calculation on their own; and (6) give students access to an actual recent research project. PMID:24772373
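
    The composite-reliability calculation the case study builds towards reduces to a simple formula once the CFA loadings are in hand: the squared sum of the standardized loadings divided by that quantity plus the summed error variances. The loadings below are illustrative values for a hypothetical one-factor instrument, not the instrument analyzed in the paper.

```python
# Composite reliability (McDonald's omega) from one-factor CFA output.
import numpy as np

# Standardized factor loadings of, say, 5 items on a single factor (assumed values)
loadings = np.array([0.72, 0.65, 0.80, 0.58, 0.70])
error_var = 1.0 - loadings**2          # standardized error variances

def composite_reliability(lam, theta):
    """omega = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    s = lam.sum()
    return s**2 / (s**2 + theta.sum())

print(f"composite reliability (omega) = {composite_reliability(loadings, error_var):.3f}")
```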

  5. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  6. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-09-03

    We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.

  7. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms....... This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to the maximum likelihood estimation based on finite difference gradient computation, we get a significant speedup...

  8. Minimum Distance Estimation on Time Series Analysis With Little Data

    National Research Council Canada - National Science Library

    Tekin, Hakan

    2001-01-01

    .... Minimum distance estimation has been demonstrated to outperform standard approaches, including maximum likelihood estimators and least squares, in estimating statistical distribution parameters with very small data sets...

  9. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan

    2017-07-13

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present the result for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.

  10. Gaussian likelihood inference on data from trans-Gaussian random fields with Matérn covariance function

    KAUST Repository

    Yan, Yuan; Genton, Marc G.

    2017-01-01

    Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present the result for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
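
    The Tukey g-and-h construction used in this simulation design is a simple pointwise transformation of a Gaussian field. The sketch below generates a one-dimensional Gaussian field with an exponential covariance and pushes it through the transform to introduce skewness (g) and heavy tails (h); the covariance and parameter values are illustrative assumptions, not the paper's settings.

```python
# Tukey g-and-h transformation of a Gaussian random field.
import numpy as np

def tukey_gh(z, g=0.5, h=0.2):
    """Tukey g-and-h transform; reduces to the identity when g -> 0 and h = 0."""
    if abs(g) < 1e-12:
        return z * np.exp(h * z**2 / 2.0)
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

rng = np.random.default_rng(0)
n = 500
x = np.linspace(0, 1, n)
# Gaussian field with exponential (Matern nu = 1/2) covariance, range 0.1 (assumed)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1) + 1e-10 * np.eye(n)
z = np.linalg.cholesky(C) @ rng.normal(size=n)

y = tukey_gh(z)                    # trans-Gaussian (skewed, heavy-tailed) field
print("Gaussian field:      skew ~ %.2f" % (np.mean((z - z.mean())**3) / z.std()**3))
print("Tukey g-and-h field: skew ~ %.2f" % (np.mean((y - y.mean())**3) / y.std()**3))
```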

  11. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can occur if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model
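
    The generic calibration idea described above, a likelihood comparing simulated and measured data driving a Bayesian sampler, can be sketched with a toy model in place of PaSim or Biome-BGC. Everything below (the exponential-decay stand-in model, the noise level, the proposal scales) is an illustrative assumption.

```python
# Random-walk Metropolis calibration driven by a Gaussian goodness-of-fit likelihood.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)

def model(theta, t):
    a, k = theta
    return a * np.exp(-k * t)

# Synthetic "measurements" from assumed true parameters plus observation noise
theta_true, sigma_obs = (3.0, 0.4), 0.2
obs = model(theta_true, t) + rng.normal(0, sigma_obs, t.size)

def log_likelihood(theta):
    resid = obs - model(theta, t)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Metropolis sampler over the unknown parameters (flat prior on the positive range)
theta = np.array([1.0, 1.0])
ll = log_likelihood(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.02])
    if np.all(prop > 0):
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])          # discard burn-in
print("posterior mean estimate:", np.round(chain.mean(axis=0), 2), "true:", theta_true)
```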

  12. Estimation of soil respiration rates and soil gas isotopic composition for the different land use of Ultisols from Calhoun CZO.

    Science.gov (United States)

    Cherkinsky, A.; Brecheisen, Z.; Richter, D. D., Jr.; Sheng, H.

    2017-12-01

    CO2 flux from soil is significant in most ecosystems and can account for more than 2/3 of total ecosystem respiration. In many cases CO2 fluxes from soil are estimated using eddy covariance techniques or the classical chamber method with measurements of bulk concentrations and the isotope composition of CO2. Whereas most of these studies estimate flux from the soil surface, we analyzed its concentration and isotope composition directly in soil profiles down to 8.5 m depth. This experiment was conducted in Sumter National Forest in the summer of 2016. The samples were collected from 3 sites with different land use histories: a) reference hardwood stands, mainly of oak and hickory, taken to have never been cultivated; b) cultivated plots, which were also used for growing cotton prior to the 1950s but for the last 50 years for growing corn, wheat, legumes, sorghum, and sunflowers; c) pine stands, which had been used for growing cotton from the beginning of the 19th century, were abandoned in the 1920s and were planted with loblolly pine. We analyzed 3 replicates of each land use. CO2 and O2 concentrations were measured in the field, and collected gas samples were analyzed for Δ14C, δ13C and δ18O. The CO2 concentration for all types of land use has a maximum at about 3 m depth, approximately the same depth as the minimum of the O2 concentration. Isotope analyses revealed that the carbon isotopic composition tends to become lighter with depth for all three types of land use: at the cultivated site it changes from -18‰ at 0.5 m to -21‰ at 5 m; at the pine site from -22‰ to -25‰; and at the hardwood site from -21.5‰ to -24.5‰, correspondingly. The O2 isotopic composition does not change significantly. Based on the analysis of Δ14C, the turnover rate of CO2 becomes slower as depth increases. In the first 50 cm the exchange rate is fastest at the cultivated site, likely due to annual tilling, and the concentration of 14C is actually equal to atmospheric. However, the turnover rate of Δ14C in soil CO2 slows down significantly as

  13. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ^0_1, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m_{χ^0_1} 0) but the scalar mass m_0 is poorly constrained. In the wino-LSP case, m_{3/2} is constrained to about 900 TeV and m_{χ^0_1} to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m_{3/2} has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m_{χ^0_1} is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g-2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ^0_1 contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B_{s,d} → μ^+ μ^-) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  14. Estimating Composite Curve Number Using an Improved SCS-CN Method with Remotely Sensed Variables in Guangzhou, China

    Directory of Open Access Journals (Sweden)

    Qihao Weng

    2013-03-01

    Full Text Available The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable which is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN by using remote sensing variables, including vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using Linear Spectral Mixture Analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring once every five years on average) and 81 mm (occurring once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
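
    A sketch of the composite-CN idea is given below: area-weight curve numbers by the V-I-S fractions of a cell and apply the standard SCS-CN runoff equation. The per-component CN values are illustrative assumptions, not the TR-55 lookup or the calibration used in the study.

```python
# Composite curve number from V-I-S fractions and SCS-CN direct runoff.
CN_VEGETATION = 61.0   # assumed CN for the vegetation component
CN_IMPERVIOUS = 98.0   # assumed CN for impervious surface
CN_SOIL = 85.0         # assumed CN for bare/exposed soil

def composite_cn(f_veg, f_imp, f_soil):
    """Area-weighted composite curve number from V-I-S fractions (should sum to ~1)."""
    return f_veg * CN_VEGETATION + f_imp * CN_IMPERVIOUS + f_soil * CN_SOIL

def scs_runoff(p_mm, cn):
    """Direct runoff depth (mm) from the SCS-CN method with the usual Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example cell: 30% vegetation, 55% impervious surface, 15% soil (assumed fractions)
cn = composite_cn(0.30, 0.55, 0.15)
for p in (57.0, 81.0):                 # the two design storms mentioned in the abstract
    print(f"P = {p:4.0f} mm, composite CN = {cn:.1f}, runoff = {scs_runoff(p, cn):.1f} mm")
```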

  15. Estimating and understanding the efficiency of nanoparticles in enhancing the conductivity of carbon nanotube/polymer composites

    KAUST Repository

    Mora Cordova, Angel

    2018-05-22

    Carbon nanotubes (CNTs) have been widely used to improve the electrical conductivity of polymers. However, not all CNTs actively participate in the conduction of electricity since they have to be close to each other to form a conductive network. The amount of active CNTs is rarely discussed as it is not captured by percolation theory. However, this amount is very important information that could be used in a definition of loading efficiency for CNTs (and, in general, for any nanofiller). Thus, we develop a computational tool to quantify the amount of CNTs that actively participates in the conductive network. We then use this quantity to propose a definition of loading efficiency. We compare our results with an expression presented in the literature for the fraction of percolated CNTs (although not presented as a definition of efficiency). We found that this expression underestimates the fraction of percolated CNTs. We thus propose an improved estimation. We also study how efficiency changes with CNT loading and the CNT aspect ratio. We use this concept to study the size of the representative volume element (RVE) for polymers loaded with CNTs, which has received little attention in the past. Here, we find the size of the RVE based on both loading efficiency and electrical conductivity such that the scales of “morphological” and “functional” RVEs can be compared. Additionally, we study the relations between particle and network properties (such as efficiency, CNT conductivity and junction resistance) and the conductivity of CNT/polymer composites. We present a series of recommendations to improve the conductivity of a composite based on our simulation results.

  16. Estimating and understanding the efficiency of nanoparticles in enhancing the conductivity of carbon nanotube/polymer composites

    KAUST Repository

    Mora Cordova, Angel; Han, Fei; Lubineau, Gilles

    2018-01-01

    Carbon nanotubes (CNTs) have been widely used to improve the electrical conductivity of polymers. However, not all CNTs actively participate in the conduction of electricity since they have to be close to each other to form a conductive network. The amount of active CNTs is rarely discussed as it is not captured by percolation theory. However, this amount is very important information that could be used in a definition of loading efficiency for CNTs (and, in general, for any nanofiller). Thus, we develop a computational tool to quantify the amount of CNTs that actively participates in the conductive network. We then use this quantity to propose a definition of loading efficiency. We compare our results with an expression presented in the literature for the fraction of percolated CNTs (although not presented as a definition of efficiency). We found that this expression underestimates the fraction of percolated CNTs. We thus propose an improved estimation. We also study how efficiency changes with CNT loading and the CNT aspect ratio. We use this concept to study the size of the representative volume element (RVE) for polymers loaded with CNTs, which has received little attention in the past. Here, we find the size of the RVE based on both loading efficiency and electrical conductivity such that the scales of “morphological” and “functional” RVEs can be compared. Additionally, we study the relations between particle and network properties (such as efficiency, CNT conductivity and junction resistance) and the conductivity of CNT/polymer composites. We present a series of recommendations to improve the conductivity of a composite based on our simulation results.
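
    The "loading efficiency" idea can be sketched with a deliberately simplified geometry: generate a random dispersion of fillers, build the contact network, and report the fraction of fillers that belong to a cluster spanning the sample. Here CNTs are reduced to soft-core disks in 2D to keep the geometry trivial; the cited work uses full 3D nanotube geometries, so the numbers below are illustrative only.

```python
# Fraction of fillers in a percolating (sample-spanning) network via union-find.
import numpy as np

rng = np.random.default_rng(0)
L = 1.0                 # sample size (arbitrary units)
n_fillers = 800
contact_radius = 0.05   # two fillers are "connected" if closer than this (assumed)

pts = rng.uniform(0, L, size=(n_fillers, 2))

# Union-find over all filler pairs closer than the contact radius
parent = np.arange(n_fillers)
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
def union(i, j):
    parent[find(i)] = find(j)

d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
for i, j in zip(*np.nonzero(d2 < contact_radius**2)):
    if i < j:
        union(i, j)

# A cluster "percolates" if it touches both the left and right boundary strips
roots = np.array([find(i) for i in range(n_fillers)])
touch_left = set(roots[pts[:, 0] < contact_radius])
touch_right = set(roots[pts[:, 0] > L - contact_radius])
percolating = touch_left & touch_right

active = np.isin(roots, list(percolating))
efficiency = active.mean() if percolating else 0.0
print(f"{n_fillers} fillers, loading efficiency (fraction in a spanning network): {efficiency:.2f}")
```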

  17. Factors controlling shell carbon isotopic composition of land snail Acusta despecta sieboldiana estimated from lab culturing experiment

    Science.gov (United States)

    Zhang, N.; Yamada, K.; Suzuki, N.; Yoshida, N.

    2014-05-01

    The carbon isotopic composition (δ13C) of land snail shell carbonate derives from three potential sources: diet, atmospheric CO2, and ingested carbonate (limestone). However, their relative contributions remain unclear. Under various environmental conditions, we cultured one land snail species, Acusta despecta sieboldiana, collected from Yokohama, Japan, and confirmed that all of these sources affect shell carbonate δ13C values. Herein, we consider the influences of metabolic rates and temperature on the carbon isotopic composition of the shell carbonate. Based on previous works and on results obtained in this study, a simple but credible framework is presented for discussion of how each source and environmental parameter can affect shell carbonate δ13C values. According to this framework and some reasonable assumptions, we have estimated the contributions of different carbon sources for each snail individual: for cabbage-fed (C3 plant) groups, the contributions of diet, atmospheric CO2 and ingested limestone vary over 66-80%, 16-24%, and 0-13%, respectively. For corn-fed (C4 plant) groups, because of possible food stress (lower ability to consume the C4 plant), the values vary over 56-64%, 18-20%, and 16-26%, respectively. Moreover, we present new evidence that snails discriminate between C3 and C4 plants when choosing food. Therefore, we suggest that food preferences must be considered adequately when applying δ13C in paleo-environment studies. Finally, we inferred that, during egg laying and hatching of our cultured snails, carbon isotope fractionation is controlled only by the isotopic exchange of the calcite-HCO3--aragonite equilibrium.

  18. Factors controlling shell carbon isotopic composition of land snail Acusta despecta sieboldiana estimated from laboratory culturing experiment

    Science.gov (United States)

    Zhang, N.; Yamada, K.; Suzuki, N.; Yoshida, N.

    2014-10-01

    The carbon isotopic composition (δ13C) of land snail shell carbonate derives from three potential sources: diet, atmospheric CO2, and ingested carbonate (limestone). However, their relative contributions remain unclear. Under various environmental conditions, we cultured one land snail subspecies, Acusta despecta sieboldiana, collected from Yokohama, Japan, and confirmed that all of these sources affect shell carbonate δ13C values. Herein, we consider the influences of metabolic rates and temperature on the carbon isotopic composition of the shell carbonate. Based on results obtained from previous works and this study, a simple but credible framework is presented to illustrate how each source and environmental parameter affects shell carbonate δ13C values. According to this framework and some reasonable assumptions, we estimated the contributions of different carbon sources for each snail individual: for cabbage-fed (C3 plant) groups, the contributions of diet, atmospheric CO2, and ingested limestone vary in the ranges of 66-80, 16-24, and 0-13%, respectively. For corn-fed (C4 plant) groups, because of the possible food stress (less ability to consume C4 plants), the values vary in the ranges of 56-64, 18-20, and 16-26%, respectively. Moreover, according to the literature and our observations, the subspecies we cultured in this study show preferences towards different plant species for food. Therefore, we suggest that the potential food preference should be considered adequately for some species in paleoenvironment studies. Finally, we inferred that only the isotopic exchange of the calcite-HCO3--aragonite equilibrium during egg laying and hatching of our cultured snails controls carbon isotope fractionation.
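
    The three-source framework referred to in both versions of this abstract amounts to a mass-balance mixing calculation. The forward sketch below predicts a shell δ13C value from assumed source contributions; the end-member δ13C values and the single net fractionation offset are illustrative assumptions, not the parameters derived in the study.

```python
# Forward mass-balance sketch: shell carbonate d13C from three carbon sources.
def shell_d13c(f_diet, f_atm, f_lime,
               d13c_diet=-27.0,    # C3 plant diet (assumed end-member)
               d13c_atm=-8.0,      # atmospheric CO2 (assumed end-member)
               d13c_lime=0.0,      # marine limestone (assumed end-member)
               net_offset=12.0):   # net metabolic + carbonate fractionation (assumed)
    """Weighted mixing of the three carbon sources plus a single net isotopic offset."""
    assert abs(f_diet + f_atm + f_lime - 1.0) < 1e-9
    mix = f_diet * d13c_diet + f_atm * d13c_atm + f_lime * d13c_lime
    return mix + net_offset

# Mid-points of the contribution ranges reported for the cabbage-fed groups
print("predicted shell d13C ~ %.1f permil" % shell_d13c(0.73, 0.20, 0.07))
```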

  19. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    Science.gov (United States)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.

  20. Robust Biometric Score Fusion by Naive Likelihood Ratio via Receiver Operating Characteristics

    NARCIS (Netherlands)

    Tao, Q.; Veldhuis, Raymond N.J.

    This paper presents a novel method of fusing multiple biometrics on the matching score level. We estimate the likelihood ratios of the fused biometric scores via the individual receiver operating characteristics (ROCs), which construct the Naive Bayes classifier. Using a limited number of operation
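
    The naive likelihood-ratio fusion idea can be sketched as follows: per-modality genuine and impostor score densities are modelled, and the fused score is the sum of the per-modality log likelihood ratios. Gaussian score models, the distribution parameters and the decision threshold are illustrative assumptions; the cited work derives the likelihood ratios from the ROCs rather than from fitted densities.

```python
# Naive-Bayes likelihood-ratio fusion of two biometric matchers.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed score distributions for two modalities (e.g. face and fingerprint)
gen_params = [(2.0, 1.0), (1.5, 0.8)]    # (mean, sd) of genuine scores per modality
imp_params = [(0.0, 1.0), (0.0, 0.8)]    # (mean, sd) of impostor scores per modality

def fused_log_lr(scores):
    """Sum of per-modality log likelihood ratios (independence / naive-Bayes assumption)."""
    return sum(norm.logpdf(s, *g) - norm.logpdf(s, *i)
               for s, g, i in zip(scores, gen_params, imp_params))

# Simulate genuine and impostor attempts and check the separation of the fused score
genuine = [fused_log_lr([rng.normal(*g) for g in gen_params]) for _ in range(2000)]
impostor = [fused_log_lr([rng.normal(*i) for i in imp_params]) for _ in range(2000)]

threshold = 0.0                                   # log-LR = 0: equal-prior Bayes decision
far = np.mean(np.array(impostor) > threshold)     # false accept rate
frr = np.mean(np.array(genuine) <= threshold)     # false reject rate
print(f"FAR ~ {far:.3f}, FRR ~ {frr:.3f} at log-LR threshold 0")
```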

  1. The Likelihood of Parent-Adult Child Coresidence: Effects of Family Structure and Parental Characteristics.

    Science.gov (United States)

    Aquilino, William S.

    1990-01-01

    Estimated influence of child, parent, and family structural characteristics on likelihood of parents having coresident adult child, based on national sample of 4,893 parents. Results indicated most parents maintained own households and most parents and adult children who coresided lived in parents' home. Family structure was found to exert strong…

  2. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-01-01

    and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters

  3. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  4. Likelihood of Suicidality at Varying Levels of Depression Severity: A Re-Analysis of NESARC Data

    Science.gov (United States)

    Uebelacker, Lisa A.; Strong, David; Weinstock, Lauren M.; Miller, Ivan W.

    2010-01-01

    Although it is clear that increasing depression severity is associated with more risk for suicidality, less is known about at what levels of depression severity the risk for different suicide symptoms increases. We used item response theory to estimate the likelihood of endorsing suicide symptoms across levels of depression severity in an…

  5. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
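
    A simpler relative of the likelihood-informed samplers described above is the preconditioned Crank-Nicolson (pCN) proposal, which is also well defined on function space and whose acceptance rate does not collapse as the discretization of the unknown function is refined. The sketch below applies it to a toy denoising problem with a Gaussian process prior; the covariance, noise level and step size are illustrative assumptions, and this is not the DILI construction itself.

```python
# Preconditioned Crank-Nicolson MCMC on a discretized function-space posterior.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # discretization of the unknown function
x = np.linspace(0, 1, n)

# Gaussian reference (prior) measure N(0, C) with a squared-exponential covariance (assumed)
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-8 * np.eye(n)
L = np.linalg.cholesky(C)
def sample_prior():
    return L @ rng.normal(size=n)

# Toy data: a noisy observation of an (unknown) true function
truth = np.sin(3 * np.pi * x)
sigma = 0.3
y = truth + rng.normal(0, sigma, n)

def neg_log_lik(u):                      # Phi(u): data-misfit functional
    return 0.5 * np.sum((y - u)**2) / sigma**2

# pCN proposal: v = sqrt(1 - beta^2) u + beta xi, xi ~ N(0, C); the prior terms cancel,
# so the Metropolis-Hastings accept ratio involves only the likelihood.
beta, n_iter = 0.1, 20000
u = sample_prior()
phi_u = neg_log_lik(u)
accepted, mean_u = 0, np.zeros(n)
for _ in range(n_iter):
    v = np.sqrt(1 - beta**2) * u + beta * sample_prior()
    phi_v = neg_log_lik(v)
    if np.log(rng.uniform()) < phi_u - phi_v:
        u, phi_u = v, phi_v
        accepted += 1
    mean_u += u
mean_u /= n_iter
print(f"acceptance rate ~ {accepted / n_iter:.2f}, "
      f"posterior-mean RMSE vs truth ~ {np.sqrt(np.mean((mean_u - truth)**2)):.3f}")
```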

  6. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  7. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Brown, Alan; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; a final tiebreaker set, as currently being used in the US Open, reduces the length of matches; a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
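
    The effect described in the abstract can be reproduced with a small Monte Carlo sketch comparing the distribution of the number of games in a final set played as an advantage set versus a tiebreak set. For simplicity the sketch assumes each game is won independently by the better player with a fixed probability p, ignoring serve alternation; the generating-function calculations in the paper are exact, whereas this is only an illustrative simulation with invented parameter values.

      import random

      def final_set_games(p, advantage, rng):
          """Number of games in a final set, assuming the better player wins each
          game independently with probability p (serve effects ignored)."""
          a = b = 0
          while True:
              if rng.random() < p:
                  a += 1
              else:
                  b += 1
              if advantage:
                  if (a >= 6 or b >= 6) and abs(a - b) >= 2:
                      return a + b
              else:
                  if a == 6 and b == 6:
                      return a + b + 1          # tiebreak counted as one extra game
                  if (a >= 6 or b >= 6) and abs(a - b) >= 2:
                      return a + b

      def tail_probability(p, advantage, threshold=20, n_sim=100_000):
          """Estimated probability that the final set lasts more than `threshold` games."""
          rng = random.Random(42)
          long_sets = sum(final_set_games(p, advantage, rng) > threshold for _ in range(n_sim))
          return long_sets / n_sim

      for adv in (True, False):
          print("advantage set" if adv else "tiebreak set",
                tail_probability(p=0.52, advantage=adv))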

  8. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef M.

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  9. New estimates of oxygen isotope fractionation by plants and soils - Implications for the isotopic composition of the atmosphere

    International Nuclear Information System (INIS)

    Angert, A.; Luz, B.

    2002-01-01

    Oxygen concentration and δ18O of O2 have been monitored in light and heavy soils. Steep oxygen gradients were present at the heavy-soil site (the minimal O2 concentration was 1% at 150 cm depth) and δ18O values typically ranged from 0 per mille to -1.6 per mille relative to air O2. At the light-soil site, the O2 concentration was 20.38% to 20.53% and δ18O values ranged from -0.06±0.015 per mille to 0.06±0.015 per mille relative to atmospheric O2. The fractionation in soil respiration was estimated from the observed [O2] and δ18O profiles and their change with time by a five-box numerical model. Diffusion due to concentration and temperature gradients was taken into account. Good agreement was found between the model results and the measured values. The average discrimination against 18O in the two study sites was 12±1 per mille. The current understanding of the composition of air O2 attributes the magnitude of the fractionation in soil respiration to biochemical mechanisms alone, so the discrimination against 18O is assumed to be 18 per mille in cyanide-sensitive dark respiration and 25 per mille to 30 per mille in cyanide-resistant respiration. The discrimination we report is significantly less than in dark respiration. This overall low discrimination is explained by slow diffusion in soil aggregates and in root tissues, which results in low O2 concentration at the consumption site. Since about half of terrestrial respiration occurs in soils, our new discrimination estimate significantly lowers the discrimination value for terrestrial uptake. Higher than currently assumed discrimination was found in experiments with illuminated plants. This high discrimination might compensate for the low discrimination found in soils. (author)

  10. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  11. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets+E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R}-χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  12. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by maximum likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean squared errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  13. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by maximum likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean squared errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.
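
    As a minimal illustration of maximum likelihood fitting for an exponential power family (without the skew-slash mixture structure of the paper), the sketch below fits the generalized normal (exponential power) distribution to simulated data with scipy and estimates mean squared errors over repeated simulations; the true parameter values are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Simulate from an exponential power (generalized normal) distribution:
      # shape beta controls tail weight (beta = 2 is Gaussian, beta < 2 is heavier-tailed).
      true_beta, true_loc, true_scale = 1.3, 2.0, 0.7
      data = stats.gennorm.rvs(true_beta, loc=true_loc, scale=true_scale,
                               size=2000, random_state=rng)

      # Maximum likelihood fit (scipy's generic fit maximizes the log-likelihood numerically).
      beta_hat, loc_hat, scale_hat = stats.gennorm.fit(data)
      print("ML estimates:", round(beta_hat, 3), round(loc_hat, 3), round(scale_hat, 3))

      # Mean squared error of the ML estimates over repeated simulations,
      # in the spirit of the simulation study described above.
      reps = [stats.gennorm.fit(stats.gennorm.rvs(true_beta, loc=true_loc,
                                                  scale=true_scale, size=500,
                                                  random_state=rng))
              for _ in range(100)]
      mse = np.mean((np.array(reps) - [true_beta, true_loc, true_scale]) ** 2, axis=0)
      print("MSE (beta, loc, scale):", np.round(mse, 4))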

  14. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
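
    The mechanism behind noise bias, and the idea that part of it can be removed using information already contained in the likelihood, can be illustrated on a much simpler problem than ellipticity measurement. In the toy below, the maximum likelihood estimate of a nonlinear function of a noisy measurement is biased at low signal-to-noise, and a second-order correction obtained from a Taylor expansion around the estimate (analogous in spirit to the paper's analytic bias terms, but not the actual shear estimator) reduces that bias. All numbers are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Observe x = s + n with n ~ N(0, sigma^2); the target is the nonlinear quantity r = 1/s.
      # The ML (plug-in) estimate is 1/x, which is biased: E[1/x] ~ 1/s + sigma^2/s^3.
      s, sigma, n_sim = 2.0, 0.5, 200_000
      x = s + sigma * rng.standard_normal(n_sim)
      x = x[x > 0.2]                               # drop the ~0.02% of draws near zero so the toy averages stay finite

      r_true = 1.0 / s
      r_ml = 1.0 / x                               # maximum likelihood estimate
      r_corrected = 1.0 / x - sigma**2 / x**3      # second-order bias correction evaluated at the estimate

      print("true value:           ", r_true)
      print("mean ML estimate:     ", round(r_ml.mean(), 4))
      print("mean corrected estim.:", round(r_corrected.mean(), 4))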

  15. Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2013-01-01

    In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to the longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.
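
    The block empirical likelihood of the paper is tailored to longitudinal single-index varying-coefficient models, but the basic mechanism behind the Wilks-type calibration can be seen in Owen's empirical likelihood for a scalar mean, sketched below: the profile empirical likelihood ratio statistic is compared with a chi-squared quantile to form a confidence interval, with no variance estimator required. This is a generic illustration with invented data, not the block procedure of the paper.

      import numpy as np
      from scipy.optimize import brentq
      from scipy.stats import chi2

      def el_log_ratio(x, mu):
          """-2 log empirical likelihood ratio for the mean mu (Owen-type construction)."""
          z = x - mu
          if z.min() >= 0 or z.max() <= 0:
              return np.inf                      # mu outside the convex hull of the data
          # Solve sum(z_i / (1 + lam * z_i)) = 0 for the Lagrange multiplier lam.
          lo = -1.0 / z.max() + 1e-10
          hi = -1.0 / z.min() - 1e-10
          lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
          return 2.0 * np.sum(np.log1p(lam * z))

      rng = np.random.default_rng(0)
      x = rng.exponential(scale=2.0, size=80)    # skewed data; the true mean is 2.0

      # 95% empirical likelihood confidence interval for the mean: all mu with
      # -2 log R(mu) <= chi2(1) 0.95-quantile (the nonparametric Wilks calibration).
      cutoff = chi2.ppf(0.95, df=1)
      grid = np.linspace(x.mean() - 1.5, x.mean() + 1.5, 601)
      inside = [mu for mu in grid if el_log_ratio(x, mu) <= cutoff]
      print("95%% EL interval for the mean: [%.3f, %.3f]" % (min(inside), max(inside)))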

  16. An Alternative Estimator for the Maximum Likelihood Estimator for the Two Extreme Response Patterns.

    Science.gov (United States)

    1981-06-29

    … is the item discrimination parameter and b is the g X g item response difficulty parameter which satisfies (2.3): b_0 < b_1 < b_2 < … (Only this fragment of the abstract survives in the source record; the remainder of the record is the report's distribution list and is omitted here.)

  17. Analytic confidence level calculations using the likelihood ratio and Fourier transform

    International Nuclear Information System (INIS)

    Hu Hongbo; Nielsen, J.

    2000-01-01

    The interpretation of new particle search results involves a confidence level calculation on either the discovery hypothesis or the background-only ('null') hypothesis. A typical approach uses toy Monte Carlo experiments to build an expected experiment estimator distribution against which an observed experiment's estimator may be compared. In this note, a new approach is presented which calculates analytically the experiment estimator distribution via a Fourier transform, using the likelihood ratio as an ordering estimator. The analytic approach enjoys an enormous speed advantage over the toy Monte Carlo method, making it possible to quickly and precisely calculate confidence level results
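
    The idea can be sketched for Poisson counting channels: for each channel the log-likelihood-ratio estimator takes a discrete set of values with Poisson weights, and the estimator distribution for the combined experiment is the convolution of the per-channel distributions, computed here with an FFT rather than toy Monte Carlo. The channel signal and background rates below are invented, the systematic uncertainties treated in the note are ignored, and confidence levels such as CL_b are tail probabilities of distributions built this way.

      import numpy as np
      from scipy.special import gammaln

      def channel_llr_pmf(s, b, dq, n_max=60):
          """Distribution of q = -2 ln(L(s+b)/L(b)) = 2s - 2n ln(1 + s/b) for one Poisson
          counting channel under the background-only hypothesis n ~ Poisson(b).
          Returns (q0, pmf) with pmf[k] = P(q is approximately q0 + k*dq)."""
          n = np.arange(n_max)
          prob = np.exp(n * np.log(b) - b - gammaln(n + 1))       # Poisson(b) probabilities
          q = 2.0 * s - 2.0 * n * np.log1p(s / b)                 # estimator value for each n
          q0 = q.min()
          pmf = np.zeros(int(np.ceil((q.max() - q0) / dq)) + 1)
          np.add.at(pmf, np.round((q - q0) / dq).astype(int), prob)
          return q0, pmf

      def combine_channels(channels):
          """FFT-based convolution of the per-channel estimator distributions,
          giving the distribution of the summed estimator for the combined experiment."""
          q0_total = sum(q0 for q0, _ in channels)
          length = sum(len(p) for _, p in channels) - len(channels) + 1
          nfft = int(2 ** np.ceil(np.log2(length)))
          F = np.ones(nfft, dtype=complex)
          for _, p in channels:
              F *= np.fft.fft(p, nfft)
          pmf = np.clip(np.real(np.fft.ifft(F))[:length], 0.0, None)
          return q0_total, pmf

      # Three hypothetical search channels (expected signal s, expected background b).
      dq = 0.01
      channels = [channel_llr_pmf(s, b, dq) for s, b in [(3.0, 1.2), (1.5, 0.8), (2.2, 2.5)]]
      q0, pmf = combine_channels(channels)
      q_obs = 4.0                                  # estimator value of the "observed" experiment
      p_le = pmf[: int(round((q_obs - q0) / dq)) + 1].sum()
      print("P(q <= q_obs | background only) = %.4f" % p_le)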

  18. The behavior of the likelihood ratio test for testing missingness

    OpenAIRE

    Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert

    2003-01-01

    To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can contrast the different missingness mechanisms with each other, e.g. by likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missingness mechanisms
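
    The finite-sample behaviour referred to above can be explored with a small parametric-bootstrap sketch: simulate data in which dropout does not depend on the outcome (the null), fit a logistic dropout model with and without dependence on the previous measurement by maximum likelihood, and compare the simulated null distribution of the likelihood ratio statistic with its nominal chi-squared reference. The data-generating values are arbitrary and the dropout model is deliberately simple; it is not the selection model studied in the paper.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import chi2

      rng = np.random.default_rng(0)

      def neg_loglik(params, y_prev, dropout):
          """Negative log-likelihood of a logistic dropout model
          P(dropout) = logistic(intercept + slope * previous outcome)."""
          eta = params[0] + params[1] * y_prev
          return np.sum(np.logaddexp(0.0, eta) - dropout * eta)

      def lrt_statistic(y_prev, dropout):
          """Likelihood ratio statistic for H0: slope = 0 (dropout does not
          depend on the previous outcome)."""
          full = minimize(neg_loglik, x0=[0.0, 0.0], args=(y_prev, dropout))
          null = minimize(lambda a: neg_loglik([a[0], 0.0], y_prev, dropout), x0=[0.0])
          return 2.0 * (null.fun - full.fun)

      # Simulate the null distribution of the LRT statistic for a small sample size.
      n_subjects, n_sim = 50, 1000
      stats_h0 = np.empty(n_sim)
      for i in range(n_sim):
          y_prev = rng.normal(size=n_subjects)                 # first measurement (always observed)
          dropout = (rng.random(n_subjects) < 0.3).astype(float)   # dropout independent of y_prev: H0 true
          stats_h0[i] = lrt_statistic(y_prev, dropout)

      # Compare the empirical rejection rate at the nominal 5% level with chi2(1).
      cutoff = chi2.ppf(0.95, df=1)
      print("empirical size at nominal 5%%: %.3f" % np.mean(stats_h0 > cutoff))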

  19. Estimation of glomerular filtration rate in cancer patients with abnormal body composition and relation with carboplatin toxicity.

    Science.gov (United States)

    Bretagne, M; Jouinot, A; Durand, J P; Huillard, O; Boudou Rouquette, P; Tlemsani, C; Arrondeau, J; Sarfati, G; Goldwasser, F; Alexandre, J

    2017-07-01

    Carboplatin clearance is correlated with glomerular filtration rate (GFR) and is usually estimated from creatinine clearance using the Cockcroft-Gault (CG) formula. Because the plasma creatinine level is highly correlated with muscle mass, we hypothesized that an abnormal body composition with a low lean body mass (LBM) percentage [(LBM/weight) × 100] may result in inadequate carboplatin dosing. Serum cystatin C is an alternative marker of GFR that is not affected by muscle mass. We aimed to investigate the influence of total LBM and LBM percentage on GFR calculation, using creatinine (CrCl) or cystatin C (GFR_cysC-creat), in cancer patients. Pretreatment serum creatinine and cystatin C were prospectively measured in consecutive patients. CrCl (CG formula), GFR_cysC-creat (CKD-EPI creatinine-cystatin equation), and LBM (CT scan) were calculated. Severe thrombocytopenia post-carboplatin was analyzed. In 131 patients without renal insufficiency, LBM was correlated with creatinine (r = 0.30, p < …). In patients with a low LBM percentage, the CrCl was significantly higher than GFR_cysC-creat, indicating an overestimation of GFR with creatinine (p = 0.0004). In 24 patients treated with carboplatin AUC 5 (mg/ml·min) ± paclitaxel, the risk of severe thrombocytopenia was associated with a lower LBM percentage (p = 0.0002) and a higher CrCl/GFR_cysC-creat ratio (p = 0.006). By ROC analysis, the CrCl/GFR_cysC-creat ratio threshold predicting severe thrombocytopenia was 1.23. A low LBM percentage increases the risk of inadequate GFR calculation by the CG formula and of carboplatin overdosage with severe thrombocytopenia. A high CrCl/GFR_cysC-creat ratio allows the identification of these patients.
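
    For orientation, the two standard formulas involved, Cockcroft-Gault creatinine clearance and the Calvert carboplatin dose, can be written in a few lines; the threshold of 1.23 for the CrCl/GFR_cysC-creat ratio is taken from the abstract above, while the patient values in the example are invented. This sketch illustrates the arithmetic only and is not dosing guidance.

      def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
          """Creatinine clearance (mL/min) from the Cockcroft-Gault formula."""
          crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
          return 0.85 * crcl if female else crcl

      def calvert_carboplatin_dose(target_auc, gfr_ml_min):
          """Total carboplatin dose (mg) from the Calvert formula: AUC x (GFR + 25)."""
          return target_auc * (gfr_ml_min + 25.0)

      def overestimation_flag(crcl, gfr_cysc_creat, threshold=1.23):
          """Flag a CrCl/GFR_cysC-creat ratio above the ROC threshold reported above,
          i.e. a patient whose creatinine-based clearance likely overestimates true GFR."""
          return crcl / gfr_cysc_creat > threshold

      # Hypothetical sarcopenic patient: low muscle mass keeps serum creatinine low,
      # inflating the creatinine-based clearance relative to the cystatin C-based GFR.
      crcl = cockcroft_gault_crcl(age_years=68, weight_kg=82, serum_creatinine_mg_dl=0.6, female=False)
      gfr_cysc = 70.0
      print("CrCl (CG):             %.0f mL/min" % crcl)
      print("dose at AUC 5 by CrCl: %.0f mg" % calvert_carboplatin_dose(5.0, crcl))
      print("dose at AUC 5 by cysC: %.0f mg" % calvert_carboplatin_dose(5.0, gfr_cysc))
      print("flag overestimation:  ", overestimation_flag(crcl, gfr_cysc))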

  20. Planck 2013 results. XV. CMB power spectra and likelihood

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. 
A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK2 at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary PlanckEE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, ns = 1. Increasing the multipole range beyond ℓ ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-ℓ spectrum in the form of a power deficit of 5-10% at ℓ ≲ 40, with a statistical significance of 2.5-3σ. Without a theoretically motivated model for