WorldWideScience

Sample records for automatic likelihood model

  1. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates to complex studies that require generalised linear or semiparametric models.

  2. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum likelihood estimator is proved, and the asymptotic distribution of the maximum likelihood estimator is given. It is shown that the asymptotic distribution is either Gaussian, mixed Gaussian or, in some cases, even more complicated.

  3. Multiplicative earthquake likelihood models incorporating strain rates

    Science.gov (United States)

    Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.

    2017-01-01

    We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain-rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.
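
    As a hedged illustration of the multiplicative idea described above (not the authors' code), the Python sketch below fits a toy model in which a spatially uniform baseline rate per grid cell is scaled by exp(theta . covariates), with theta chosen by maximizing a Poisson log-likelihood of per-cell earthquake counts; the cell grid, covariates and parameter values are all invented for the example.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative sketch only: a uniform baseline rate per cell is scaled
        # multiplicatively by exp(theta . covariates), and theta is estimated by
        # maximizing the Poisson log-likelihood of observed counts per cell.
        rng = np.random.default_rng(0)
        n_cells = 500
        baseline = np.full(n_cells, 0.05)                  # uniform baseline rate per cell
        covariates = rng.normal(size=(n_cells, 2))         # e.g. shear strain rate, smoothed seismicity
        true_theta = np.array([0.8, 0.3])
        counts = rng.poisson(baseline * np.exp(covariates @ true_theta))

        def neg_log_lik(theta):
            rates = baseline * np.exp(covariates @ theta)  # multiplicative hybrid rate per cell
            return np.sum(rates - counts * np.log(rates))  # Poisson NLL up to a constant

        fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
        print("estimated covariate weights:", fit.x)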

  4. An improved likelihood model for eye tracking

    DEFF Research Database (Denmark)

    Hammoud, Riad I.; Hansen, Dan Witzner

    2007-01-01

    ... approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course, this may be a difficult process due to the missing-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even ... are challenging. It proposes a log likelihood-ratio function of foreground and background models in a particle filter-based eye tracking framework. It fuses key information from the even and odd infrared fields (dark- and bright-pupil) and their corresponding subtractive image into one single observation model...

  5. Likelihood approaches for proportional likelihood ratio model with right-censored data.

    Science.gov (United States)

    Zhu, Hong

    2014-06-30

    Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks.

  6. Inference in HIV dynamics models via hierarchical likelihood

    OpenAIRE

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelih...

  7. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  8. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    the conditional Gaussian likelihood, and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including d and b, and prove that they converge in distribution. We use the results to prove consistency of the maximum likelihood estimator for d, b in a large compact subset of {1/2...

  9. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
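
    The convolution step can be illustrated with a minimal one-dimensional Python sketch (a toy under assumed Gaussian densities, not the authors' implementation): the density of the actual innovation is obtained by numerically convolving the density of the theoretical (matched-model) innovation with the density of the model-distance term, and the likelihood of an observed innovation is read off that density.

        import numpy as np

        # Grid-based 1-D illustration of estimating a likelihood by convolving
        # two known probability density functions.
        x = np.linspace(-20.0, 20.0, 4001)
        dx = x[1] - x[0]

        def gauss(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        innov_pdf = gauss(x, 0.0, 1.0)      # theoretical innovation density (matched model)
        mismatch_pdf = gauss(x, 2.0, 0.5)   # density of the matched/mismatched model distance
        actual_pdf = np.convolve(innov_pdf, mismatch_pdf, mode="same") * dx  # density of their sum

        observed_innovation = 2.3
        likelihood = np.interp(observed_innovation, x, actual_pdf)
        print("innovation likelihood under the mismatched model:", likelihood)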

  10. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
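
    A minimal sketch of the estimation step (not the paper's code, and with simulated rather than economic data): maximum likelihood fitting of a two-component univariate normal mixture via the EM algorithm in Python/NumPy.

        import numpy as np

        # EM for a two-component univariate normal mixture, fitted by maximum likelihood.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

        def normal_pdf(x, mu, sd):
            return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

        w, mu, sd = np.array([0.5, 0.5]), np.array([-2.0, 1.0]), np.array([1.0, 1.0])
        for _ in range(200):
            # E-step: posterior responsibility of each component for each observation
            dens = w * np.column_stack([normal_pdf(x, mu[k], sd[k]) for k in range(2)])
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted ML updates of mixing weights, means and standard deviations
            nk = resp.sum(axis=0)
            w = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        print("weights:", w, "means:", mu, "sds:", sd)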

  11. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  12. Maximum likelihood estimation for semiparametric density ratio model.

    Science.gov (United States)

    Diao, Guoqing; Ning, Jing; Qin, Jing

    2012-06-27

    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.

  13. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of the calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.

  14. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2012-01-01

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters ... likelihood estimators. To this end we prove weak convergence of the conditional likelihood as a continuous stochastic process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of (ß...

  15. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial values X_{-n}, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II.

  16. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d-b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial values X_{-n}, n = 0, 1, ..., under the assumption that the errors are i.i.d. Gaussian. We consider the likelihood and its derivatives as stochastic processes in the parameters, and prove that they converge in distribution when the errors are i.i.d. with suitable moment conditions and the initial values are bounded. We use this to prove existence and consistency of the local likelihood estimator, and to find the asymptotic distribution of the estimators and the likelihood ratio test of the associated fractional unit root hypothesis, which contains the fractional Brownian motion of type II.

  17. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
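
    The tapering idea can be sketched as follows (a toy Python illustration, not the authors' implementation): pairs of sites farther apart than a chosen taper range receive weight zero and drop out of the pairwise composite log-likelihood. A bivariate Gaussian pair density is used purely as a stand-in for the bivariate max-stable densities of the paper, and the site coordinates, data and range parameter are invented.

        import numpy as np
        from scipy.stats import multivariate_normal

        def tapered_pairwise_cl(data, coords, pair_loglik, taper_range):
            # Sum of pairwise log-likelihood contributions, keeping only pairs of
            # sites within `taper_range` (taper weight 1 inside, 0 outside).
            total = 0.0
            n_sites = coords.shape[0]
            for i in range(n_sites):
                for j in range(i + 1, n_sites):
                    dist = np.linalg.norm(coords[i] - coords[j])
                    if dist <= taper_range:
                        total += pair_loglik(data[:, i], data[:, j], dist)
            return total

        def gaussian_pair_loglik(xi, xj, dist, range_par=2.0):
            # Stand-in bivariate density with distance-decaying correlation.
            rho = np.exp(-dist / range_par)
            cov = np.array([[1.0, rho], [rho, 1.0]])
            return multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(np.column_stack([xi, xj])).sum()

        rng = np.random.default_rng(2)
        coords = rng.uniform(0, 10, size=(15, 2))    # 15 sites
        data = rng.normal(size=(40, 15))             # 40 replicates per site
        print(tapered_pairwise_cl(data, coords, gaussian_pair_loglik, taper_range=3.0))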

  18. Conditional likelihood inference in generalized linear mixed models.

    OpenAIRE

    Sartori, Nicola; Severini, T.A.

    2002-01-01

    Consider a generalized linear model with a canonical link function, containing both fixed and random effects. In this paper, we consider inference about the fixed effects based on a conditional likelihood function. It is shown that this conditional likelihood function is valid for any distribution of the random effects and, hence, the resulting inferences about the fixed effects are insensitive to misspecification of the random effects distribution. Inferences based on the conditional likelih...

  19. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_1, ..., X_T given the initial...

  20. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß...

  1. Heteroscedastic one-factor models and marginal maximum likelihood estimation

    NARCIS (Netherlands)

    Hessen, D.J.; Dolan, C.V.

    2009-01-01

    In the present paper, a general class of heteroscedastic one-factor models is considered. In these models, the residual variances of the observed scores are explicitly modelled as parametric functions of the one-dimensional factor score. A marginal maximum likelihood procedure for parameter estimation ...

  2. Empirical likelihood-based evaluations of Value at Risk models

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real-life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.

  3. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    Science.gov (United States)

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  4. How to Maximize the Likelihood Function for a DSGE Model

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003). Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown structural parameters in the likelihood function, the CMA-ES routine finds the global optimum in 95% of our test economies compared to 89% for Simulated Annealing. When the number of unknown structural parameters in the likelihood function increases to 20 and 35, then the CMA-ES routine finds the global...
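
    As a hedged stand-in for the paper's experiment (which applies the Corana et al. Simulated Annealing and CMA-ES routines to DSGE likelihoods), the Python sketch below maximizes the exact Gaussian log-likelihood of a toy AR(1) "test economy" with SciPy's generic global optimizer dual_annealing; the model, data and parameter bounds are invented for illustration.

        import numpy as np
        from scipy.optimize import dual_annealing

        # Toy global maximization of a likelihood: Gaussian AR(1) conditional
        # log-likelihood, optimized over (phi, sigma) with a global optimizer.
        rng = np.random.default_rng(3)
        phi_true, sigma_true, T = 0.7, 0.5, 400
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi_true * y[t - 1] + sigma_true * rng.normal()

        def neg_log_lik(params):
            phi, sigma = params
            resid = y[1:] - phi * y[:-1]
            return 0.5 * np.sum(resid ** 2) / sigma ** 2 + (T - 1) * np.log(sigma)

        result = dual_annealing(neg_log_lik, bounds=[(-0.99, 0.99), (0.05, 3.0)], maxiter=200)
        print("global optimum (phi, sigma):", result.x)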

  5. Maximum Likelihood Estimation of Nonlinear Structural Equation Models.

    Science.gov (United States)

    Lee, Sik-Yum; Zhu, Hong-Tu

    2002-01-01

    Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)

  6. A model independent safeguard for unbinned Profile Likelihood

    CERN Document Server

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny

    2016-01-01

    We present a general method to include residual un-modeled background shape uncertainties in profile likelihood based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over constrained confidence interval, and allows a natural transition to unbinned space. Unbinned likelihood enhances the sensitivity and allows optimal usage of information for the data and the models. We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

  7. Selection properties of Type II maximum likelihood (empirical bayes) linear models with individual variance components for predictors

    NARCIS (Netherlands)

    Jamil, T.; Braak, ter C.J.F.

    2012-01-01

    Maximum Likelihood (ML) in the linear model overfits when the number of predictors (M) exceeds the number of objects (N). One possible solution is the Relevance Vector Machine (RVM), which is a form of automatic relevance detection and has gained popularity in the pattern recognition and machine learning ...

  8. Bayesian experimental design for models with intractable likelihoods.

    Science.gov (United States)

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.

  9. Rate of strong consistency of the maximum quasi-likelihood estimator in quasi-likelihood nonlinear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. Under some regularity conditions, the rate of strong consistency of the maximum quasi-likelihood estimator (MQLE) is obtained in QLNM. In an important case, this rate is O(n^{-1/2}(log log n)^{1/2}), which is exactly the rate given by the law of the iterated logarithm for partial sums of i.i.d. variables, and thus cannot be improved.

  10. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  11. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β′X_{t} is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d_{0}≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d_{1} for any d_{1}≥d_{0}. To this end, we consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of β is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We...

  12. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß'X(t) is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d0≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d1 for any d1≥d0. To this end, we consider the conditional likelihood as a stochastic process in the parameters, and prove that it converges in distribution when errors are i.i.d. with suitable moment conditions and initial values are bounded. We then prove that the estimator of ß is asymptotically mixed Gaussian and estimators of the remaining parameters are asymptotically Gaussian. We also find...

  13. Applications of the Likelihood Theory in Finance: Modelling and Pricing

    CERN Document Server

    Janssen, Arnold

    2012-01-01

    This paper discusses the connection between mathematical finance and statistical modelling which turns out to be more than a formal mathematical correspondence. We like to figure out how common results and notions in statistics and their meaning can be translated to the world of mathematical finance and vice versa. A lot of similarities can be expressed in terms of LeCam's theory for statistical experiments which is the theory of the behaviour of likelihood processes. For positive prices the arbitrage free financial assets fit into filtered experiments. It is shown that they are given by filtered likelihood ratio processes. From the statistical point of view, martingale measures, completeness and pricing formulas are revisited. The pricing formulas for various options are connected with the power functions of tests. For instance the Black-Scholes price of a European option has an interpretation as Bayes risk of a Neyman Pearson test. Under contiguity the convergence of financial experiments and option prices ...

  14. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties ... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  15. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  16. Adaptive quasi-likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    CHEN Xia; CHEN Xiru

    2005-01-01

    This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal, and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case in which the covariance matrix of the response variable is completely known.

  17. Conditional Likelihood Estimators for Hidden Markov Models and Stochastic Volatility Models

    OpenAIRE

    Genon-Catalot, Valentine; Jeantheau, Thierry; Laredo, Catherine

    2003-01-01

    This paper develops a new contrast process for parametric inference of general hidden Markov models, when the hidden chain has a non-compact state space. This contrast is based on the conditional likelihood approach, often used for ARCH-type models. We prove the strong consistency of the conditional likelihood estimators under appropriate conditions. The method is applied to the Kalman filter (for which this contrast and the exact likelihood lead to asymptotically equivalent estimat...

  18. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy Wilks' phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by the advantages of both the EL and score test methods, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^{-1/2}, the fastest possible rate for lack-of-fit testing; and they involve weight functions, which provide the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.

  19. Likelihood Analysis of the Minimal AMSB Model arXiv

    CERN Document Server

    Bagnaschi, E.; Sakurai, K.; Buchmueller, O.; Cavanaugh, R.; Chobanova, V.; Citron, M.; Costa, J.C.; De Roeck, A.; Dolan, M.J.; Ellis, J.R.; Flächer, H.; Heinemeyer, S.; Isidori, G.; Lucio, M.; Luo, F.; Martínez Santos, D.; Olive, K.A.; Richards, A.; Weiglein, G.

    We perform a likelihood analysis of the minimal Anomaly-Mediated Supersymmetry Breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that a wino-like or a Higgsino-like neutralino LSP, $m_{\tilde \chi^0_{1}}$, may provide the cold dark matter (DM) with similar likelihood. The upper limit on the DM density from Planck and other experiments enforces $m_{\tilde \chi^0_{1}} \lesssim 3~TeV$ after the inclusion of Sommerfeld enhancement in its annihilations. If most of the cold DM density is provided by the $\tilde \chi_0^1$, the measured value of the Higgs mass favours a limited range of $\tan \beta \sim 5$ (or, for $\mu > 0$, $\tan \beta \sim 45$) but the scalar mass $m_0$ is poorly constrained. In the wino-LSP case, $m_{3/2}$ is constrained to about $900~TeV$ and $m_{\tilde \chi^0_{1}}$ to $2.9\pm0.1~TeV$, whereas in the Higgsino-LSP case $m_{3/2}$ has just a lower limit $\gtrsim 650~TeV$ ($\gtrsim 480~TeV$) and $m_{\tilde \chi^0_{1}}$ is constrained to $1.12~(1.13)\pm0.02...

  20. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to produce a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.

  1. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

    One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer (SEM-EDX) and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The analysis showed that the best likelihood model was the one which included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
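
    A much-simplified, one-dimensional Python sketch of the evidence-evaluation idea LR = p(E|H1)/p(E|H2): kernel density estimates built from reference measurements under each hypothesis are evaluated at the measured fragment value. The refractive-index numbers below are hypothetical, and the sketch ignores the paper's multivariate setting and its separation of between- and within-object variability.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Density models for each hypothesis estimated from reference data,
        # then LR = p(E|H1) / p(E|H2) for the measured fragment value E.
        rng = np.random.default_rng(4)
        window_ri = rng.normal(1.5185, 0.0004, 200)      # hypothetical RIs, window glass
        container_ri = rng.normal(1.5210, 0.0006, 200)   # hypothetical RIs, container glass

        p_h1 = gaussian_kde(window_ri)       # H1: fragment originates from a window
        p_h2 = gaussian_kde(container_ri)    # H2: fragment originates from a container

        evidence = 1.5188                    # measured refractive index of the fragment
        lr = p_h1(evidence)[0] / p_h2(evidence)[0]
        print("likelihood ratio in favour of H1:", lr)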

  2. Approximate Maximum Likelihood Commercial Bank Loan Management Model

    Directory of Open Access Journals (Sweden)

    Godwin N.O. Asemota

    2009-01-01

    Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic, so any attempt to model their behavior without including some elements of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. This loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest this excess cash as loans to earn more assets without jeopardizing public confidence.

  3. Maximum likelihood reconstruction for Ising models with asynchronous updates

    CERN Document Server

    Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser

    2012-01-01

    We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectatio...

  4. Robust Likelihood-Based Survival Modeling with Microarray Data

    Directory of Open Access Journals (Sweden)

    HyungJun Cho

    2008-09-01

    Gene expression data can be associated with various clinical outcomes. In particular, these data can be of importance in discovering survival-associated genes for medical applications. As alternatives to traditional statistical methods, sophisticated methods and software programs have been developed to overcome the high-dimensional difficulty of microarray data. Nevertheless, new algorithms and software programs are needed to include practical functions such as the discovery of multiple sets of survival-associated genes and the incorporation of risk factors, and to be usable in the R environment, which many statisticians are familiar with. For survival modeling with microarray data, we have developed a software program (called rbsurv) which can be used conveniently and interactively in the R environment. This program selects survival-associated genes based on the partial likelihood of the Cox model and separates training and validation sets of samples for robustness. It can discover multiple sets of genes by iterative forward selection rather than one large set of genes. It can also allow adjustment for risk factors in microarray survival modeling. This software package, the rbsurv package, can be used to discover survival-associated genes with microarray data conveniently.

  5. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
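
    For a one-dimensional ODE x' = f(x), the idea of carrying the density along characteristics amounts to augmenting the state with w = log p, which evolves as dw/dt = -f'(x) (the divergence term of the Liouville equation). The Python sketch below illustrates this with an assumed toy logistic-growth model, not an example taken from the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Augment the ODE with one extra dimension carrying the log-density along
        # the characteristic: dx/dt = f(x), dw/dt = -f'(x), with w = log p.
        r, K = 1.0, 10.0
        f = lambda x: r * x * (1 - x / K)        # assumed logistic dynamics
        df_dx = lambda x: r * (1 - 2 * x / K)

        def augmented(t, state):
            x, w = state
            return [f(x), -df_dx(x)]

        # propagate an uncertain initial condition x0 ~ N(1, 0.1^2) for 3 time units
        x0 = 1.2
        log_p0 = -0.5 * ((x0 - 1.0) / 0.1) ** 2 - np.log(0.1 * np.sqrt(2 * np.pi))
        sol = solve_ivp(augmented, (0.0, 3.0), [x0, log_p0])
        x_T, log_p_T = sol.y[:, -1]
        print(f"x(3) = {x_T:.3f}, density carried along the characteristic = {np.exp(log_p_T):.4f}")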

  6. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, from the empirical likelihood ratio functions, we can obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  7. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, from the empirical likelihood ratio functions, we can obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  8. Fast inference in generalized linear models via expected log-likelihoods.

    Science.gov (United States)

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
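
    A hedged Python sketch of the expected-log-likelihood idea for a canonical Poisson GLM: the covariate sum over exp(x_i . theta) in the exact log-likelihood is replaced by its expectation N * exp(theta.mu + theta'.Sigma.theta/2) when the covariates are (assumed) Gaussian with known mean mu and covariance Sigma; all data and parameter values below are simulated for illustration and are not from the paper.

        import numpy as np

        # Exact vs. expected log-likelihood for a Poisson GLM with rate exp(x . theta).
        rng = np.random.default_rng(5)
        N, d = 20000, 5
        mu, Sigma = np.zeros(d), 0.2 * np.eye(d)
        X = rng.multivariate_normal(mu, Sigma, size=N)
        theta_true = rng.normal(0.0, 0.3, d)
        y = rng.poisson(np.exp(X @ theta_true))
        yX = y @ X                                            # sufficient statistic, computed once

        def exact_log_lik(theta):
            return yX @ theta - np.sum(np.exp(X @ theta))     # up to the log(y!) constant

        def expected_log_lik(theta):
            # sum_i exp(x_i . theta) replaced by its expectation under x ~ N(mu, Sigma)
            return yX @ theta - N * np.exp(theta @ mu + 0.5 * theta @ Sigma @ theta)

        print(exact_log_lik(theta_true), expected_log_lik(theta_true))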

  9. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    DEFF Research Database (Denmark)

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel

    2011-01-01

    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem. Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number of parameters. The benefits of using AD are computational efficiency and high numerical accuracy, both crucial in many practical problems. We describe the basic components and the underlying philosophy of ADMB, with an emphasis on functionality found in no other statistical software. One example...

  10. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock...

  11. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
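
    The simplest case mentioned above, a confidence statement for a univariate mean under IID sampling, can be sketched in a few lines of Python (an illustrative sketch, not the author's code): the profile weights are w_i = 1/(n(1 + lam*(x_i - mu))) with lam solving an estimating equation, and -2 log R(mu) is compared with a chi-square(1) quantile.

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import chi2

        def neg2_log_elr(x, mu):
            # -2 log empirical likelihood ratio for the mean, profiled over weights.
            z = x - mu
            n = len(z)
            g = lambda lam: np.sum(z / (1.0 + lam * z))   # monotone estimating equation in lam
            lo = (1.0 / n - 1.0) / z.max()                # bracket implied by 0 < w_i < 1
            hi = (1.0 / n - 1.0) / z.min()
            lam = brentq(g, lo, hi)
            return 2.0 * np.sum(np.log(1.0 + lam * z))

        rng = np.random.default_rng(8)
        x = rng.normal(0.0, 1.0, 50)
        stat = neg2_log_elr(x, mu=0.3)
        print("-2 log R(0.3) =", stat, " 95% chi2(1) cutoff =", chi2.ppf(0.95, 1))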

  12. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2012-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The po

  13. Improved likelihood ratio tests for cointegration rank in the VAR model

    NARCIS (Netherlands)

    Boswijk, H.P.; Jansson, M.; Nielsen, M.Ø.

    2015-01-01

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but as usual the asymptotic results do not require normally distr

  14. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    textabstractWe propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these

  15. Choosing the observational likelihood in state-space stock assessment models

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2016-01-01

    Data used in stock assessment models result from combinations of biological, ecological, fishery, and sampling processes. Since different types of errors propagate through these processes, it can be difficult to identify a particular family of distributions for modelling errors on observations a priori. By implementing several observational likelihoods, modelling both numbers- and proportions-at-age, in an age based state-space stock assessment model, we compare the model fit for each choice of likelihood along with the implications for spawning stock biomass and average fishing mortality. We propose using AIC intervals based on fitting the full observational model for comparing different observational likelihoods. Using data from four stocks, we show that the model fit is improved by modelling the correlation of observations within years. However, the best choice of observational likelihood...
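
    A deliberately simplified Python sketch of comparing candidate observational likelihoods by AIC: the paper does this inside a state-space assessment model, whereas here several candidate distributions are simply fitted to the same (hypothetical) positive survey indices and ranked by AIC.

        import numpy as np
        from scipy import stats

        # Fit each candidate observational distribution to the same data and rank by AIC.
        rng = np.random.default_rng(6)
        obs = rng.lognormal(mean=2.0, sigma=0.4, size=300)   # hypothetical positive survey indices

        candidates = {
            "normal":    (stats.norm,    {}),
            "lognormal": (stats.lognorm, {"floc": 0}),
            "gamma":     (stats.gamma,   {"floc": 0}),
        }
        for name, (dist, fit_kwargs) in candidates.items():
            params = dist.fit(obs, **fit_kwargs)
            log_lik = np.sum(dist.logpdf(obs, *params))
            k = len(params) - len(fit_kwargs)                # parameters actually estimated
            print(f"{name:9s}  AIC = {2 * k - 2 * log_lik:.1f}")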

  16. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  17. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  18. Experiments using machine learning to approximate likelihood ratios for mixture models

    Science.gov (United States)

    Cranmer, K.; Pavez, J.; Louppe, G.; Brooks, W. K.

    2016-10-01

    Likelihood ratio tests are a key tool in many fields of science. In order to evaluate the likelihood ratio the likelihood function is needed. However, it is common in fields such as High Energy Physics to have complex simulations that describe the distribution while not having a description of the likelihood that can be directly evaluated. In this setting it is impossible or computationally expensive to evaluate the likelihood. It is, however, possible to construct an equivalent version of the likelihood ratio that can be evaluated by using discriminative classifiers. We show how this can be used to approximate the likelihood ratio when the underlying distribution is a weighted sum of probability distributions (e.g. signal plus background model). We demonstrate how the results can be considerably improved by decomposing the ratio and use a set of classifiers in a pairwise manner on the components of the mixture model and how this can be used to estimate the unknown coefficients of the model, such as the signal contribution.
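
    A minimal Python sketch of the classifier-based likelihood-ratio trick discussed above (toy one-dimensional Gaussians stand in for the simulator, and scikit-learn's logistic regression for the classifier): a probabilistic classifier trained to separate samples from p1 and p0 outputs s(x) approximately equal to p1/(p1+p0), so s/(1-s) approximates the likelihood ratio p1(x)/p0(x).

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Train a classifier on samples from the two distributions, then convert its
        # calibrated output into an approximate likelihood ratio.
        rng = np.random.default_rng(7)
        x0 = rng.normal(0.0, 1.0, size=(5000, 1))            # samples from p0 (background-like)
        x1 = rng.normal(1.0, 1.0, size=(5000, 1))            # samples from p1 (signal-plus-background-like)
        X = np.vstack([x0, x1])
        y = np.concatenate([np.zeros(5000), np.ones(5000)])

        clf = LogisticRegression().fit(X, y)

        x_test = np.array([[0.5]])
        s = clf.predict_proba(x_test)[0, 1]
        approx_ratio = s / (1 - s)
        exact_ratio = np.exp(-0.5 * (x_test[0, 0] - 1) ** 2 + 0.5 * x_test[0, 0] ** 2)
        print("approx LR:", approx_ratio, "exact LR:", exact_ratio)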

  19. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion has been proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied, and ICL is proved to be an approximation of one of these criteria. We contrast these results with the current leading point of view about ICL, namely that it would not be consistent. Moreover, these results give insights into the class notion underlying ICL and prompt a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  20. Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.

    Science.gov (United States)

    Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man

    2009-10-01

    In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.

  1. Small-sample likelihood inference in extreme-value regression models

    CERN Document Server

    Ferrari, Silvia L P

    2012-01-01

    We deal with a general class of extreme-value regression models introduced by Barreto-Souza and Vasconcellos (2011). Our goal is to derive an adjusted likelihood ratio statistic that is approximately distributed as $\chi^2$ with a high degree of accuracy. Although the adjusted statistic requires more computational effort than its unadjusted counterpart, it is shown that the adjustment term has a simple compact form that can be easily implemented in standard statistical software. Further, we compare the finite sample performance of the three classical tests (likelihood ratio, Wald, and score), the gradient test that has been recently proposed by Terrell (2002), and the adjusted likelihood ratio test obtained in this paper. Our simulations favor the latter. Applications of our results are presented. Key words: Extreme-value regression; Gradient test; Gumbel distribution; Likelihood ratio test; Nonlinear models; Score test; Small-sample adjustments; Wald test.

  2. Empirical Likelihood for Mixed-effects Error-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui

    2009-01-01

    This paper mainly introduces the method of empirical likelihood and its applications on two different models. We discuss the empirical likelihood inference on the fixed-effect parameter in a mixed-effects model with error-in-variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct the empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is $\chi^2_{p+q}$, where $p$ and $q$ are the dimensions of the fixed and random effects, respectively. Then we discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to $\chi^2_{p+q}$. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal approximation based confidence region.

  3. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    Science.gov (United States)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  4. Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling

    Science.gov (United States)

    Oort, Frans J.; Jak, Suzanne

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…

  5. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  6. On Compound Poisson Processes Arising in Change-Point Type Statistical Models as Limiting Likelihood Ratios

    CERN Document Server

    Dachian, Serguei

    2010-01-01

    Different change-point type models encountered in statistical inference for stochastic processes give rise to different limiting likelihood ratio processes. In a previous paper of one of the authors it was established that one of these likelihood ratios, which is an exponential functional of a two-sided Poisson process driven by some parameter, can be approximated (for sufficiently small values of the parameter) by another one, which is an exponential functional of a two-sided Brownian motion. In this paper we consider yet another likelihood ratio, which is the exponent of a two-sided compound Poisson process driven by some parameter. We establish that, similarly to the Poisson-type one, the compound Poisson-type likelihood ratio can be approximated by the Brownian-type one for sufficiently small values of the parameter. We also discuss the asymptotics for large values of the parameter and illustrate the results by numerical simulations.

  7. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  8. Generalized Additive Models, Cubic Splines and Penalized Likelihood.

    Science.gov (United States)

    1987-05-22

    (in case-control studies). All models in the table include a dummy variable to account for the matching. The first three lines of the table indicate that... Reference: Breslow, N. and Day, N. (1980). Statistical Methods in Cancer Research, Volume 1: The Analysis of Case-Control Studies. International Agency for Research on Cancer.

  9. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...

  10. MLE's bias pathology, Model Updated Maximum Likelihood Estimates and Wallace's Minimum Message Length method

    OpenAIRE

    Yatracos, Yannis G.

    2013-01-01

    The inherent bias pathology of the maximum likelihood (ML) estimation method is confirmed for models with unknown parameters $\theta$ and $\psi$ when the MLE $\hat\psi$ is a function of the MLE $\hat\theta$. To reduce $\hat\psi$'s bias, the likelihood equation to be solved for $\psi$ is updated using the model for the data $Y$ in it. The model updated (MU) MLE, $\hat\psi_{MU}$, often reduces either totally or partially $\hat\psi$'s bias when estimating the shape parameter $\psi$. For the Pareto model $\hat...

  11. A note on the maximum likelihood estimator in the gamma regression model

    Directory of Open Access Journals (Sweden)

    Jerzy P. Rydlewski

    2009-01-01

    This paper considers a nonlinear regression model, in which the dependent variable has the gamma distribution. A model is considered in which the shape parameter of the random variable is the sum of continuous and algebraically independent functions. The paper proves that there is exactly one maximum likelihood estimator for the gamma regression model.

  12. Choosing the observational likelihood in state-space stock assessment models

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    By implementing different observational likelihoods in a state-space age-based stock assessment model, we are able to compare the goodness-of-fit and the effects on estimated fishing mortality for different model choices. Model fit is improved by estimating suitable correlations between age groups. We...

  13. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  14. Using Data to Tune Nearshore Dynamics Models: A Bayesian Approach with Parametric Likelihood

    CERN Document Server

    Balci, Nusret; Venkataramani, Shankar C

    2013-01-01

    We propose a modification of a maximum likelihood procedure for tuning parameter values in models, based upon the comparison of their output to field data. Our methodology, which uses polynomial approximations of the sample space to increase computational efficiency, differs from similar Bayesian estimation frameworks in its use of an alternative likelihood distribution and is shown to address problems in which covariance information is lacking better than its more conventional counterpart. Lack of covariance information is a frequent challenge in large-scale geophysical estimation, and this is the case in the geophysical problem considered here. We use a nearshore model for longshore currents and observational data of the same to show the contrast between both maximum likelihood methodologies. Beyond a methodological comparison, this study gives estimates of parameter values for the bottom drag and surface forcing that make the particular model most consistent with data; furthermore, we also derive sensitivit...

  15. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    Science.gov (United States)

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.

  16. Maximum Likelihood Analysis of Nonlinear Structural Equation Models with Dichotomous Variables

    Science.gov (United States)

    Song, Xin-Yuan; Lee, Sik-Yum

    2005-01-01

    In this article, a maximum likelihood approach is developed to analyze structural equation models with dichotomous variables that are common in behavioral, psychological and social research. To assess nonlinear causal effects among the latent variables, the structural equation in the model is defined by a nonlinear function. The basic idea of the…

  17. Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan

    2005-01-01

    In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…

  18. Marginal Maximum Likelihood Estimation of a Latent Variable Model with Interaction

    Science.gov (United States)

    Cudeck, Robert; Harring, Jeffrey R.; du Toit, Stephen H. C.

    2009-01-01

    There has been considerable interest in nonlinear latent variable models specifying interaction between latent variables. Although it seems to be only slightly more complex than linear regression without the interaction, the model that includes a product of latent variables cannot be estimated by maximum likelihood assuming normality.…

  19. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  20. ATTITUDE-CHANGE FOLLOWING PERSUASIVE COMMUNICATION - INTEGRATING SOCIAL JUDGMENT THEORY AND THE ELABORATION LIKELIHOOD MODEL

    NARCIS (Netherlands)

    SIERO, FW; DOOSJE, BJ

    1993-01-01

    An experiment was conducted to examine the influence of the perceived extremity of a message and motivation to elaborate upon the process of persuasion. The first goal was to test a model of attitude change relating Social Judgment Theory to the Elaboration Likelihood Model. The second objective was

  1. On the Existence and Uniqueness of Maximum-Likelihood Estimates in the Rasch Model.

    Science.gov (United States)

    Fischer, Gerhard H.

    1981-01-01

    Necessary and sufficient conditions for the existence and uniqueness of a solution of the so-called "unconditional" and the "conditional" maximum-likelihood estimation equations in the dichotomous Rasch model are given. It is shown how to apply the results in practical uses of the Rasch model. (Author/JKS)

  2. On penalized likelihood estimation for a non-proportional hazards regression model.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  3. Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.

    Science.gov (United States)

    Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens

    2016-01-01

    In systems biology, one of the major tasks is to tailor model complexity to the information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small a model will not be able to describe the data, whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.
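
    The profile-likelihood idea used here can be sketched generically (an assumed toy exponential-decay model, not the Data2Dynamics/dMod implementation): each parameter is fixed on a grid while the remaining parameters are re-optimized, and parameters whose profile stays flat below the chi-square threshold are candidates for reduction.

```python
# Generic profile-likelihood sketch for a toy exponential-decay model
# (assumed example; not the Data2Dynamics/dMod implementation).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 30)
y = 2.0 * np.exp(-0.8 * t) + rng.normal(0, 0.05, size=t.size)
sigma = 0.05

def neg2loglik(params):
    a, k = params
    resid = y - a * np.exp(-k * t)
    return np.sum((resid / sigma) ** 2)  # -2 log-likelihood up to a constant

# Global fit over both parameters.
fit = minimize(neg2loglik, x0=[1.0, 1.0], method="Nelder-Mead")
best = fit.fun

# Profile over k: fix k on a grid, re-optimize the remaining parameter a.
k_grid = np.linspace(0.4, 1.2, 41)
profile = []
for k_fixed in k_grid:
    sub = minimize(lambda p: neg2loglik([p[0], k_fixed]),
                   x0=[fit.x[0]], method="Nelder-Mead")
    profile.append(sub.fun - best)

# A profile that never exceeds the chi-square threshold over the whole grid
# indicates a practically non-identifiable parameter (a reduction candidate).
threshold = 3.84  # 95% point of chi-square with 1 degree of freedom
print("k identifiable at the 95% level:", max(profile) > threshold)
```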

  4. Inferring fixed effects in a mixed linear model from an integrated likelihood

    DEFF Research Database (Denmark)

    Gianola, Daniel; Sorensen, Daniel

    2008-01-01

    A new method for likelihood-based inference of fixed effects in mixed linear models, with variance components treated as nuisance parameters, is presented. The method uses uniform-integration of the likelihood; the implementation employs the expectation-maximization (EM) algorithm for elimination of all nuisances, viewing random effects and variance components as missing data. In a simulation of a grazing trial, the procedure was compared with four widely used estimators of fixed effects in mixed models, and found to be competitive. An analysis of body weight in freshwater crayfish was conducted......

  5. A New Speaker Verification Method with Global Speaker Model and Likelihood Score Normalization

    Institute of Scientific and Technical Information of China (English)

    张怡颖; 朱小燕; 张钹

    2000-01-01

    In this paper a new text-independent speaker verification method, GSMSV, is proposed based on likelihood score normalization. In this novel method a global speaker model is established to represent the universal features of speech and normalize the likelihood score. Statistical analysis demonstrates that this normalization method can remove common factors of speech and bring the differences between speakers into prominence. As a result, the equal error rate is decreased significantly, the verification procedure is accelerated, and system adaptability to speaking speed is improved.

  6. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards...... model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin...

  7. ASYMPTOTIC NORMALITY OF MAXIMUM QUASI-LIKELIHOOD ESTIMATORS IN GENERALIZED LINEAR MODELS WITH FIXED DESIGN

    Institute of Scientific and Technical Information of China (English)

    Qibing GAO; Yaohua WU; Chunhua ZHU; Zhanfeng WANG

    2008-01-01

    In generalized linear models with fixed design, under the assumption that the minimum eigenvalue $\underline{\lambda}_n \to \infty$ and other regularity conditions, the asymptotic normality of the maximum quasi-likelihood estimator $\hat{\beta}_n$, which is the root of the quasi-likelihood equation with natural link function $\sum_{i=1}^{n} X_i\,(y_i - \mu(X_i^{\prime}\beta)) = 0$, is obtained, where $\underline{\lambda}_n$ denotes the minimum eigenvalue of $\sum_{i=1}^{n} X_i X_i^{\prime}$, the $X_i$ are bounded $p \times q$ regressors, and the $y_i$ are $q \times 1$ responses.

  8. Modified likelihood ratio tests in heteroskedastic multivariate regression models with measurement error

    CERN Document Server

    Melo, Tatiane F N; Patriota, Alexandre G

    2012-01-01

    In this paper, we develop a modified version of the likelihood ratio test for multivariate heteroskedastic errors-in-variables regression models. The error terms are allowed to follow a multivariate distribution in the elliptical class of distributions, which has the normal distribution as a special case. We derive the Skovgaard adjusted likelihood ratio statistic, which follows a chi-squared distribution with a high degree of accuracy. We conduct a simulation study and show that the proposed test displays superior finite sample behavior as compared to the standard likelihood ratio test. We illustrate the usefulness of our results in applied settings using a data set from the WHO MONICA Project on cardiovascular disease.

  9. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  10. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    Science.gov (United States)

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  11. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are upd

  12. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider the variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector ingeniously, we propose an empirical likelihood based variable selection procedure, and show that it is consistent and possesses the sparsity property. The simulation studies show that the proposed variable selection method is workable.

  13. Maximum Likelihood Estimation of Nonlinear Structural Equation Models with Ignorable Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Lee, John C. K.

    2003-01-01

    The existing maximum likelihood theory and its computer software in structural equation modeling are established on the basis of linear relationships among latent variables with fully observed data. However, in social and behavioral sciences, nonlinear relationships among the latent variables are important for establishing more meaningful models…

  14. The Performance of the Full Information Maximum Likelihood Estimator in Multiple Regression Models with Missing Data.

    Science.gov (United States)

    Enders, Craig K.

    2001-01-01

    Examined the performance of a recently available full information maximum likelihood (FIML) estimator in a multiple regression model with missing data using Monte Carlo simulation and considering the effects of four independent variables. Results indicate that FIML estimation was superior to that of three ad hoc techniques, with less bias and less…

  15. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

  16. Asymptotic Properties of the Maximum Likelihood Estimate in Generalized Linear Models with Stochastic Regressors

    Institute of Scientific and Technical Information of China (English)

    Jie Li DING; Xi Ru CHEN

    2006-01-01

    For generalized linear models (GLM), in the case where the regressors are stochastic and have different distributions, the asymptotic properties of the maximum likelihood estimate (MLE) $\hat{\beta}_n$ of the parameters are studied. Under reasonable conditions, we prove the weak and strong consistency and the asymptotic normality of $\hat{\beta}_n$.

  17. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  18. A penalized likelihood approach for bivariate conditional normal models for dynamic co-expression analysis.

    Science.gov (United States)

    Chen, Jun; Xie, Jichun; Li, Hongzhe

    2011-03-01

    Gene co-expressions have been widely used in the analysis of microarray gene expression data. However, the co-expression patterns between two genes can be mediated by cellular states, as reflected by expression of other genes, single nucleotide polymorphisms, and activity of protein kinases. In this article, we introduce a bivariate conditional normal model for identifying the variables that can mediate the co-expression patterns between two genes. Based on this model, we introduce a likelihood ratio (LR) test and a penalized likelihood procedure for identifying the mediators that affect gene co-expression patterns. We propose an efficient computational algorithm based on iterative reweighted least squares and cyclic coordinate descent and have shown that when the tuning parameter in the penalized likelihood is appropriately selected, such a procedure has the oracle property in selecting the variables. We present simulation results to compare with existing methods and show that the LR-based approach can perform similarly or better than the existing method of liquid association and the penalized likelihood procedure can be quite effective in selecting the mediators. We apply the proposed method to yeast gene expression data in order to identify the kinases or single nucleotide polymorphisms that mediate the co-expression patterns between genes.

  19. Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Jesser J. Marulanda-Durango

    2012-12-01

    In this paper, we present a methodology for estimating the parameters of a model for an electrical arc furnace by using maximum likelihood estimation. Maximum likelihood estimation is one of the most employed methods for parameter estimation in practical settings. The model for the electrical arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, for solving a set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electrical arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results obtained show a maximum error of 5% in the root mean square value of the current.

  20. The empirical likelihood goodness-of-fit test for regression model

    Institute of Scientific and Technical Information of China (English)

    Li-xing ZHU; Yong-song QIN; Wang-li XU

    2007-01-01

    Goodness-of-fit testing for regression models has received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.

  1. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Queuing is the process of moving customers in a specific sequence to a specific service according to the customer need. The term scheduling stands for the process of computing a schedule, which may be done by a queuing-based scheduler. This paper focuses on bank queuing systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing bank queues that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time factor. The main innovation of this work is that the average waiting time is taken into account in processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.
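
    The role of the average waiting time as the switching criterion can be illustrated with a minimal sketch (assumed service times and two textbook disciplines, first-come-first-served and shortest-job-first; not the paper's banking architecture):

```python
# Minimal sketch comparing the average waiting time of two queue
# disciplines, first-come-first-served (FCFS) and shortest-job-first (SJF),
# on assumed service times for customers already waiting in line.
# (Illustrative only; not the paper's banking system.)
def average_wait(service_times):
    wait, elapsed = 0.0, 0.0
    for s in service_times:
        wait += elapsed      # each customer waits for everyone served before
        elapsed += s
    return wait / len(service_times)

service = [8.0, 2.0, 5.0, 1.0, 9.0]          # minutes per customer
fcfs = average_wait(service)                  # serve in arrival order
sjf = average_wait(sorted(service))           # serve shortest jobs first

print(f"FCFS average wait: {fcfs:.1f} min, SJF average wait: {sjf:.1f} min")
```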

  2. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
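
    The kind of Newton-Raphson iteration described in the report can be sketched generically (an assumed toy logistic-regression likelihood; the NDMMF's own score equations are not reproduced here):

```python
# Generic Newton-Raphson iteration for maximum-likelihood estimation,
# shown for a simple logistic-regression likelihood (assumed toy example;
# not the NDMMF's system of equations).
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
true_beta = np.array([-0.5, 1.2])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true)

beta = np.zeros(2)
for it in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    score = X.T @ (y - p)                      # gradient of the log-likelihood
    W = p * (1.0 - p)
    hessian = -(X * W[:, None]).T @ X          # second-derivative matrix
    step = np.linalg.solve(hessian, score)
    beta = beta - step                         # Newton-Raphson update
    if np.max(np.abs(step)) < 1e-8:
        break

print("MLE:", beta, "after", it + 1, "iterations")
```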

  3. ABC of SV: Limited Information Likelihood Inference in Stochastic Volatility Jump-Diffusion Models

    DEFF Research Database (Denmark)

    Creel, Michael; Kristensen, Dennis

    We develop novel methods for estimation and filtering of continuous-time models with stochastic volatility and jumps using so-called Approximate Bayesian Computation which build likelihoods based on limited information. The proposed estimators and filters are computationally attractive relative...... to standard likelihood-based versions since they rely on low-dimensional auxiliary statistics and so avoid computation of high-dimensional integrals. Despite their computational simplicity, we find that estimators and filters perform well in practice and lead to precise estimates of model parameters...... and latent variables. We show how the methods can incorporate intra-daily information to improve on the estimation and filtering. In particular, the availability of realized volatility measures help us in learning about parameters and latent states. The method is employed in the estimation of a flexible...

  4. Asymptotic normality and strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YIN; Changming; ZHAO; Lincheng; WEI; Chengdong

    2006-01-01

    In a generalized linear model with $q \times 1$ responses, bounded and fixed (or adaptive) $p \times q$ regressors $Z_i$, and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i^{\prime}$, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.

  5. ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    YUE LI; CHEN XIRU

    2005-01-01

    For the Generalized Linear Model (GLM), under some conditions including that the specification of the expectation is correct, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) in the case that the specification of the covariance matrix is correct.

  6. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    Zhi-hua SUN

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when we have the model information of the conditional mean and conditional variance for the responses given the covariates. The quasi-likelihood method adapted to treatment effects data is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained compared with the existing estimators.
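
    The regression (imputation) and inverse-probability-weighted estimators mentioned above can be sketched on synthetic data (an assumed data-generating process; the paper's quasi-likelihood variance modelling is not reproduced):

```python
# Minimal sketch of regression-imputation and inverse-probability-weighted
# (IPW) estimators of an average treatment effect on synthetic data
# (assumed data-generating process; not the paper's quasi-likelihood setup).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                          # covariate
p_treat = 1.0 / (1.0 + np.exp(-x))              # propensity score (known here)
t = rng.binomial(1, p_treat)                    # treatment indicator
y = 2.0 * t + x + rng.normal(size=n)            # outcome; true ATE = 2

# Regression (imputation) estimator: fit an outcome model in each arm,
# impute both potential outcomes for everyone, average the difference.
b1 = np.polyfit(x[t == 1], y[t == 1], 1)
b0 = np.polyfit(x[t == 0], y[t == 0], 1)
ate_reg = np.mean(np.polyval(b1, x) - np.polyval(b0, x))

# IPW estimator: weight observed outcomes by inverse treatment probability.
ate_ipw = np.mean(t * y / p_treat) - np.mean((1 - t) * y / (1 - p_treat))

print(ate_reg, ate_ipw)   # both should be close to the true value of 2
```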

  8. Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Claus Vogl

    2014-11-01

    In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.

  9. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    Science.gov (United States)

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  10. Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.

    Science.gov (United States)

    Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z

    2007-08-15

    Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for the maximum likelihood estimation approach is developed; it uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
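
    The alternating E- and M-step structure can be sketched for a much simpler model (an assumed two-component Gaussian mixture; the paper's nonlinear random-effects mixture requires a sampling-based E-step that is not shown):

```python
# Generic EM sketch for a two-component Gaussian mixture, to show the
# alternating E- and M-step structure (assumed toy data; the paper's
# nonlinear random-effects mixture needs a sampling-based E-step).
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior membership probabilities ("responsibilities").
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates of the parameters.
    nk = resp.sum(axis=0)
    w = nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(w, mu, sd)
```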

  11. Empirical likelihood confidence regions of the parameters in a partially linear single-index model

    Institute of Scientific and Technical Information of China (English)

    XUE Liugen; ZHU Lixing

    2005-01-01

    In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.

  12. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    This paper deals with the issue of likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation, as well as the procedures for confidence estimation and hypothesis testing for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and then the confidence interval and hypothesis test are also developed. Furthermore, a real example and a simulation are conducted to demonstrate the usefulness and the superiority of our approach.

  13. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    GaoChunwen; XuJingzhen; RichardSinding-Larsen

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.
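
    The posterior-sampling idea can be sketched with a minimal Metropolis example (an assumed Poisson-rate model with a Gamma prior, chosen because its posterior is known in closed form; not the discretized discovery process model itself):

```python
# Minimal Metropolis sketch: drawing posterior samples for a Poisson rate
# with a Gamma prior (assumed toy model; not the discovery process model,
# whose likelihood is far more involved).
import numpy as np

rng = np.random.default_rng(7)
counts = np.array([3, 5, 4, 6, 2])          # toy observed counts
a0, b0 = 2.0, 1.0                           # Gamma(shape, rate) prior

def log_post(lam):
    if lam <= 0:
        return -np.inf
    log_lik = np.sum(counts * np.log(lam) - lam)           # Poisson kernel
    log_prior = (a0 - 1) * np.log(lam) - b0 * lam           # Gamma kernel
    return log_lik + log_prior

samples, lam = [], 1.0
for _ in range(20000):
    prop = lam + rng.normal(0, 0.5)                         # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(lam):
        lam = prop
    samples.append(lam)

# Posterior mean; the analytic answer is (a0 + sum(counts)) / (b0 + n).
print(np.mean(samples[5000:]), (a0 + counts.sum()) / (b0 + counts.size))
```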

  14. Hidden Markov models in automatic speech recognition

    Science.gov (United States)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMM's designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
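
    The core likelihood computation for an HMM, the forward algorithm, can be sketched for a small discrete model (assumed toy transition, emission and initial probabilities; not the Speech Acoustics Laboratory system):

```python
# Forward-algorithm sketch: likelihood of an observation sequence under a
# small discrete HMM (assumed toy parameters, not an actual ASR model).
import numpy as np

A = np.array([[0.7, 0.3],       # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],  # emission probabilities per state
              [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])       # initial state distribution

obs = [0, 2, 1, 2]              # observed symbol indices

# Forward recursion with per-step scaling for numerical stability.
alpha = pi * B[:, obs[0]]
c = alpha.sum()
alpha /= c
log_lik = np.log(c)
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
    c = alpha.sum()
    alpha /= c
    log_lik += np.log(c)

print("log P(observations | model) =", log_lik)
```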

  15. Biological models for automatic target detection

    Science.gov (United States)

    Schachter, Bruce

    2008-04-01

    Humans are better at detecting targets in literal imagery than any known algorithm. Recent advances in modeling visual processes have resulted from f-MRI brain imaging with humans and the use of more invasive techniques with monkeys. There are four startling new discoveries. 1) The visual cortex does not simply process an incoming image. It constructs a physics based model of the image. 2) Coarse category classification and range-to-target are estimated quickly - possibly through the dorsal pathway of the visual cortex, combining rapid coarse processing of image data with expectations and goals. This data is then fed back to lower levels to resize the target and enhance the recognition process feeding forward through the ventral pathway. 3) Giant photosensitive retinal ganglion cells provide data for maintaining circadian rhythm (time-of-day) and modeling the physics of the light source. 4) Five filter types implemented by the neurons of the primary visual cortex have been determined. A computer model for automatic target detection has been developed based upon these recent discoveries. It uses an artificial neural network architecture with multiple feed-forward and feedback paths. Our implementation's efficiency derives from the observation that any 2-D filter kernel can be approximated by a sum of 2-D box functions. And, a 2-D box function easily decomposes into two 1-D box functions. Further efficiency is obtained by decomposing the largest neural filter into a high pass filter and a more sparsely sampled low pass filter.
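
    The separability claim, that a 2-D box filter is equivalent to two successive 1-D box filters, can be checked with a short sketch (assumed synthetic image and window size; not the paper's neural-network implementation):

```python
# Minimal sketch of the separability claim: a 2-D box filter equals two
# successive 1-D box filters, so large smoothing kernels become cheap.
# (Illustration only; not the paper's neural architecture.)
import numpy as np

def box1d(a, k, axis):
    """Running sum of length k along one axis via a cumulative sum."""
    c = np.cumsum(a, axis=axis)
    c = np.insert(c, 0, 0.0, axis=axis)
    lead = np.take(c, range(k, c.shape[axis]), axis=axis)
    lag = np.take(c, range(0, c.shape[axis] - k), axis=axis)
    return lead - lag

rng = np.random.default_rng(5)
img = rng.random((64, 64))
k = 5

# Two 1-D passes (rows, then columns) ...
sep = box1d(box1d(img, k, axis=0), k, axis=1)

# ... match a direct 2-D box sum over k-by-k windows.
direct = np.array([[img[i:i + k, j:j + k].sum()
                    for j in range(img.shape[1] - k + 1)]
                   for i in range(img.shape[0] - k + 1)])

print(np.allclose(sep, direct))   # True
```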

  16. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    Science.gov (United States)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and a patch of AO outliers. On the other hand, the TMLE is reduced to a combinatorial optimisation problem and is hard to implement, but it is robust to both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation result shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  17. Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Gao Rong LI; Ping TIAN; Liu Gen XUE

    2008-01-01

    In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.

  18. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
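
    The weighted-model-likelihood averaging step corresponds to standard Akaike weights, which can be sketched as follows (the AIC values and per-model estimates below are assumed for illustration):

```python
# Akaike-weight sketch for model averaging (generic illustration with
# assumed AIC values; not the clustering profiles from the paper).
import numpy as np

aic = np.array([102.3, 100.1, 105.7])      # AIC of each candidate model
delta = aic - aic.min()                    # AIC differences
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                   # Akaike weights sum to one

# Model-averaged estimate of some per-site quantity, e.g. a clustering level
# estimated under each model (values assumed for illustration).
estimates = np.array([0.12, 0.20, 0.05])
averaged = np.sum(weights * estimates)
print(weights, averaged)
```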

  19. Calibrating floor field cellular automaton models for pedestrian dynamics by using likelihood function optimization

    Science.gov (United States)

    Lovreglio, Ruggiero; Ronchi, Enrico; Nilsson, Daniel

    2015-11-01

    The formulation of pedestrian floor field cellular automaton models is generally based on hypothetical assumptions to represent reality. This paper proposes a novel methodology to calibrate these models using experimental trajectories. The methodology is based on likelihood function optimization and allows verifying whether the parameters defining a model statistically affect pedestrian navigation. Moreover, it allows comparing different model specifications or the parameters of the same model estimated using different data collection techniques, e.g. virtual reality experiment, real data, etc. The methodology is here implemented using navigation data collected in a Virtual Reality tunnel evacuation experiment including 96 participants. A trajectory dataset in the proximity of an emergency exit is used to test and compare different metrics, i.e. Euclidean and modified Euclidean distance, for the static floor field. In the present case study, modified Euclidean metrics provide better fitting with the data. A new formulation using random parameters for pedestrian cellular automaton models is also defined and tested.
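
    The likelihood-function optimization can be sketched for an assumed multinomial-logit floor-field choice rule on synthetic moves (the model form, parameter and data below are assumptions, not the authors' tunnel-evacuation dataset):

```python
# Minimal sketch of likelihood-based calibration for a floor-field choice
# rule: the probability of stepping to neighbour j is taken proportional to
# exp(kS * S_j), where S_j is the static floor field value (assumed model
# form and synthetic data; not the authors' experiment).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)

# Synthetic "observed" moves: for each decision, the static-field values of
# the candidate cells and the index of the cell actually chosen.
n_decisions, n_neighbours = 200, 4
S = rng.normal(size=(n_decisions, n_neighbours))
true_k = 1.5
p = np.exp(true_k * S)
p /= p.sum(axis=1, keepdims=True)
chosen = np.array([rng.choice(n_neighbours, p=row) for row in p])

def neg_log_lik(k):
    logits = k * S
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_decisions), chosen].sum()

res = minimize_scalar(neg_log_lik, bounds=(0.0, 5.0), method="bounded")
print("estimated sensitivity parameter:", res.x)
```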

  20. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    Directory of Open Access Journals (Sweden)

    Aslıhan Kıymalıoğlu

    2014-12-01

    Full Text Available The Elaboration Likelihood Model (ELM), which posits two routes to persuasion, a central and a peripheral route, has been one of the major models of persuasion. As the number of studies on the ELM in the Turkish literature is limited, a detailed explanation of the model together with a comprehensive literature review was considered a useful contribution toward filling this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept most frequently used in the elaboration process was involvement, and that argument quality and endorser credibility were the factors most often employed in measuring effects on the dependent variables. The review provides valuable insights as it presents a holistic view of the model and the variables used in it.

  1. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    Science.gov (United States)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HEC-HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions, L1-L4, namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining three (L5-L7) fall in the formal category. L5 builds on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of the residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of the parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7.
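
    For readers unfamiliar with the distinction drawn above, the minimal sketch below contrasts one informal measure (Nash-Sutcliffe efficiency, cf. L1) with a formal Gaussian log-likelihood whose residuals follow a first-order autoregressive model (cf. L7). It assumes a constant error variance and is not the DREAM(ZS) implementation itself.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """Informal measure: Nash-Sutcliffe efficiency."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def gaussian_ar1_loglik(obs, sim, sigma, rho):
            """Formal likelihood: Gaussian errors with AR(1) serial dependence.

            Residuals e_t = obs_t - sim_t follow e_t = rho*e_{t-1} + eta_t with
            eta_t ~ N(0, sigma^2); constant variance is an assumption of this sketch.
            """
            e = np.asarray(obs, float) - np.asarray(sim, float)
            eta = e[1:] - rho * e[:-1]                               # innovations
            n = len(e)
            # exact term for the first residual (stationary variance sigma^2/(1-rho^2))
            ll = -0.5 * np.log(2 * np.pi * sigma**2 / (1 - rho**2)) \
                 - 0.5 * e[0]**2 * (1 - rho**2) / sigma**2
            # conditional terms for the remaining residuals
            ll += -0.5 * (n - 1) * np.log(2 * np.pi * sigma**2) \
                  - 0.5 * np.sum(eta**2) / sigma**2
            return ll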

  2. Conditional maximum likelihood estimation in semiparametric transformation model with LTRC data.

    Science.gov (United States)

    Chen, Chyong-Mei; Shen, Pao-Sheng

    2017-02-06

    Left-truncated data often arise in epidemiology and individual follow-up studies due to a biased sampling plan, since subjects with shorter survival times tend to be excluded from the sample. Moreover, the survival times of recruited subjects are often subject to right censoring. In this article, a general class of semiparametric transformation models that includes the proportional hazards and proportional odds models as special cases is studied for the analysis of left-truncated and right-censored data. We propose a conditional likelihood approach and develop the conditional maximum likelihood estimators (cMLE) for the regression parameters and cumulative hazard function of these models. The derived score equations for the regression parameters and the infinite-dimensional function suggest an iterative algorithm for the cMLE. The cMLE is shown to be consistent and asymptotically normal. The limiting variances for the estimators can be consistently estimated using the inverse of the negative Hessian matrix. Intensive simulation studies are conducted to investigate the performance of the cMLE. An application to the Channing House data is given to illustrate the methodology.

  3. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely...... focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual......-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...
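
    For orientation, the generic maximum likelihood cue-integration rule weights each modality estimate by its inverse variance; this is the standard textbook form, not necessarily the exact specification of the early MLE model above:

        \hat{s}_{AV} = \frac{\sigma_A^{-2}\,\hat{s}_A + \sigma_V^{-2}\,\hat{s}_V}{\sigma_A^{-2} + \sigma_V^{-2}},
        \qquad
        \sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}} \le \min(\sigma_A^{2},\,\sigma_V^{2})

    so the integrated estimate is never less reliable than the better of the two unimodal estimates.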

  4. Raw Data Maximum Likelihood Estimation for Common Principal Component Models: A State Space Approach.

    Science.gov (United States)

    Gu, Fei; Wu, Hao

    2016-09-01

    The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.

  5. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    YUE Li; CHEN Xiru

    2004-01-01

    Under the assumption that in the generalized linear model (GLM) the expectation of the response variable has a correct specification, together with some other smoothness conditions, it is shown that with probability one the quasi-likelihood equation for the GLM has a solution when the sample size n is sufficiently large. The rate at which this solution tends to the true value is determined. In an important special case, this rate is the same as that specified in the LIL for iid partial sums and thus cannot be improved further.

  6. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    CERN Document Server

    Kreutz, Clemens; Timmer, Jens

    2011-01-01

    Mechanistic dynamic models of biochemical networks such as Ordinary Differential Equations (ODEs) contain unknown parameters like the reaction rate constants and the initial concentrations of the compounds. The large number of parameters as well as their nonlinear impact on the model responses hamper the determination of confidence regions for parameter estimates. At the same time, classical approaches translating the uncertainty of the parameters into confidence intervals for model predictions are hardly feasible. In this article it is shown that a so-called prediction profile likelihood yields reliable confidence intervals for model predictions, despite arbitrarily complex and high-dimensional shapes of the confidence regions for the estimated parameters. Prediction confidence intervals of the dynamic states allow a data-based observability analysis. The approach renders the issue of sampling a high-dimensional parameter space into evaluating one-dimensional prediction spaces. The method is also applicable ...
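
    The sketch below illustrates the idea of a prediction profile likelihood under simplifying assumptions: for each candidate value of a scalar prediction, the likelihood is re-maximized subject to the constraint that the model reproduces that value. The interfaces (negloglik, predict) are hypothetical stand-ins for a user-supplied ODE model and data.

        import numpy as np
        from scipy.optimize import minimize

        def prediction_profile(negloglik, predict, theta_hat, z_grid):
            """Profile the likelihood over a scalar model prediction.

            negloglik : theta -> negative log-likelihood of the data (assumed given)
            predict   : theta -> scalar prediction of interest, e.g. a state at time t
            theta_hat : maximum likelihood estimate, used as the starting point
            z_grid    : candidate values of the prediction
            """
            profile = []
            for z in z_grid:
                res = minimize(
                    negloglik, theta_hat, method="SLSQP",
                    constraints=[{"type": "eq", "fun": lambda th, z=z: predict(th) - z}],
                )
                profile.append(res.fun)   # smallest attainable -log L given prediction == z
            return np.asarray(profile)

        # A (1 - alpha) prediction confidence interval contains all z whose profiled
        # -log L lies within chi2_{1, 1-alpha}/2 of the overall minimum.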

  7. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Full Text Available Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in real environments shared with humans. The reliability of localization depends heavily on the developer's experience because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions to the following questions, which are frequently faced in practical applications: 1) How to design an observation likelihood model? 2) How to detect localization failure? 3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme for identifying the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of the experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
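
    The record does not reproduce its design guidelines, but a common observation likelihood design for range-sensor localization (shown here as a hedged sketch, not the authors' model) mixes a Gaussian "hit" term around the ray-cast expected range with a uniform random-measurement term:

        import numpy as np

        def range_likelihood(z, z_expected, sigma_hit=0.1, z_max=10.0,
                             w_hit=0.8, w_rand=0.2):
            """Mixture observation likelihood for one range beam (a common design).

            z          : measured range
            z_expected : range predicted by ray casting from the particle pose
            """
            p_hit = np.exp(-0.5 * ((z - z_expected) / sigma_hit) ** 2) / \
                    (sigma_hit * np.sqrt(2 * np.pi))
            p_rand = 1.0 / z_max if 0.0 <= z <= z_max else 0.0
            return w_hit * p_hit + w_rand * p_rand

        def pose_log_likelihood(scan, expected_scan):
            """Sum log-likelihoods over beams; exponentiate to weight a particle."""
            return sum(np.log(range_likelihood(z, ze))
                       for z, ze in zip(scan, expected_scan))

    The uniform component keeps the likelihood from collapsing to zero on outlier beams, which in practice makes localization far more robust to unmodelled obstacles such as people.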

  8. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation Σ_{i=1}^n X_i(y_i − μ(X_i'β)) = 0 for the univariate generalized linear model E(y|X) = μ(X'β). Given uncorrelated residuals {e_i = Y_i − μ(X_i'β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{-1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = Σ_{i=1}^n X_i X_i'. We also show that this convergence rate is sharp, provided an independent, non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{-1} → 0 as the sample size n → ∞.
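
    As a concrete illustration of the estimating equation studied above (not part of the paper itself), the sketch below solves Σ_{i=1}^n X_i(y_i − μ(X_i'β)) = 0 by Newton's method for the log link μ(u) = exp(u):

        import numpy as np

        def solve_quasi_likelihood(X, y, mu=np.exp, dmu=np.exp, n_iter=25):
            """Solve sum_i X_i (y_i - mu(X_i' beta)) = 0 by Newton's method.

            Defaults assume the log link, mu(u) = exp(u), so dmu = exp as well.
            """
            X, y = np.asarray(X, float), np.asarray(y, float)
            beta = np.zeros(X.shape[1])
            for _ in range(n_iter):
                eta = X @ beta
                score = X.T @ (y - mu(eta))               # the estimating equation
                info = X.T @ (dmu(eta)[:, None] * X)      # its negative Jacobian
                beta += np.linalg.solve(info, score)      # Newton update
            return beta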

  9. On some problems of weak consistency of quasi-maximum likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    ZHANG SanGuo; LIAO Yuan

    2008-01-01

    In this paper, we explore some weakly consistent properties of quasi-maximum likelihood estimates (QMLE) concerning the quasi-likelihood equation Σ_{i=1}^n X_i(y_i − μ(X_i'β)) = 0 for the univariate generalized linear model E(y|X) = μ(X'β). Given uncorrelated residuals {e_i = Y_i − μ(X_i'β_0), 1 ≤ i ≤ n} and other conditions, we prove that β̂_n − β_0 = O_p(λ_n^{-1/2}) holds, where β̂_n is a root of the above equation, β_0 is the true value of the parameter β, and λ_n denotes the smallest eigenvalue of the matrix S_n = Σ_{i=1}^n X_i X_i'. We also show that this convergence rate is sharp, provided an independent, non-asymptotically degenerate residual sequence and other conditions. Moreover, paralleling the elegant result of Drygas (1976) for classical linear regression models, we point out that the necessary condition guaranteeing the weak consistency of the QMLE is S_n^{-1} → 0 as the sample size n → ∞.

  10. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, is accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.

  11. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    Science.gov (United States)

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on scores for purchase intention by consumers lowly involved rather than highly involved with a product (n = 40). Undergraduates who were classified as either highly or lowly (ns = 20 and 20) involved with M&Ms examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that the model has implications beyond advertising effects and into packaging effects.

  12. The likelihood of achieving quantified road safety targets: a binary logistic regression model for possible factors.

    Science.gov (United States)

    Sze, N N; Wong, S C; Lee, C Y

    2014-12-01

    In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and to facilitate continuous road safety improvement. Studies have been conducted to evaluate the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, not much work has been done to evaluate whether the quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors - including vehicle ownership, fatality rate, and national income, in addition to the level of ambition and duration of the target - that contribute to a target's success. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that are still in progress and targets with a lower level of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between the level of ambition and the likelihood of success are also revealed.
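
    A minimal sketch of the kind of binary logit described above is given below, assuming a hypothetical data file and column names; the actual variables and data are those of the study, not this sketch.

        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical columns: 'achieved' (0/1), 'ambition', 'duration',
        # 'vehicle_ownership', 'fatality_rate', 'national_income'.
        targets = pd.read_csv("road_safety_targets.csv")          # assumed data file

        X = sm.add_constant(targets[["ambition", "duration", "vehicle_ownership",
                                     "fatality_rate", "national_income"]])
        model = sm.Logit(targets["achieved"], X).fit()
        print(model.summary())                 # sign and size of the coefficients
        print(model.get_margeff().summary())   # average marginal effects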

  13. Strong Consistency of Maximum Quasi-Likelihood Estimator in Quasi-Likelihood Nonlinear Models

    Institute of Scientific and Technical Information of China (English)

    夏天; 孔繁超

    2008-01-01

    This paper proposes some regularity conditions. On the basis of the proposed regularity conditions, we show the strong consistency of maximum quasi-likelihood estimation (MQLE) in quasi-likelihood nonlinear models (QLNM). Our results may be regarded as a further generalization of the relevant results in Ref. [4].

  14. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty in deriving optimal control strategies for pulsed-power converters is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account the electric field energy represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experimental results.

  15. Cognitive theories as reinforcement history surrogates: the case of likelihood ratio models of human recognition memory.

    Science.gov (United States)

    Wixted, John T; Gaitan, Santino C

    2002-11-01

    B. F. Skinner (1977) once argued that cognitive theories are essentially surrogates for the organism's (usually unknown) reinforcement history. In this article, we argue that this notion applies rather directly to a class of likelihood ratio models of human recognition memory. The point is not that such models are fundamentally flawed or that they are not useful and should be abandoned. Instead, the point is that the role of reinforcement history in shaping memory decisions could help to explain what otherwise must be explained by assuming that subjects are inexplicably endowed with the relevant distributional information and computational abilities. To the degree that a role for an organism's reinforcement history is appreciated, the importance of animal memory research in understanding human memory comes into clearer focus. As Skinner was also fond of pointing out, it is only in the animal laboratory that an organism's history of reinforcement can be precisely controlled and its effects on behavior clearly understood.

  16. Bayesian Inference using Neural Net Likelihood Models for Protein Secondary Structure Prediction

    Directory of Open Access Journals (Sweden)

    Seong-Gon Kim

    2011-06-01

    Full Text Available Several techniques, such as neural networks, genetic algorithms, decision trees and other statistical or heuristic methods, have been used in the past to approach the complex non-linear task of predicting alpha-helices, beta-sheets and turns in a protein's secondary structure. This project introduces a new machine learning method that uses an offline-trained Multilayered Perceptron (MLP) as the likelihood model within a Bayesian inference framework to predict the secondary structure of proteins. Varying window sizes are used to extract neighboring amino acid information, which is passed back and forth between the Neural Net models and the Bayesian inference process until the posterior secondary structure probabilities converge.
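
    A minimal sketch of the Bayesian update at the core of such a scheme is shown below. The prior and the MLP-scored likelihoods are assumed to be given, and the iteration noted in the comment is an assumption of this sketch rather than the authors' exact procedure.

        import numpy as np

        STATES = ("helix", "sheet", "turn")

        def posterior(prior, nn_likelihood):
            """Combine a prior over secondary-structure states with MLP likelihoods.

            prior         : P(state), e.g. from neighbouring-residue predictions
            nn_likelihood : P(window features | state) as scored by the trained MLP
            """
            unnorm = np.asarray(prior, float) * np.asarray(nn_likelihood, float)
            return unnorm / unnorm.sum()

        # Iterating this update along the sequence, feeding posteriors back as the
        # next priors, mirrors the back-and-forth between the MLP and the Bayesian
        # inference step described above (an assumption of this sketch).
        p = posterior([1/3, 1/3, 1/3], [0.6, 0.3, 0.1])
        print(dict(zip(STATES, np.round(p, 3))))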

  17. Adapting Predictive Models for Cepheid Variable Star Classification Using Linear Regression and Maximum Likelihood

    Science.gov (United States)

    Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas

    2014-05-01

    We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtain a precise determination of distances to nearby galaxies, which in addition helps reduce the uncertainty in the current expansion of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained using a training dataset may be ineffectual on a testing set. A solution to such difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used in the testing data (target galaxy).

  18. Maximum Likelihood Implementation of an Isolation-with-Migration Model for Three Species.

    Science.gov (United States)

    Dalquen, Daniel A; Zhu, Tianqi; Yang, Ziheng

    2016-08-02

    We develop a maximum likelihood (ML) method for estimating migration rates between species using genomic sequence data. A species tree is used to accommodate the phylogenetic relationships among three species, allowing for migration between the two sister species, while the third species is used as an out-group. A Markov chain characterization of the genealogical process of coalescence and migration is used to integrate out the migration histories at each locus analytically, whereas Gaussian quadrature is used to integrate over the coalescent times on each genealogical tree numerically. This is an extension of our early implementation of the symmetrical isolation-with-migration model for three species to accommodate arbitrary loci with two or three sequences per locus and to allow asymmetrical migration rates. Our implementation can accommodate tens of thousands of loci, making it feasible to analyze genome-scale data sets to test for gene flow. We calculate the posterior probabilities of gene trees at individual loci to identify genomic regions that are likely to have been transferred between species due to gene flow. We conduct a simulation study to examine the statistical properties of the likelihood ratio test for gene flow between the two in-group species and of the ML estimates of model parameters such as the migration rate. Inclusion of data from a third out-group species is found to increase dramatically the power of the test and the precision of parameter estimation. We compiled and analyzed several genomic data sets from the Drosophila fruit flies. Our analyses suggest no migration from D. melanogaster to D. simulans, and a significant amount of gene flow from D. simulans to D. melanogaster, at the rate of [Formula: see text] migrant individuals per generation. We discuss the utility of the multispecies coalescent model for species tree estimation, accounting for incomplete lineage sorting and migration.

  19. Quasi-Maximum Likelihood Estimators in Generalized Linear Models with Autoregressive Processes

    Institute of Scientific and Technical Information of China (English)

    Hong Chang HU; Lei SONG

    2014-01-01

    The paper studies a generalized linear model (GLM) y_t = h(x_t'β) + ε_t, t = 1, 2, ..., n, where ε_1 = η_1, ε_t = ρε_{t-1} + η_t, t = 2, 3, ..., n, h is a continuously differentiable function, and the η_t are independent and identically distributed random errors with zero mean and finite variance σ². Firstly, the quasi-maximum likelihood (QML) estimators of β, ρ and σ² are given. Secondly, under mild conditions, the asymptotic properties (including the existence, weak consistency and asymptotic distribution) of the QML estimators are investigated. Lastly, the validity of the method is illustrated by a simulation example.

  20. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Esra Saatci

    2010-01-01

    Full Text Available We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by the MLE and kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor values are estimated in the super-Gaussian area.

  1. Use of empirical likelihood to calibrate auxiliary information in partly linear monotone regression models.

    Science.gov (United States)

    Chen, Baojiang; Qin, Jing

    2014-05-10

    In statistical analysis, a regression model is needed if one is interested in finding the relationship between a response variable and covariates. When the response depends on a covariate, it may do so through some function of that covariate. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, people often employ the augmented estimating method to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work because the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model to incorporate the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study.
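
    For reference, a minimal sketch of the pool-adjacent-violators algorithm mentioned above is given below (a generic weighted PAVA for an increasing fit, not the authors' empirical likelihood procedure):

        import numpy as np

        def pava(y, w=None):
            """Pool-adjacent-violators algorithm for a monotone increasing fit."""
            y = np.asarray(y, float)
            w = np.ones_like(y) if w is None else np.asarray(w, float)
            levels = []          # each entry: [fitted value, total weight, block length]
            for yi, wi in zip(y, w):
                levels.append([yi, wi, 1])
                # merge adjacent blocks while the monotonicity constraint is violated
                while len(levels) > 1 and levels[-2][0] > levels[-1][0]:
                    v2, w2, n2 = levels.pop()
                    v1, w1, n1 = levels.pop()
                    wt = w1 + w2
                    levels.append([(w1 * v1 + w2 * v2) / wt, wt, n1 + n2])
            return np.concatenate([np.full(n, v) for v, _, n in levels])

        print(pava([1.0, 3.0, 2.0, 4.0, 3.0]))   # -> [1.  2.5 2.5 3.5 3.5]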

  2. The null distribution of likelihood-ratio statistics in the conditional-logistic linkage model.

    Science.gov (United States)

    Song, Yeunjoo E; Elston, Robert C

    2013-01-01

    Olson's conditional-logistic model retains the nice property of the LOD score formulation and has advantages over other methods that make it an appropriate choice for complex trait linkage mapping. However, the asymptotic distribution of the conditional-logistic likelihood-ratio (CL-LR) statistic with genetic constraints on the model parameters is unknown for some analysis models, even in the case of samples comprising only independent sib pairs. We derive approximations to the asymptotic null distributions of the CL-LR statistics and compare them with the empirical null distributions by simulation using independent affected sib pairs. Generally, the empirical null distributions of the CL-LR statistics match well the known or approximated asymptotic distributions for all analysis models considered except for the covariate model with a minimum-adjusted binary covariate. This work will provide useful guidelines for linkage analysis of real data sets for the genetic analysis of complex traits, thereby contributing to the identification of genes for disease traits.

  3. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    Science.gov (United States)

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right-censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence.

  4. Comparing Bayesian and Maximum Likelihood Predictors in Structural Equation Modeling of Children’s Lifestyle Index

    Directory of Open Access Journals (Sweden)

    Che Wan Jasimah bt Wan Mohamed Radzi

    2016-11-01

    Full Text Available Several factors may influence children’s lifestyle. The main purpose of this study is to introduce a children’s lifestyle index framework and model it based on structural equation modeling (SEM with Maximum likelihood (ML and Bayesian predictors. This framework includes parental socioeconomic status, household food security, parental lifestyle, and children’s lifestyle. The sample for this study involves 452 volunteer Chinese families with children 7–12 years old. The experimental results are compared in terms of root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error metrics. An analysis of the proposed causal model suggests there are multiple significant interconnections among the variables of interest. According to both Bayesian and ML techniques, the proposed framework illustrates that parental socioeconomic status and parental lifestyle strongly impact children’s lifestyle. The impact of household food security on children’s lifestyle is rejected. However, there is a strong relationship between household food security and both parental socioeconomic status and parental lifestyle. Moreover, the outputs illustrate that the Bayesian prediction model has a good fit with the data, unlike the ML approach. The reasons for this discrepancy between ML and Bayesian prediction are debated and potential advantages and caveats with the application of the Bayesian approach in future studies are discussed.

  5. An automatic composition model of Chinese folk music

    Science.gov (United States)

    Zheng, Xiaomei; Li, Dongyang; Wang, Lei; Shen, Lin; Gao, Yanyuan; Zhu, Yuanyuan

    2017-03-01

    Automatic composition has achieved rich results in recent decades, covering Western music and some other musical traditions. However, the automatic composition of Chinese music has received less attention. After thousands of years of development, Chinese folk music offers a wealth of resources. Designing an automatic composition model that learns the characteristics of Chinese folk melody and imitates the creative process of music is therefore of some significance. Based on the melodic features of Chinese folk music, a composition model built on a Markov model is proposed to analyze Chinese traditional music. Folk songs with typical Chinese national characteristics are selected for analysis. In this paper, an example of automatic composition is given. The experimental results show that this composition model can produce music with the characteristics of Chinese folk music.
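
    A minimal sketch of such a first-order Markov composer is shown below; the toy pentatonic corpus and note names are purely illustrative assumptions, not the data used in the paper.

        import random
        from collections import defaultdict

        def train_markov(melodies):
            """Estimate first-order transition probabilities from note sequences."""
            counts = defaultdict(lambda: defaultdict(int))
            for melody in melodies:
                for a, b in zip(melody, melody[1:]):
                    counts[a][b] += 1
            return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
                    for a, nxt in counts.items()}

        def compose(transitions, start, length=16, seed=None):
            """Sample a new melody by walking the transition table."""
            rng = random.Random(seed)
            note, melody = start, [start]
            for _ in range(length - 1):
                nxt = transitions.get(note)
                if not nxt:                  # dead end: restart from the opening note
                    note = start
                else:
                    note = rng.choices(list(nxt), weights=list(nxt.values()))[0]
                melody.append(note)
            return melody

        # Toy corpus on a pentatonic scale (do re mi sol la), purely illustrative.
        corpus = [["do", "re", "mi", "sol", "mi", "re", "do"],
                  ["sol", "la", "sol", "mi", "re", "do"]]
        print(compose(train_markov(corpus), start="do", length=12, seed=1))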

  6. Automatic Modeling of Virtual Humans and Body Clothing

    Institute of Scientific and Technical Information of China (English)

    Nadia Magnenat-Thalmann; Hyewon Seo; Frederic Cordier

    2004-01-01

    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shapes and requiring a labor-intensive creation process, pose a challenge for automatic modeling. The problem of, and solutions to, automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling the static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans, are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.

  7. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variation of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool for assisting the modeller with the identification of critical parameters.
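
    A minimal GLUE sketch consistent with the procedure described above is given below. The model and prior_sampler interfaces are hypothetical stand-ins for the lumped water balance model and its parameter ranges, and the Nash-Sutcliffe threshold is only an example.

        import numpy as np

        def glue(model, obs, prior_sampler, n_samples=5000, threshold=0.6, rng=None):
            """Generalized Likelihood Uncertainty Estimation (minimal sketch).

            model         : params -> simulated runoff series (hypothetical interface)
            prior_sampler : rng -> one parameter set drawn from the prior ranges
            threshold     : Nash-Sutcliffe value separating behavioural sets
            """
            rng = np.random.default_rng(rng)
            sims, scores = [], []
            for _ in range(n_samples):
                theta = prior_sampler(rng)
                sim = model(theta)
                ns = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
                if ns > threshold:                      # keep behavioural sets only
                    sims.append(sim)
                    scores.append(ns)
            sims, w = np.asarray(sims), np.asarray(scores)
            w = w / w.sum()                             # likelihood weights
            # Weighted 5%/95% bounds at each time step form the uncertainty band.
            lower, upper = [], []
            for t in range(sims.shape[1]):
                order = np.argsort(sims[:, t])
                s, cw = sims[order, t], np.cumsum(w[order])
                lower.append(np.interp(0.05, cw, s))
                upper.append(np.interp(0.95, cw, s))
            return np.array(lower), np.array(upper)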

  8. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and then fix the size of the neuro-fuzzy network, by which the complexity of system design is reduced greatly at the price of some fitting capability; 2) recursive least squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; and 3) a gradient descent algorithm, proposed for the fuzzy values according to the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is presented to validate the feasibility of the method.

  9. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  10. Joint and Conditional Maximum Likelihood Estimation for the Rasch Model for Binary Responses. Research Report. RR-04-20

    Science.gov (United States)

    Haberman, Shelby J.

    2004-01-01

    The usefulness of joint and conditional maximum likelihood is considered for the Rasch model under realistic testing conditions in which the number of examinees is very large and the number of items is relatively large. Conditions for consistency and asymptotic normality are explored, effects of model error are investigated, measures of prediction…

  11. Issues in acoustic modeling of speech for automatic speech recognition

    OpenAIRE

    Gong, Yifan; Haton, Jean-Paul; Mari, Jean-François

    1994-01-01

    Projet RFIA; Stochastic modeling is a flexible method for handling the large variability in speech for recognition applications. In contrast to dynamic time warping where heuristic training methods for estimating word templates are used, stochastic modeling allows a probabilistic and automatic training for estimating models. This paper deals with the improvement of stochastic techniques, especially for a better representation of time varying phenomena.

  12. Sample Size Determination Within the Scope of Conditional Maximum Likelihood Estimation with Special Focus on Testing the Rasch Model.

    Science.gov (United States)

    Draxler, Clemens; Alexandrowicz, Rainer W

    2015-12-01

    This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.

  13. Likelihood analysis of species occurrence probability from presence-only data for modelling species distributions

    Science.gov (United States)

    Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.

    2012-01-01

    1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities

  14. The Use of Dynamic Stochastic Social Behavior Models to Produce Likelihood Functions for Risk Modeling of Proliferation and Terrorist Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.; Whitney, Paul D.; Coles, Garill A.; Henderson, Cindy L.; Wolf, Katherine E.; Hoopes, Bonnie L.

    2008-12-01

    The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies – a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.

  15. A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models.

    Science.gov (United States)

    Zimmer, Christoph; Yaesoubi, Reza; Cohen, Ted

    2017-01-01

    Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics.

  16. A Likelihood Approach for Real-Time Calibration of Stochastic Compartmental Epidemic Models

    Science.gov (United States)

    Zimmer, Christoph; Cohen, Ted

    2017-01-01

    Stochastic transmission dynamic models are especially useful for studying the early emergence of novel pathogens given the importance of chance events when the number of infectious individuals is small. However, methods for parameter estimation and prediction for these types of stochastic models remain limited. In this manuscript, we describe a calibration and prediction framework for stochastic compartmental transmission models of epidemics. The proposed method, Multiple Shooting for Stochastic systems (MSS), applies a linear noise approximation to describe the size of the fluctuations, and uses each new surveillance observation to update the belief about the true epidemic state. Using simulated outbreaks of a novel viral pathogen, we evaluate the accuracy of MSS for real-time parameter estimation and prediction during epidemics. We assume that weekly counts for the number of new diagnosed cases are available and serve as an imperfect proxy of incidence. We show that MSS produces accurate estimates of key epidemic parameters (i.e. mean duration of infectiousness, R0, and Reff) and can provide an accurate estimate of the unobserved number of infectious individuals during the course of an epidemic. MSS also allows for accurate prediction of the number and timing of future hospitalizations and the overall attack rate. We compare the performance of MSS to three state-of-the-art benchmark methods: 1) a likelihood approximation with an assumption of independent Poisson observations; 2) a particle filtering method; and 3) an ensemble Kalman filter method. We find that MSS significantly outperforms each of these three benchmark methods in the majority of epidemic scenarios tested. In summary, MSS is a promising method that may improve on current approaches for calibration and prediction using stochastic models of epidemics. PMID:28095403

  17. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    Science.gov (United States)

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  18. Automatic modeling of the linguistic values for database fuzzy querying

    Directory of Open Access Journals (Sweden)

    Diana STEFANESCU

    2007-12-01

    Full Text Available In order to evaluate vague queries, each linguistic term is considered according to its fuzzy model. Usually, the linguistic terms are defined as fuzzy sets during a classical, offline knowledge acquisition process. But they can also be automatically extracted from the actual content of the database by an online process. In at least two situations, automatically modeling the linguistic values would be very useful: first, to simplify the knowledge engineer's task by extracting the definitions from the database content; and second, where mandatory, to dynamically define the linguistic values in the evaluation of complex-criteria queries. Procedures to automatically extract the fuzzy model of the linguistic values from the existing data are presented in this paper.

  19. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1992-01-01

    In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts.
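
    For orientation, classical iterative proportional fitting on a two-way table is sketched below (fitting cell estimates to given marginal sums); this is the textbook IPF, not the modified minimal-sufficient-statistics version described in the record.

        import numpy as np

        def ipf(row_margins, col_margins, n_iter=100, tol=1e-10):
            """Fit cell estimates matching given row and column marginal sums."""
            row = np.asarray(row_margins, float)
            col = np.asarray(col_margins, float)
            m = np.ones((len(row), len(col)))          # start from a uniform table
            for _ in range(n_iter):
                m *= (row / m.sum(axis=1))[:, None]    # rescale to match row sums
                m *= (col / m.sum(axis=0))[None, :]    # rescale to match column sums
                if np.allclose(m.sum(axis=1), row, atol=tol):
                    break
            return m

        print(ipf([30, 70], [40, 60]))   # converges to the independence fit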

  20. Evaluating score- and feature-based likelihood ratio models for multivariate continuous data: applied to forensic MDMA comparison

    NARCIS (Netherlands)

    A. Bolck; H. Ni; M. Lopatka

    2015-01-01

    Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are employed.

  1. An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search

    NARCIS (Netherlands)

    Bergboer, N.H; Verdult, V.; Verhaegen, M.H.G.

    2002-01-01

    We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting parameter space.

  2. Efficient Full Information Maximum Likelihood Estimation for Multidimensional IRT Models. Research Report. ETS RR-09-03

    Science.gov (United States)

    Rijmen, Frank

    2009-01-01

    Maximum marginal likelihood estimation of multidimensional item response theory (IRT) models has been hampered by the calculation of the multidimensional integral over the ability distribution. However, the researcher often has a specific hypothesis about the conditional (in)dependence relations among the latent variables. Exploiting these…

  3. A Computer Program for Solving a Set of Conditional Maximum Likelihood Equations Arising in the Rasch Model for Questionnaires.

    Science.gov (United States)

    Andersen, Erling B.

    A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…

  4. A Microeconomic Interpretation of the Maximum Entropy Estimator of Multinomial Logit Models and Its Equivalence to the Maximum Likelihood Estimator

    Directory of Open Access Journals (Sweden)

    Louis de Grange

    2010-09-01

    Full Text Available Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing the behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
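
    The equivalence noted above can be sketched in a few lines (illustrative notation with a single linear cost constraint, not the paper's full derivation). The entropy maximization problem is

        \max_{p}\; -\sum_{i} p_i \ln p_i
        \quad \text{s.t.} \quad \sum_{i} p_i = 1, \qquad \sum_{i} p_i c_i = \bar{c},

    and stationarity of its Lagrangian gives

        p_i = \frac{\exp(-\lambda c_i)}{\sum_{j} \exp(-\lambda c_j)},

    i.e. a multinomial logit whose scale parameter λ is exactly what maximum likelihood estimation of the logit model recovers.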

  5. Using suggestion to model different types of automatic writing.

    Science.gov (United States)

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q

    2014-05-01

    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled.

  6. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Directory of Open Access Journals (Sweden)

    Marcos Economides

    2015-09-01

    Full Text Available Model-based and model-free reinforcement learning (RL have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  7. Towards Automatic Processing of Virtual City Models for Simulations

    Science.gov (United States)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have been associated with an extremely high manual, and therefore uneconomical, effort for the processing of models. Using different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increases the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.

  8. Automatic Building Information Model Query Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges in data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to those challenges, as it can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking full advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. Through a case study on multi-zone air flow analysis, we show how our approach and tool can help domain experts use BIM to drive building design with less labour and lower overhead cost.

  9. Small-Scale Helicopter Automatic Autorotation: Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective consists in developing a model-based, automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter to a pre-specified ground location. In pursuit o

  10. Automatic 3D modeling of the urban landscape

    NARCIS (Netherlands)

    I. Esteban; J. Dijk; F. Groen

    2010-01-01

    In this paper we present a fully automatic system for building 3D models of urban areas at the street level. We propose a novel approach for the accurate estimation of the scale consistent camera pose given two previous images. We employ a new method for global optimization and use a novel sampling

  12. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  13. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    …insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used…

  14. Geometric model of robotic arc welding for automatic programming

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Geometric information is important for automatic programming of arc welding robots. Complete geometric models of robotic arc welding are established in this paper. In the geometric model of the weld seam, an equation with seam length as its parameter is introduced to represent any weld seam, and a method to determine discrete programming points on a weld seam is presented. In the geometric model of the weld workpiece, three classes of primitives and a CSG tree are used to describe the workpiece; a detailed data structure is presented. For the pose transformation of the torch, the world frame, torch frame and active frame are defined, and the transformations between these frames are presented. Based on these geometric models, an automatic programming software package for robotic arc welding, RAWCAD, is developed. Experiments show that the geometric models are practical and reliable.

  15. Time series modeling for automatic target recognition

    Science.gov (United States)

    Sokolnikov, Andre

    2012-05-01

    Time series modeling is proposed for identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The library data (of images, etc.) serving as a basis for identification provide the deterministic part of the identification process, while partial image features, distorted parts, irrelevant pieces and the absence of particular features comprise the stochastic part of target identification. A missing-data approach is elaborated that aids prediction for image creation or reconstruction. The results are provided.

  16. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2004-01-01

    Full Text Available Abstract A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.
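
    As a minimal illustration of the ranking rule described above, the sketch below computes the conditional (posterior) probability that an observation belongs to a putative uninfected component of a two-component Gaussian mixture. It is not the paper's Monte Carlo EM procedure: the mixture weight, means and standard deviations are hypothetical values standing in for the (restricted) maximum likelihood estimates.

```python
import numpy as np
from scipy.stats import norm

def membership_prob_uninfected(y, w, mu0, sd0, mu1, sd1):
    """Posterior probability that observation y (e.g. log somatic cell count)
    belongs to the 'uninfected' mixture component.

    w        : prior weight of the uninfected component
    mu0, sd0 : mean / sd of the uninfected component
    mu1, sd1 : mean / sd of the infected component
    """
    p0 = w * norm.pdf(y, mu0, sd0)
    p1 = (1.0 - w) * norm.pdf(y, mu1, sd1)
    return p0 / (p0 + p1)

# Hypothetical parameter values; in the paper they would come from
# (restricted) maximum likelihood via the Monte Carlo EM algorithm.
scores = membership_prob_uninfected(np.array([4.2, 5.8, 7.1]),
                                    w=0.7, mu0=4.0, sd0=0.8, mu1=6.5, sd1=1.0)
print(scores)  # animals can then be ranked on these probabilities
```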

  17. Towards automatic calibration of 2-dimensional flood propagation models

    Directory of Open Access Journals (Sweden)

    P. Fabio

    2009-11-01

    Full Text Available Hydraulic models for flood propagation description are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays there are many models of different complexity regarding the mathematical foundation and spatial dimensions available, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models like e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are only rarely monitored, owing to the disturbances they inflict and the lack of appropriate measuring equipment in place. Secondly, the two-dimensional models in particular are computationally very demanding, so the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well documented flood event in August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which enables automatic calibration, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by too many degrees of freedom in the calibration data in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.
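
    As a rough sketch of the model-independent calibration idea, the snippet below wraps a hypothetical stand-in for a hydraulic model run in a least-squares objective and minimises it with scipy. PEST itself works differently (finite-difference Jacobians, parallel model runs), and the "model", the observations and the roughness parameter here are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for a 2-D hydraulic model run: water depth at
# observation points as a function of a single roughness parameter n.
def run_flood_model(n, discharge):
    return 0.4 * discharge**0.6 * n**0.5   # toy rating-type relation

observed_depth = np.array([1.10, 1.35, 1.62])   # synthetic "event" data
discharge      = np.array([180., 260., 350.])

def objective(theta):
    n = theta[0]
    residuals = observed_depth - run_flood_model(n, discharge)
    return np.sum(residuals**2)             # sum of squared residuals

result = minimize(objective, x0=[0.05], bounds=[(0.01, 0.2)])
print("calibrated roughness parameter:", result.x[0])
```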

  18. Implications of the Regional Earthquake Likelihood Models test of earthquake forecasts in California

    Directory of Open Access Journals (Sweden)

    Michael Karl Sachs

    2012-09-01

    Full Text Available The Regional Earthquake Likelihood Models (RELM) test was the first competitive comparison of prospective earthquake forecasts. The test was carried out over 5 years, from 1 January 2006 to 31 December 2010, over a region that included all of California. The test area was divided into 7682 0.1°x0.1° spatial cells. Each submitted forecast gave the predicted numbers of earthquakes N_mi larger than M=4.95 in 0.1 magnitude bins m for each cell i. In this paper we present a method that separates the forecast of the number of test earthquakes from the forecast of their locations. We first obtain the number N_m of forecast earthquakes in magnitude bin m. We then determine the conditional probability λ_mi = N_mi/N_m that an earthquake in magnitude bin m will occur in cell i. The summation of λ_mi over all 7682 cells is unity. A random (no skill) forecast gives equal values of λ_mi for all spatial cells and magnitude bins. The skill of a forecast, in terms of the location of the earthquakes, is measured by the success in assigning large values of λ_mi to the cells in which earthquakes occur and low values of λ_mi to the cells where earthquakes do not occur. Thirty-one test earthquakes occurred in 27 different combinations of spatial cells i and magnitude bins m, and for each of these mi cells we identify the forecast with the highest value of λ_mi. We evaluate the performance of eleven submitted forecasts in two ways. First, we determine the number of mi cells for which a forecast's λ_mi was the largest; the best forecast is the one with the highest number. Second, we determine the mean value of λ_mi over the 27 mi cells for each forecast; the best forecast has the highest mean value of λ_mi. The success of a forecast during the test period depends on the allocation of the probabilities λ_mi among the mi cells, since their sum over all cells is unity. We illustrate the forecast distributions of λ_mi and discuss their differences. We conclude that the RELM test was successful in…
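
    The conditional-probability construction and the two skill measures described in this abstract are simple enough to sketch directly. The snippet below uses randomly generated forecasts and made-up observed (m, i) cells purely to show the bookkeeping; it is not based on the actual RELM submissions or catalogue.

```python
import numpy as np

# forecasts[name] is a 2-D array of expected event numbers N_mi over
# (magnitude bin m, cell i); values here are illustrative only.
rng = np.random.default_rng(0)
forecasts = {f"model{k}": rng.random((5, 100)) for k in range(3)}

# Convert each forecast to conditional probabilities lambda_mi = N_mi / N_m,
# which sum to one over cells within every magnitude bin.
lam = {name: N / N.sum(axis=1, keepdims=True) for name, N in forecasts.items()}

# (m, i) combinations in which test earthquakes occurred (illustrative).
observed_cells = [(0, 12), (0, 57), (2, 3), (4, 88)]

for name, L in lam.items():
    values = np.array([L[m, i] for m, i in observed_cells])
    # measure 1: in how many observed cells does this forecast have the
    # largest conditional probability among all forecasts?
    wins = sum(
        L[m, i] >= max(other[m, i] for other in lam.values())
        for m, i in observed_cells
    )
    # measure 2: mean conditional probability over the observed cells
    print(name, "wins:", wins, "mean lambda:", values.mean())
```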

  19. A maximum likelihood model for fitting power functions with data uncertainty: A case study on the relationship between body lengths and masses for Sciuridae species worldwide

    Directory of Open Access Journals (Sweden)

    Youhua Chen

    2016-09-01

    Full Text Available In this report, a maximum likelihood model is developed to incorporate data uncertainty in response and explanatory variables when fitting power-law bivariate relationships in ecology and evolution. This simple likelihood model is applied to an empirical data set related to the allometric relationship between body mass and length of Sciuridae species worldwide. The results show that the values of parameters estimated by the proposed likelihood model are substantially different from those fitted by the nonlinear least-of-squares (NLOS) method. Accordingly, the power-law models fitted by the two methods have different curvilinear shapes. These discrepancies are caused by the integration of measurement errors in the proposed likelihood model, which the NLOS method fails to do. Because the current likelihood model and the NLOS method can show different results, the inclusion of measurement errors may offer new insights into the interpretation of scaling or power laws in ecology and evolution.
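
    A hedged sketch of the contrast drawn above: a likelihood that folds a reported measurement variance for the response into the residual variance, compared with an ordinary nonlinear least-squares fit. The paper's model also treats uncertainty in the explanatory variable, which this simplified version omits; all data below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize, curve_fit

rng = np.random.default_rng(1)
x = rng.uniform(10, 40, 60)                  # body length (arbitrary units)
s = rng.uniform(0.5, 2.0, 60)                # reported measurement SD of y
y = 0.05 * x**2.3 + rng.normal(0, s) + rng.normal(0, 3, 60)

def negloglik(theta):
    a, b, log_sigma = theta
    var = np.exp(2 * log_sigma) + s**2       # residual + measurement variance
    r = y - a * x**b
    return 0.5 * np.sum(np.log(2 * np.pi * var) + r**2 / var)

# Maximum likelihood fit (accounts for measurement error in y).
ml = minimize(negloglik, x0=[0.1, 2.0, np.log(3.0)], method="Nelder-Mead")
# Plain nonlinear least-squares fit for comparison.
nls, _ = curve_fit(lambda x, a, b: a * x**b, x, y, p0=[0.1, 2.0])
print("ML  (a, b):", ml.x[:2])
print("NLS (a, b):", nls)
```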

  20. Automatic balancing of 3D models

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Schmidt, Ryan; Bærentzen, Jakob Andreas

    2014-01-01

    3D printing technologies allow for more diverse shapes than are possible with molds and the cost of making just one single object is negligible compared to traditional production methods. However, not all shapes are suitable for 3D print. One of the remaining costs is therefore human time spent… …in these cases, we will apply a rotation of the object which only deforms the shape a little near the base. No user input is required, but it is possible to specify manufacturing constraints related to specific 3D print technologies. Several models have successfully been balanced and printed using both polyjet…

  1. Asymptotic Properties of Induced Maximum Likelihood Estimates of Nonlinear Models for Item Response Variables: The Finite-Generic-Item-Pool Case.

    Science.gov (United States)

    Jones, Douglas H.

    The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…

  2. Model-Based Iterative Reconstruction for Dual-Energy X-Ray CT Using a Joint Quadratic Likelihood Model.

    Science.gov (United States)

    Zhang, Ruoqiao; Thibault, Jean-Baptiste; Bouman, Charles A; Sauer, Ken D; Hsieh, Jiang

    2014-01-01

    Dual-energy X-ray CT (DECT) has the potential to improve contrast and reduce artifacts as compared to traditional CT. Moreover, by applying model-based iterative reconstruction (MBIR) to dual-energy data, one might also expect to reduce noise and improve resolution. However, the direct implementation of dual-energy MBIR requires the use of a nonlinear forward model, which increases both complexity and computation. Alternatively, simplified forward models have been used which treat the material-decomposed channels separately, but these approaches do not fully account for the statistical dependencies in the channels. In this paper, we present a method for joint dual-energy MBIR (JDE-MBIR), which simplifies the forward model while still accounting for the complete statistical dependency in the material-decomposed sinogram components. The JDE-MBIR approach works by using a quadratic approximation to the polychromatic log-likelihood and a simple but exact nonnegativity constraint in the image domain. We demonstrate that our method is particularly effective when the DECT system uses fast kVp switching, since in this case the model accounts for the inaccuracy of interpolated sinogram entries. Both phantom and clinical results show that the proposed model produces images that compare favorably in quality to previous decomposition-based methods, including FBP and other statistical iterative approaches.

  3. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    Science.gov (United States)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and was developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  4. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    Science.gov (United States)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
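
    The bootstrap step mentioned above can be sketched generically: resample the extracted data with replacement, repeat the inversion, and collect the resulting parameter distributions. The "inversion" below is a hypothetical placeholder, not the paper's cone-fitting routine, and the point cloud is synthetic.

```python
import numpy as np

def invert_cone(points):
    """Hypothetical stand-in for the cone inversion routine: it returns the
    centroid direction and angular spread of the extracted CME 'mass' points
    as (latitude, longitude, half_angle)."""
    lat, lon = points[:, 0], points[:, 1]
    half_angle = np.hypot(lat - lat.mean(), lon - lon.mean()).mean()
    return lat.mean(), lon.mean(), half_angle

rng = np.random.default_rng(7)
points = rng.normal([5.0, -20.0], [8.0, 8.0], size=(500, 2))  # synthetic data

# Bootstrap: resample the extracted points and repeat the inversion to get
# an empirical distribution for each cone parameter.
boot = np.array([
    invert_cone(points[rng.integers(0, len(points), len(points))])
    for _ in range(1000)
])
print("lat/lon/half-angle means :", boot.mean(axis=0))
print("lat/lon/half-angle 95% CI:", np.percentile(boot, [2.5, 97.5], axis=0))
```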

  5. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  6. Strong Convergence Rates of Maximum Quasi-likelihood Estimation in the Quasi-likelihood Nonlinear Model

    Institute of Scientific and Technical Information of China (English)

    张戈

    2015-01-01

    We study the issue raised in Reference [3]. Under appropriate assumptions and other smoothness conditions, and using a simpler method, we prove the asymptotic existence of solutions to the quasi-likelihood equations in the quasi-likelihood nonlinear model, and derive the rate at which these solutions converge to the true parameter value.

  7. Automatic paper sliceform design from 3D solid models.

    Science.gov (United States)

    Le-Nguyen, Tuong-Vu; Low, Kok-Lim; Ruiz, Conrado; Le, Sang N

    2013-11-01

    A paper sliceform or lattice-style pop-up is a form of papercraft that uses two sets of parallel paper patches slotted together to make a foldable structure. The structure can be folded flat, as well as fully opened (popped-up) to make the two sets of patches orthogonal to each other. Automatic design of paper sliceforms is still not supported by existing computational models and remains a challenge. We propose novel geometric formulations of valid paper sliceform designs that consider the stability, flat-foldability and physical realizability of the designs. Based on a set of sufficient construction conditions, we also present an automatic algorithm for generating valid sliceform designs that closely depict the given 3D solid models. By approximating the input models using a set of generalized cylinders, our method significantly reduces the search space for stable and flat-foldable sliceforms. To ensure the physical realizability of the designs, the algorithm automatically generates slots or slits on the patches such that no two cycles embedded in two different patches are interlocking each other. This guarantees local pairwise assemblability between patches, which is empirically shown to lead to global assemblability. Our method has been demonstrated on a number of example models, and the output designs have been successfully made into real paper sliceforms.

  8. Automatic computational models of acoustical category features: Talking versus singing

    Science.gov (United States)

    Gerhard, David

    2003-10-01

    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: the Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
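
    A minimal sketch of the two ingredients named above, assuming a single synthetic 1-D feature: the two-sample Kolmogorov-Smirnov statistic as a feature-separation measure, and a classifier that compares class-conditional kernel density estimates (a simple stand-in for the paper's comparative probability density estimation).

```python
import numpy as np
from scipy.stats import ks_2samp, gaussian_kde

rng = np.random.default_rng(3)
# Synthetic 1-D feature values (e.g. a pitch-stability measure) for the two
# destination classes; real features would come from labelled audio events.
talking = rng.normal(0.2, 0.10, 300)
singing = rng.normal(0.6, 0.15, 300)

# Feature separation: two-sample Kolmogorov-Smirnov statistic.
ks_result = ks_2samp(talking, singing)
print("KS distance:", ks_result.statistic)

# Comparative probability-density classifier: label by the larger
# class-conditional kernel density estimate.
kde_talk, kde_sing = gaussian_kde(talking), gaussian_kde(singing)

def classify(x):
    return np.where(kde_talk(x) >= kde_sing(x), "talking", "singing")

test = np.array([0.15, 0.45, 0.7])
print(classify(test))
```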

  9. A model for automatic identification of human pulse signals

    Institute of Scientific and Technical Information of China (English)

    Hui-yan WANG; Pei-yong ZHANG

    2008-01-01

    This paper presents a quantitative method for automatic identification of human pulse signals. The idea is to start with the extraction of characteristic parameters and then to construct the recognition model based on Bayesian networks. To identify depth, frequency and rhythm, several parameters are proposed. To distinguish the strength and shape, which cannot be represented by one or several parameters and are hard to recognize, the main time-domain feature parameters are computed based on the feature points of the pulse signal. Then the extracted parameters are taken as the input and five models for automatic pulse signal identification are constructed based on Bayesian networks. Experimental results demonstrate that the method is feasible and effective in recognizing depth, frequency, rhythm, strength and shape of pulse signals, which can be expected to facilitate the modernization of pulse diagnosis.
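
    As a simplified stand-in for one of the Bayesian-network models described above, the sketch below classifies a pulse record from a couple of extracted time-domain parameters using a naive Bayes rule with Gaussian class-conditional densities. The feature values and class labels are hypothetical and only illustrate the parameter-extraction-then-probabilistic-classification idea.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical training data: rows = pulse records, columns = extracted
# time-domain parameters (e.g. main-peak amplitude, relative pulse width).
X = np.array([[0.90, 0.30], [0.80, 0.35], [0.30, 0.60], [0.25, 0.55]])
labels = np.array(["floating", "floating", "deep", "deep"])   # depth classes

def fit_naive_bayes(X, labels):
    model = {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        # per-class feature means, stds (with a small floor) and class prior
        model[c] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-6, len(Xc) / len(X))
    return model

def predict(model, x):
    scores = {c: np.log(prior) + norm.logpdf(x, mu, sd).sum()
              for c, (mu, sd, prior) in model.items()}
    return max(scores, key=scores.get)

model = fit_naive_bayes(X, labels)
print(predict(model, np.array([0.85, 0.32])))   # -> "floating"
```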

  10. Global self-weighted and local quasi-maximum exponential likelihood estimators for ARMA--GARCH/IGARCH models

    CERN Document Server

    Zhu, Ke; 10.1214/11-AOS895

    2012-01-01

    This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimators (QMELE) for ARMA--GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is showed to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.

  11. Modelling of risk events with uncertain likelihoods and impacts in large infrastructure projects

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2010-01-01

    This paper presents contributions to the mathematical core of risk and uncertainty management in compliance with the principles of New Budgeting laid out in 2008 by the Danish Ministry of Transport to be used in large infrastructure projects. Basically, the new principles are proposed in order to prevent future budget overruns. One of the central ideas is to introduce improved risk management processes, and the present paper addresses this particular issue. A relevant cost function in terms of unit prices and quantities is developed, and an event impact matrix with uncertain impacts from independent uncertain risk events is used to calculate the total uncertain risk budget. Cost impacts from the individual risk events on the individual project activities are tracked precisely in order to comply with the requirements of New Budgeting. Additionally, uncertain likelihoods for the occurrence of risk…

  12. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we propose a GIS- and CG-integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we propose a useful polygon expression for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  13. Aircraft automatic flight control system with model inversion

    Science.gov (United States)

    Smith, G. A.; Meyer, George

    1990-01-01

    A simulator study was conducted to verify the advantages of a Newton-Raphson model-inversion technique as a design basis for an automatic trajectory control system in an aircraft with highly nonlinear characteristics. The simulation employed a detailed mathematical model of the aerodynamic and propulsion system performance characteristics of a vertical-attitude takeoff and landing tactical aircraft. The results obtained confirm satisfactory control system performance over a large portion of the flight envelope. System response to wind gusts was satisfactory for various plausible combinations of wind magnitude and direction.
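
    The core operation of a model-inversion controller, finding the input that drives a nonlinear model output to a commanded value, can be illustrated with a scalar Newton-Raphson iteration. The "plant" below is a hypothetical nonlinear map, not the vertical-attitude aircraft model used in the study.

```python
import numpy as np

def plant(u):
    """Hypothetical nonlinear input-output map (e.g. a thrust-like response)."""
    return 4.0 * np.tanh(0.5 * u) + 0.3 * u

def invert(y_cmd, u0=0.0, tol=1e-8, max_iter=50):
    """Newton-Raphson inversion: find u such that plant(u) == y_cmd."""
    u = u0
    for _ in range(max_iter):
        f = plant(u) - y_cmd
        # analytic derivative of the plant model with respect to u
        df = 2.0 / np.cosh(0.5 * u) ** 2 + 0.3
        u_new = u - f / df
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u

u_star = invert(y_cmd=3.2)
print(u_star, plant(u_star))   # the plant output matches the command
```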

  14. A New Model for Automatic Raster-to-Vector Conversion

    Directory of Open Access Journals (Sweden)

    Hesham E. ElDeeb

    2011-06-01

    Full Text Available There is a growing need for automatic digitizing, or so-called automated raster-to-vector conversion (ARVC), for maps. The benefit of ARVC is the production of maps that consume less space and are easy to search or retrieve information from. In addition, ARVC is the fundamental step to reusing old maps at a higher level of recognition. In this paper, a new model for ARVC is developed. The proposed model converts "paper maps" into electronic formats for Geographic Information Systems (GIS) and evaluates the performance of the conversion process. To overcome the limitations of existing commercial vectorization software packages, the proposed model is customized to separate textual information, usually the cause of problems in the automatic conversion process, from the delimiting graphics of the map. The model retains the coordinates of the textual information for a later merge with the map after the conversion process. The proposed model also addresses the localization problems in ARVC through a knowledge-supported intelligent vectorization system that is designed specifically to improve the accuracy and speed of the vectorization process. Finally, the model has been implemented on a symmetric multiprocessing (SMP) architecture, in order to achieve higher speedup and performance.

  15. Variable-mass Thermodynamics Calculation Model for Gas-operated Automatic Weapon

    Institute of Scientific and Technical Information of China (English)

    陈建彬; 吕小强

    2011-01-01

    Because energy and mass exchange occurs between the barrel and the gas-operated device of an automatic weapon, a new variable-mass thermodynamics model is built to describe the interior ballistics and the dynamic characteristics of the gas-operated device accurately. The model is used to calculate the automatic mechanism velocity of a particular automatic weapon; the calculated results agree well with the experimental results, which validates the model. The influences of structural parameters on the dynamic characteristics of the gas-operated device are discussed. The results show that the model is valuable for the design and accurate performance prediction of gas-operated automatic weapons.

  16. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
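
    The simple-random-sampling idea described above can be sketched for a single susceptible individual: compare the infection probability computed from all infectious individuals with the one obtained from a scaled random subsample. The distance kernel, parameters and positions below are invented for illustration and do not correspond to the paper's fitted ILMs.

```python
import numpy as np

rng = np.random.default_rng(5)
susceptible = np.array([0.0, 0.0])
infectious = rng.uniform(-10, 10, size=(5000, 2))   # positions of infectives

def kernel(d, beta=1.0, power=2.0):
    return beta * (d + 1.0) ** (-power)              # distance-based kernel

d = np.linalg.norm(infectious - susceptible, axis=1)

# Exact infectious pressure and infection probability for one time step.
pressure_full = kernel(d).sum()
p_full = 1.0 - np.exp(-pressure_full)

# Sampling-based approximation: use a simple random sample of infectives
# and scale up, avoiding the full sum over every infectious individual.
m = 200
idx = rng.choice(len(infectious), size=m, replace=False)
pressure_approx = kernel(d[idx]).sum() * len(infectious) / m
p_approx = 1.0 - np.exp(-pressure_approx)

print(p_full, p_approx)
```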

  17. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Science.gov (United States)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  18. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized systems more general and easier, in this paper a new and intuitive language, PSL (Parameterized-system Specification Language), is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem in traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on several cases. The experimental results show that the method is satisfactory.

  19. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.

  20. Automatic identification of model reductions for discrete stochastic simulation

    Science.gov (United States)

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda

    2012-07-01

    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.

  1. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html

  2. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  3. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  4. Modeling of a Multiple Digital Automatic Gain Control System

    Institute of Scientific and Technical Information of China (English)

    WANG Jingdian; LU Xiuhong; ZHANG Li

    2008-01-01

    Automatic gain control (AGC) has been used in many applications. The key features of AGC, including a steady-state output and static/dynamic timing response, depend mainly on key parameters such as the reference and the filter coefficients. A simple model developed to describe AGC systems based on several simple assumptions shows that AGC always converges to the reference and that the timing constant depends on the filter coefficients. Measures are given to prevent oscillations and limit-cycle effects. The simple AGC model is then extended to a multiple AGC system for a TV tuner in a much more efficient form. Simulations using the C language are 16 times faster than those with MATLAB, and 10 times faster than those with a mixed register transfer level (RTL)-simulation program with integrated circuit emphasis (SPICE) model.
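
    A minimal digital AGC loop, assuming a constant-envelope input and a first-order loop filter, illustrates the two properties stated above: the output level converges to the reference, and the loop coefficient sets the time constant. This is a toy sketch, not the multiple-AGC TV-tuner model of the paper.

```python
import numpy as np

def agc(x, reference=1.0, mu=0.05):
    """Minimal digital AGC: adapt a gain so the output magnitude tracks the
    reference. mu is the loop-filter coefficient (sets the time constant)."""
    gain = 1.0
    y = np.empty_like(x)
    for n, sample in enumerate(x):
        y[n] = gain * sample
        error = reference - abs(y[n])
        gain += mu * error          # first-order loop filter
    return y

rng = np.random.default_rng(2)
x = 0.2 * rng.choice([-1.0, 1.0], size=2000)   # weak constant-envelope input
y = agc(x)
print("output level early:", np.mean(np.abs(y[:50])))
print("output level late :", np.mean(np.abs(y[-50:])))   # close to reference
```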

  5. Model Considerations for Memory-based Automatic Music Transcription

    Science.gov (United States)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
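
    The observation model described above, a recording as a non-negative weighted superposition of known library sounds, can be sketched with non-negative least squares as a stand-in for the Bayesian estimators with prior pdfs discussed in the paper. The library templates and recording below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_bins, n_sounds = 64, 10
library = np.abs(rng.normal(size=(n_bins, n_sounds)))   # known sound templates

true_w = np.zeros(n_sounds)
true_w[[2, 7]] = [0.8, 0.5]                    # two sounds actually present
recording = library @ true_w + 0.01 * rng.normal(size=n_bins)

# Estimate the unknown non-negative weights; a Bayesian treatment would
# instead place a sparsity-favouring prior pdf on w and compute a posterior.
w_hat, _ = nnls(library, recording)
print(np.round(w_hat, 2))
```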

  6. Automatic Modelling of Photograhed Parts in CATIA CAD Environment

    Directory of Open Access Journals (Sweden)

    Yunus Kayır

    2014-04-01

    Full Text Available In this study, a system was developed that can model parts automatically in the CATIA CAD program by using photographic images of the parts. The system, called ImageCAD, can use any kind of photograph taken of prismatic and cylindrical parts. It can recognize geometric entities, such as lines, circles, arcs and free curves, in the image according to the selection of the user. ImageCAD saves the generated knowledge of these entities in a format suitable for the CATIA program. ImageCAD is controlled through menus built into the CATIA interface and turns the selected photographs into 3D CAD models. The obtained CAD models have a structure suitable for all CATIA applications. The Visual Basic programming language was chosen to implement the system.

  7. Fast Automatic Precision Tree Models from Terrestrial Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    Mathias Disney

    2013-01-01

    Full Text Available This paper presents a new method for constructing quickly and automatically precision tree models from point clouds of the trunk and branches obtained by terrestrial laser scanning. The input of the method is a point cloud of a single tree scanned from multiple positions. The surface of the visible parts of the tree is robustly reconstructed by making a flexible cylinder model of the tree. The thorough quantitative model records also the topological branching structure. In this paper, every major step of the whole model reconstruction process, from the input to the finished model, is presented in detail. The model is constructed by a local approach in which the point cloud is covered with small sets corresponding to connected surface patches in the tree surface. The neighbor-relations and geometrical properties of these cover sets are used to reconstruct the details of the tree and, step by step, the whole tree. The point cloud and the sets are segmented into branches, after which the branches are modeled as collections of cylinders. From the model, the branching structure and size properties, such as volume and branch size distributions, for the whole tree or some of its parts, can be approximated. The approach is validated using both measured and modeled terrestrial laser scanner data from real trees and detailed 3D models. The results show that the method allows an easy extraction of various tree attributes from terrestrial or mobile laser scanning point clouds.

  8. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland); Seppaenen, M. [North-Carelian Power Co (Finland); Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined, in order to find the actual fault point, with the data obtained from the fault indicators in the line branching points. As a third technique, in the absence of better fault location data, statistical information on line section fault frequencies can also be used. For combining the different fault location information, fuzzy logic is used. As a result, the probability weights for the fault being located in different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. Then the supply is restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter lists the practical experiences during the test use period. Also the benefits of this kind of automation are assessed and future developments are outlined.
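
    A hedged sketch of the evidence-combination step: per-section membership values from the three information sources (fault-current distance estimate, fault indicators, statistical fault frequencies) are combined and normalised into probability weights. The combination rule and the numbers below are illustrative stand-ins, not the actual fuzzy-logic rules of the FI/FL-model.

```python
import numpy as np

sections = ["S1", "S2", "S3", "S4"]

# Membership values in [0, 1] for "the fault is in this section", from the
# three evidence sources described above (values are illustrative only).
from_fault_current = np.array([0.1, 0.7, 0.6, 0.2])   # distance estimate
from_indicators    = np.array([0.2, 0.9, 0.3, 0.1])   # fault passage indicators
from_fault_history = np.array([0.3, 0.4, 0.4, 0.5])   # statistical fault rates

# A simple conjunctive combination, normalised to probability weights.
combined = from_fault_current * from_indicators * from_fault_history
weights = combined / combined.sum()

for name, w in zip(sections, weights):
    print(f"{name}: {w:.2f}")
print("isolate first:", sections[int(np.argmax(weights))])
```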

  9. A Maximum Likelihood Approach for Multisample Nonlinear Structural Equation Models with Missing Continuous and Dichotomous Data

    Science.gov (United States)

    Song, Xin-Yuan; Lee, Sik-Yum

    2006-01-01

    Structural equation models are widely appreciated in social-psychological research and other behavioral research to model relations between latent constructs and manifest variables and to control for measurement error. Most applications of SEMs are based on fully observed continuous normal data and models with a linear structural equation.…

  10. A corrected likelihood approach for the nonlinear transformation model with application to fluorescence lifetime measurements using exponential mixtures.

    Science.gov (United States)

    Rebafka, Tabea; Roueff, François; Souloumiac, Antoine

    2010-01-01

    A fast and efficient estimation method is proposed that compensates the distortion in nonlinear transformation models. A likelihood-based estimator is developed that can be computed by an EM-type algorithm. The consistency of the estimator is shown and its limit distribution is provided. The new estimator is particularly well suited for fluorescence lifetime measurements, where only the shortest arrival time of a random number of emitted fluorescence photons can be detected and where arrival times are often modeled by a mixture of exponential distributions. The method is evaluated on real and synthetic data. Compared to currently used methods in fluorescence, the new estimator should allow a reduction of the acquisition time of an order of magnitude.

  11. The Work Ratio--modeling the likelihood of return to work for workers with musculoskeletal disorders: A fuzzy logic approach.

    Science.gov (United States)

    Apalit, Nathan

    2010-01-01

    The world of musculoskeletal disorders (MSDs) is complicated and fuzzy. Fuzzy logic provides a precise framework for complex problems characterized by uncertainty, vagueness and imprecision. Although fuzzy logic would appear to be an ideal modeling language to help address the complexity of MSDs, little research has been done in this regard. The Work Ratio is a novel mathematical model that uses fuzzy logic to provide a numerical and linguistic valuation of the likelihood of return to work and remaining at work. It can be used for a worker with any MSD at any point in time. Basic mathematical concepts from set theory and fuzzy logic are reviewed. A case study is then used to illustrate the use of the Work Ratio. Its potential strengths and limitations are discussed. Further research of its use with a variety of MSDs, settings and multidisciplinary teams is needed to confirm its universal value.

  12. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and the heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short…

  13. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades, but advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts, whose characteristics are used for classification and graph-based representation of the sheet-metal features in order to extract the features embodied in a sheet-metal part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometric and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  15. An automatic invisible axion in the SUSY preon model

    Science.gov (United States)

    Babu, K. S.; Choi, Kiwoon; Pati, J. C.; Zhang, X.

    1994-08-01

    It is shown that the recently proposed preon model which provides a unified origin of the diverse mass scales and an explanation of family replication as well as of inter-family mass-hierarchy, naturally possesses a Peccei-Quinn (PQ) symmetry whose spontaneous breaking leads to an automatic invisible axion. Existence of the PQ-symmetry is simply a consequence of supersymmetry and the requirement of minimality in the field-content and interactions, which proposes that the lagrangian should possess only those terms which are dictated by the gauge principle and no others. In addition to the axion, the model also generates two superlight Goldstone bosons and their superpartners all of which are cosmologically safe.

  16. Automatic generation of matrix element derivatives for tight binding models

    Science.gov (United States)

    Elena, Alin M.; Meister, Matthias

    2005-10-01

    Tight binding (TB) models are one approach to the quantum mechanical many-particle problem. An important role in TB models is played by hopping and overlap matrix elements between the orbitals on two atoms, which of course depend on the relative positions of the atoms involved. This dependence can be expressed with the help of Slater-Koster parameters, which are usually taken from tables. Recently, a way to generate these tables automatically was published. If TB approaches are applied to simulations of the dynamics of a system, also derivatives of matrix elements can appear. In this work we give general expressions for first and second derivatives of such matrix elements. Implemented in a tight binding computer program, like, for instance, DINAMO, they obviate the need to type all the required derivatives of all occurring matrix elements by hand.

  17. Concepts of Information Content and Likelihood in Parameter Calibration for Hydrological Simulation Models

    Science.gov (United States)

    Beven, Keith; Smith, Paul

    2013-04-01

    There remains a great deal of uncertainty about uncertainty estimation in hydrological modelling. Given that hydrology is still a subject limited by the available measurement techniques, it does not appear that the issue of epistemic error in hydrological data will go away for the foreseeable future, and it may be necessary to find a way of allowing for robust model conditioning and more subjective treatments of potential epistemic errors in prediction. This paper attempts to analyse how the epistemic uncertainties inherent in the hydrological modelling process impact model conditioning, hypothesis testing and forecasting. We propose an assessment of the information in hydrological data used for calibration based upon hydrological reasoning. This is performed prior to the assessment of any of the proposed hydrological models. It can then inform the evaluation of competing models and the resulting prediction uncertainties. An illustration of how this information assessment might influence model conditioning is provided by an application: the rainfall-runoff modelling of a catchment in Northern England, where inconsistent data for some events can introduce disinformation into the model conditioning process. The construction of the resulting prediction uncertainties is also considered.

  18. On the Concepts of Information Content and Likelihood in Parameter Calibration for Hydrological Simulation Models (Invited)

    Science.gov (United States)

    Smith, P. J.; Beven, K.

    2013-12-01

    There remains a great deal of uncertainty about appropriate uncertainty estimation in hydrological modelling. Given that hydrology is still a subject limited by the available measurement techniques, and that we cannot go back in time to take better observations of the past, the issue of epistemic error in hydrological data will not go away in the foreseeable future. It is therefore necessary to find a way of allowing for robust model conditioning and more subjective treatments of potential epistemic errors in prediction. This paper offers an analysis of how the epistemic uncertainties inherent in the hydrological modelling process impact on model conditioning, hypothesis testing and forecasting. We propose an assessment of the information in hydrological data used for calibration, based upon hydrological reasoning and made prior to the assessment of any of the proposed hydrological models. This can then inform the evaluation of competing models and the resulting prediction uncertainties. An illustration of how this information assessment might influence model conditioning is provided by an application: the rainfall-runoff modelling of a catchment in Northern England, where inconsistent data for some events can introduce disinformation into the model conditioning process. The construction of the resulting prediction uncertainties is also considered.

  19. Complex DNA mixture analysis in a forensic context: evaluating the probative value using a likelihood ratio model.

    Science.gov (United States)

    Haned, Hinda; Benschop, Corina C G; Gill, Peter D; Sijen, Titia

    2015-05-01

    The interpretation of mixed DNA profiles obtained from low template DNA samples has proven to be a particularly difficult task in forensic casework. Newly developed likelihood ratio (LR) models that account for PCR-related stochastic effects, such as allelic drop-out, drop-in and stutter, have enabled the analysis of complex cases that would otherwise have been reported as inconclusive. In such samples, there are uncertainties about the number of contributors and about the correct sets of propositions to consider. Using experimental samples, where the genotypes of the donors are known, we evaluated the feasibility and the relevance of interpreting high order mixtures of three, four and five donors. The relative risks of analyzing such mixtures were established by comparing a 'gold standard' LR to the LR that would be obtained in casework. The 'gold standard' LR is the ideal LR: since the genotypes and number of contributors are known, the parameters needed to compute the LR can be determined per contributor. The 'casework LR' was calculated as in standard practice, where unknown donors are assumed and the parameters are estimated from the available data. Both LRs were calculated using the basic standard model, also termed the drop-out/drop-in model, implemented in the LRmix module of the R package Forensim. We show how our results furthered the understanding of the relevance of analyzing high order mixtures in a forensic context. Limitations are highlighted, and it is illustrated how our study serves as a guide to implementing likelihood ratio interpretation of complex DNA profiles in forensic casework.

  20. Application of a generalized likelihood function for parameter inference of a carbon balance model using multiple, joint constraints

    Science.gov (United States)

    Hammerle, Albin; Wohlfahrt, Georg; Schoups, Gerrit

    2014-05-01

    Advances in automated data collection systems have enabled ecologists to collect enormous amounts of varied data. Data assimilation (or data-model synthesis) is one way to make sense of this mass of data. Given a process model designed to learn about ecological processes, these data can be integrated within a statistical framework for data interpretation and extrapolation. Results of such a data assimilation framework clearly depend on the information content of the observed data, on the associated uncertainties (data uncertainties, model structural uncertainties and parameter uncertainties) and on the underlying assumptions. Parameter estimation is usually done by minimizing a simple least squares objective function with respect to the model parameters, presuming Gaussian, independent and homoscedastic errors (the formal approach). Recent contributions to the (ecological) literature, however, have questioned the validity of this approach when confronted with significant errors and uncertainty in the model forcing (inputs) and model structure. Very often residual errors are non-Gaussian, correlated and heteroscedastic. These error sources therefore have to be considered and residual errors have to be described in a statistically correct fashion in order to draw statistically sound conclusions about parameter and model predictive uncertainties. We examined the effects of a generalized likelihood (GL) function on the parameter estimation of a carbon balance model. Compared with the formal approach, the GL function allows for correlation, non-stationarity and non-normality of model residuals. Carbon model parameters have been constrained using three different datasets, each of them modelled by its own GL function. As shown in the literature, the use of different datasets for parameter estimation reduces the uncertainty in model parameters and model predictions and allows for better quantification of, and more insight into, model processes.
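
    As a rough illustration of what such a likelihood can look like, the sketch below evaluates a simplified generalized log-likelihood with a heteroscedastic residual standard deviation and AR(1)-correlated standardized residuals. The functional forms and the parameter names (sigma0, sigma1, phi) are illustrative assumptions, not the exact formulation used in the study, and the first observation is handled by conditioning.

```python
# A minimal sketch of a generalized likelihood: heteroscedastic residual
# standard deviation that grows with the modelled value, and AR(1)
# autocorrelation of the standardised residuals (Gaussian innovations).
import numpy as np

def generalized_log_likelihood(y_obs, y_mod, sigma0, sigma1, phi):
    """Log-likelihood allowing heteroscedastic, AR(1)-correlated residuals."""
    resid = y_obs - y_mod
    sigma = sigma0 + sigma1 * np.abs(y_mod)      # heteroscedastic std. dev. (sigma0 > 0)
    a = resid / sigma                            # standardised residuals
    innov = a[1:] - phi * a[:-1]                 # AR(1) innovations
    innov_var = 1.0 - phi**2                     # stationary innovation variance (|phi| < 1)
    # conditional Gaussian density of each residual given the previous one,
    # dropping the marginal term of the first observation
    ll = -0.5 * np.sum(np.log(2 * np.pi * sigma[1:]**2 * innov_var)
                       + innov**2 / innov_var)
    return ll
```

    Maximizing this quantity over the carbon-model parameters together with (sigma0, sigma1, phi), for example with a general-purpose optimizer or an MCMC sampler, would then yield the GL-based estimates.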

  1. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially the unavoidable comprehensive parameter tuning during the model development stage.
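
    A minimal sketch of the screen-then-optimise idea is given below, assuming a placeholder skill_score function and made-up parameter bounds in place of a real GCM evaluation metric: a one-at-a-time sensitivity screen selects the influential parameters and their best trial values, which then seed a downhill simplex (Nelder-Mead) search.

```python
# A minimal sketch of "determine sensitivity, choose initial values, then run
# downhill simplex". skill_score and the bounds are placeholders, not a GCM.
import numpy as np
from scipy.optimize import minimize

def skill_score(params):
    # Placeholder: scalar mismatch between model output and observations.
    return np.sum((params - np.array([0.3, 1.2, 0.7]))**2)

bounds = [(0.0, 1.0), (0.5, 2.0), (0.1, 1.5)]
default = np.array([(lo + hi) / 2 for lo, hi in bounds])

# Step 1: one-at-a-time sensitivity screening.
sensitivity, best_value = [], default.copy()
for i, (lo, hi) in enumerate(bounds):
    trials = np.linspace(lo, hi, 5)
    scores = []
    for t in trials:
        p = default.copy()
        p[i] = t
        scores.append(skill_score(p))
    sensitivity.append(max(scores) - min(scores))
    best_value[i] = trials[int(np.argmin(scores))]

# Step 2: keep only sensitive parameters; their best trial values seed the search.
sensitive = [i for i, s in enumerate(sensitivity) if s > 0.05 * max(sensitivity)]
x0 = best_value.copy()

# Step 3: downhill simplex (Nelder-Mead) on the reduced problem.
def reduced_score(x_sub):
    p = x0.copy()
    p[sensitive] = x_sub
    return skill_score(p)

result = minimize(reduced_score, x0[sensitive], method='Nelder-Mead')
print(result.x, result.fun)
```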

  2. Weighted Likelihood Estimation Method Based on the Multidimensional Nominal Response Model

    Institute of Scientific and Technical Information of China (English)

    孙珊珊; 陶剑

    2008-01-01

    This Monte Carlo study evaluates the relative accuracy of Warm's (1989) weighted likelihood estimate (WLE) compared with the maximum likelihood estimate (MLE) under the nominal response model. The results indicate that the WLE was more accurate than the MLE.

  3. Maximum likelihood model based on minor allele frequencies and weighted Max-SAT formulation for haplotype assembly.

    Science.gov (United States)

    Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser

    2014-06-07

    Human haplotypes include essential information about SNPs, which in turn provide valuable information for studies such as finding relationships between some diseases and their potential genetic causes, e.g., Genome Wide Association Studies. Because directly determining haplotypes is expensive, and given recent progress in high throughput sequencing, there has been increasing motivation for haplotype assembly, which is the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from the existing maximum likelihood models. Then, we show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the MAF information is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the MAF information is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results.
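
    For readers unfamiliar with the MEC objective that the probabilistic model reduces to, the toy sketch below computes it for a candidate haplotype pair: every fragment is charged the smaller of its mismatch counts against the two haplotypes. The 0/1 alleles, the '-' gap symbol and the example data are assumptions for illustration; the MAF-weighted extension would replace the unit mismatch costs with allele-frequency-dependent weights.

```python
# A minimal sketch of the Minimum Error Correction (MEC) score for a candidate
# haplotype pair; '-' marks positions a fragment does not cover.
def mec_score(h1, h2, fragments):
    total = 0
    for frag in fragments:
        mismatch1 = sum(1 for f, a in zip(frag, h1) if f != '-' and f != a)
        mismatch2 = sum(1 for f, a in zip(frag, h2) if f != '-' and f != a)
        total += min(mismatch1, mismatch2)   # assign fragment to the closer haplotype
    return total

h1, h2 = "01101", "10010"                    # candidate haplotype pair (0/1 alleles)
fragments = ["011--", "-0010", "1-01-"]      # toy aligned fragments
print(mec_score(h1, h2, fragments))          # number of corrections needed
```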

  4. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    CERN Document Server

    McMahon, J; Mahon, John Mc

    1995-01-01

    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...
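
    As a toy illustration of the average class mutual information that drives this kind of clustering, the sketch below computes the metric from class-level bigram counts under a fixed word-to-class map; the corpus and the map are made-up, and the real system additionally searches over class assignments to maximise this quantity.

```python
# A minimal sketch of average class mutual information I(C1;C2) computed from
# class-mapped bigram counts.
import math
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
word2class = {"the": 0, "cat": 1, "dog": 1, "mat": 1, "rug": 1, "sat": 2, "on": 3}

bigrams = Counter((word2class[a], word2class[b]) for a, b in zip(corpus, corpus[1:]))
total = sum(bigrams.values())
left, right = Counter(), Counter()
for (c1, c2), n in bigrams.items():
    left[c1] += n
    right[c2] += n

def average_mutual_information():
    ami = 0.0
    for (c1, c2), n in bigrams.items():
        p12 = n / total
        # p12 * log2( p12 / (p(c1) * p(c2)) )
        ami += p12 * math.log2(p12 * total * total / (left[c1] * right[c2]))
    return ami

print(average_mutual_information())
```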

  5. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.

  6. Towards Automatic Semantic Labelling of 3D City Models

    Science.gov (United States)

    Rook, M.; Biljecki, F.; Diakité, A. A.

    2016-10-01

    The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step in creating an automatic workflow that semantically labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information, as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score for the regions that represent either the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that serve as starting points in a region growing algorithm that creates regions of triangles of the other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85% and 99% in the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties, implying the next steps in the research.

  7. Signature prediction for model-based automatic target recognition

    Science.gov (United States)

    Keydel, Eric R.; Lee, Shung W.

    1996-06-01

    The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state of the art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions of the module design are described.

  8. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, trunca

  9. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm softwares. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. Genes top and bottom ranked by the three approaches were similar; however, notable differences were shown for the best pair of genes selected by each method and for the remaining genes of the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions: The optimal statistical properties of Maximum Likelihood estimation joined to mixed model flexibility allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental

  10. Maximum likelihood cost functions for neural network models of air quality data

    Science.gov (United States)

    Dorling, Stephen R.; Foxall, Robert J.; Mandic, Danilo P.; Cawley, Gavin C.

    The prediction of episodes of poor air quality using artificial neural networks is investigated, concentrating on the selection of the most appropriate cost function used in training. Different cost functions correspond to different distributional assumptions regarding the data; the appropriate choice depends on whether a forecast of absolute pollutant concentration or prediction of exceedence events is of principal importance. The cost functions investigated correspond to logistic regression, homoscedastic Gaussian (i.e. conventional sum-of-squares) regression and heteroscedastic Gaussian regression. Both linear and nonlinear neural network architectures are evaluated. While the results presented relate to a dataset describing the daily time-series of the concentration of surface-level ozone (O3) in urban Berlin, the methods applied are quite general and applicable to a wide range of pollutants and locations. The heteroscedastic Gaussian regression model outperforms the other nonlinear methods investigated; however, there is little improvement resulting from the use of nonlinear rather than linear models. Of greater significance is the flexibility afforded by the nonlinear heteroscedastic Gaussian regression model for a range of potential end-users, who may all have different answers to the question: "What is more important, correctly predicting exceedences or avoiding false alarms?".
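
    The three cost functions named above can be written as negative log-likelihoods; the sketch below shows one plausible form of each. The log-variance output parameterisation for the heteroscedastic case is an assumption made here for illustration, not necessarily the paper's exact choice.

```python
# A minimal sketch of the three training criteria: logistic (exceedence
# classification), homoscedastic Gaussian (sum of squares), and heteroscedastic
# Gaussian, where the network predicts both a mean and an input-dependent variance.
import numpy as np

def logistic_nll(y, p):                      # y in {0,1}, p = predicted P(exceedence)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def homoscedastic_nll(y, mu):                # equivalent (up to constants) to sum-of-squares
    return 0.5 * np.mean((y - mu) ** 2)

def heteroscedastic_nll(y, mu, log_var):     # network outputs mu(x) and log sigma^2(x)
    var = np.exp(log_var)
    return 0.5 * np.mean(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
```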

  11. Strong consistency of maximum quasi-likelihood estimates in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    Yin, Changming; Zhao, Lincheng

    2005-01-01

    In a generalized linear model with $q \times 1$ responses, bounded and fixed $p \times q$ regressors $Z_i$ and a general link function, under the most general assumption on the minimum eigenvalue of $\sum_{i=1}^{n} Z_i Z_i'$, a moment condition on the responses that is as weak as possible, and other mild regularity conditions, we prove that with probability one the quasi-likelihood equation has a solution $\beta_n$ for all large sample sizes $n$, which converges to the true regression parameter $\beta_0$. This result is an essential improvement over the relevant results in the literature.

  12. Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y

    2015-06-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments, the prediction error, is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies.
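
    For concreteness, a minimal Rescorla-Wagner-style sketch of the two abstract signals is given below; the learning rate and outcome sequence are made-up values, and the fitted models in the meta-analysed studies are typically richer (for example temporal-difference learning with per-subject parameters).

```python
# A minimal sketch of the prediction-error / expected-value signals generated
# by a simple reinforcement learning model.
import numpy as np

def rescorla_wagner(outcomes, alpha=0.2, v0=0.0):
    """Return per-trial expected values and prediction errors."""
    values, errors = [], []
    v = v0
    for r in outcomes:
        delta = r - v          # prediction error: actual minus expected reward
        values.append(v)
        errors.append(delta)
        v = v + alpha * delta  # expected-value update
    return np.array(values), np.array(errors)

outcomes = np.array([1, 1, 0, 1, 0, 0, 1, 1])
v, pe = rescorla_wagner(outcomes)
print(pe)  # the regressor typically correlated with striatal BOLD signals
```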

  13. Monte Carlo Likelihood Estimation of Mixed-Effects State Space Models with Application to HIV Dynamics

    Institute of Scientific and Technical Information of China (English)

    ZHOU Jie; TANG Aiping; FENG Hailin

    2016-01-01

    The statistical inference for generalized mixed-effects state space models (MESSM) is investigated when the random effects are unknown. Two filtering algorithms are designed, both of which are based on the mixture Kalman filter. These algorithms are particularly useful when the longitudinal measurements are sparse. The authors also propose a globally convergent algorithm for parameter estimation of MESSM, which can be used to locate initial parameter values for local but more efficient algorithms. Simulation examples are carried out which validate the efficacy of the proposed approaches. A data set from a clinical trial is investigated and a smaller mean square error is achieved compared to existing results in the literature.

  14. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  15. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources.

    Science.gov (United States)

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J

    2016-03-01

    Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.

  16. Performance comparison of various maximum likelihood nonlinear mixed-effects estimation methods for dose-response models.

    Science.gov (United States)

    Plan, Elodie L; Maloney, Alan; Mentré, France; Karlsson, Mats O; Bertrand, Julie

    2012-09-01

    Estimation methods for nonlinear mixed-effects modelling have considerably improved over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid E(max) model, with varying sigmoidicity and residual error models. One hundred simulated datasets for each scenario were generated. One hundred individuals with observations at four doses constituted the rich design and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second, using altered values. Results were examined through relative root mean squared error (RRMSE) of the estimates. With true initial conditions, full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving material to modellers to identify suitable approaches based on an accuracy-versus-runtime trade-off.

  17. Terrain Classification on Venus from Maximum-Likelihood Inversion of Parameterized Models of Topography, Gravity, and their Relation

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.

    2013-12-01

    Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features, such as tesserae and coronae, lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matern model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, and tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the joint spectral variance of
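
    For orientation, the standard three-parameter isotropic Matern covariance (variance, smoothness, range) can be evaluated as in the sketch below; the exact spectral parameterization and units used in the study are not reproduced here, and the numbers are purely illustrative.

```python
# A minimal sketch of the three-parameter isotropic Matern covariance:
# variance sigma2, smoothness nu, and range rho.
import numpy as np
from scipy.special import kv, gamma

def matern_covariance(r, sigma2, nu, rho):
    """Matern covariance at lag distance r (r may be an array)."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    scaled = np.sqrt(2.0 * nu) * r / rho
    cov = np.full_like(r, sigma2)                      # C(0) = sigma2
    nz = scaled > 0
    cov[nz] = (sigma2 * 2.0**(1.0 - nu) / gamma(nu)
               * scaled[nz]**nu * kv(nu, scaled[nz]))  # modified Bessel of 2nd kind
    return cov

lags = np.linspace(0.0, 500.0, 6)                      # illustrative lag distances
print(matern_covariance(lags, sigma2=1.0, nu=1.5, rho=120.0))
```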

  18. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  19. Some asymptotic inference in quasi-likelihood nonlinear models: a geometric approach

    Institute of Scientific and Technical Information of China (English)

    韦博成; 唐年胜; 王学仁

    2000-01-01

    A modified Bates and Watts geometric framework is proposed for quasi-likelihood nonlinear models in Euclidean inner product space. Based on the modified geometric framework, some asymptotic inference in terms of curvatures for quasi-likelihood nonlinear models is studied. Several previous results for nonlinear regression models, exponential family nonlinear models, etc. are extended to quasi-likelihood nonlinear models.

  20. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  1. Maximum Likelihood Fusion Model

    Science.gov (United States)

    2014-08-09

    by the DLR Institute of Robotics and Mechatronics building (dataset courtesy of the University of Bremen). In contrast to the Victoria Park dataset, a camera sensor is

  2. Research in Adaptronic Automatic Control System and Biosensor System Modelling

    Directory of Open Access Journals (Sweden)

    Skopis Vladimir

    2015-07-01

    This paper describes the research on adaptronic systems made by the author and proposes the use of biosensors that can later be inserted into adaptronic systems. Adaptronic systems are based, on the one hand, on the adaptronic approach, in which the system is designed not to always meet the worst-case condition but to change its structure according to the external conditions. On the other hand, they are an extension of common automatic control and adaptive systems. In the introduction, the adaptronic approach and the term biosensor are first explained. Adaptive systems, upon which adaptronic ones are based, are also mentioned. Then the construction of a biosensor is described, and some information is given about the classification of biosensors and their main groups. It is also suggested to use lichen indicators in industry to monitor the concentration of chemical substances in the air. After that, mathematical models and computer experiments for adaptronic system and biosensor analysis are given.

  3. Automatic prediction of facial trait judgments: appearance vs. structural models.

    Directory of Open Access Journals (Sweden)

    Mario Rojas

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  4. Impact of Animated Spokes-Characters in Print Direct-to-Consumer Prescription Drug Advertising: An Elaboration Likelihood Model Approach.

    Science.gov (United States)

    Bhutada, Nilesh S; Rollins, Brent L; Perri, Matthew

    2017-04-01

    A randomized, posttest-only online survey study of adult U.S. consumers determined the advertising effectiveness (attitude toward ad, brand, company, spokes-characters, attention paid to the ad, drug inquiry intention, and perceived product risk) of animated spokes-characters in print direct-to-consumer (DTC) advertising of prescription drugs and the moderating effects of consumers' involvement. Consumers' responses (n = 490) were recorded for animated versus nonanimated (human) spokes-characters in a fictitious DTC ad. Guided by the elaboration likelihood model, data were analyzed using a 2 (spokes-character type: animated/human) × 2 (involvement: high/low) factorial multivariate analysis of covariance (MANCOVA). The MANCOVA indicated significant main effects of spokes-character type and involvement on the dependent variables after controlling for covariate effects. Of the several ad effectiveness variables, consumers only differed on their attitude toward the spokes-characters between the two spokes-character types (specifically, more favorable attitudes toward the human spokes-character). Apart from perceived product risk, high-involvement consumers reacted more favorably to the remaining ad effectiveness variables compared to the low-involvement consumers, and exhibited significantly stronger drug inquiry intentions during their next doctor visit. Further, the moderating effect of consumers' involvement was not observed (nonsignificant interaction effect between spokes-character type and involvement).

  5. Empirical likelihood based inference for second-order diffusion models

    Institute of Scientific and Technical Information of China (English)

    王允艳; 张立新; 王汉超

    2012-01-01

    In this paper, we develop an empirical likelihood method to construct empirical likelihood estimators for the nonparametric drift and diffusion functions in the second-order diffusion model, and the consistency and asymptotic normality of the empirical likelihood estimators are obtained. Moreover, non-symmetric confidence intervals for the drift and diffusion functions based on empirical likelihood methods are obtained, and the adjusted empirical log-likelihood ratio is proved to be asymptotically standard chi-squared under some mild conditions.

  6. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  7. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    Science.gov (United States)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models have come to be in high demand by many public and private organisations, and the steadily growing requirements in both quality and quantity are increasing that demand. The quality evaluation of these 3D models is a relevant issue both from the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity from the evaluation point of view is also assessed.
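
    Two of the measures named above, completeness and correctness, are commonly computed from a matching between reconstructed and reference elements as in the toy sketch below; the remaining measures of the paper are not reproduced here and the counts are made-up.

```python
# A minimal sketch of completeness and correctness computed from matched
# (true positive), spurious (false positive) and missed (false negative) elements.
def completeness(tp, fn):
    """Fraction of reference elements that were reconstructed."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Fraction of reconstructed elements that match the reference."""
    return tp / (tp + fp)

tp, fp, fn = 42, 5, 8
print(f"completeness = {completeness(tp, fn):.2f}, correctness = {correctness(tp, fp):.2f}")
```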

  8. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked-out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric

  9. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...
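
    To see what a "chaotic likelihood function" looks like in the simplest setting, the sketch below evaluates the Gaussian log-likelihood of candidate initial conditions of the logistic map against a short noisy series; the noise level, map parameter and grid are illustrative choices and are not taken from the paper.

```python
# A minimal sketch of a chaotic likelihood function: the log-likelihood of the
# logistic-map initial condition under observational noise, on a fine grid.
import numpy as np

def logistic_orbit(x0, r=4.0, n=20):
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

rng = np.random.default_rng(0)
true_x0, sigma = 0.3, 0.01
obs = logistic_orbit(true_x0) + rng.normal(0.0, sigma, 20)   # noisy observations

grid = np.linspace(0.29, 0.31, 2001)                          # candidate initial conditions
loglik = np.array([-0.5 * np.sum((obs - logistic_orbit(x0))**2) / sigma**2
                   for x0 in grid])

# Even on this narrow interval the log-likelihood is extremely jagged, which is
# what makes naive maximum likelihood or Bayesian state estimation so hard.
print(grid[np.argmax(loglik)], loglik.max())
```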

  10. Path Tracking Control of Automatic Parking Cloud Model considering the Influence of Time Delay

    Directory of Open Access Journals (Sweden)

    Yiding Hua

    2017-01-01

    This paper establishes a kinematic model of the automatic parking system and analyzes the kinematic constraints of the vehicle. Furthermore, it addresses the problem that traditional automatic parking system models fail to take time delay into account. Firstly, based on simulation, the influence of time delay on the dynamic trajectory of a vehicle in the automatic parking system is analyzed for different values of the transverse distance D_lateral to the target parking space. Secondly, on the basis of the cloud model, this paper utilizes intelligent path tracking control that is closer to human behavior to further study the Cloud Generator-based parking path tracking control method and to construct a vehicle path tracking control model. Moreover, the tracking and steering control effects of the model are verified through simulation analysis. Finally, the effectiveness and timeliness of the automatic parking controller with respect to path tracking are tested through a real vehicle experiment.
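
    A minimal sketch of the kind of kinematic model involved, with the time delay represented as a buffer on the steering command, is given below; the wheelbase, speed, delay and steering profile are made-up values, and the actual model and controller of the paper are not reproduced.

```python
# A minimal sketch of a kinematic bicycle model with a pure time delay on the
# steering command, implemented as a command buffer.
import numpy as np

L = 2.6          # wheelbase [m]
v = -0.8         # constant reversing speed [m/s]
dt = 0.05        # integration step [s]
tau = 0.3        # steering actuation delay [s]
steps = 200

delay_buf = [0.0] * int(round(tau / dt))   # commands issued but not yet applied
x, y, theta = 0.0, 0.0, 0.0
trajectory = []

for k in range(steps):
    cmd = 0.4 * np.sin(0.05 * k)           # toy steering command [rad]
    delay_buf.append(cmd)
    delta = delay_buf.pop(0)               # command actually reaching the wheels

    # kinematic bicycle model (rear-axle reference point)
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += v / L * np.tan(delta) * dt
    trajectory.append((x, y, theta))

print(trajectory[-1])
```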

  11. Automatic removal of eye movement artifacts from the EEG using ICA and the dipole model

    Institute of Scientific and Technical Information of China (English)

    Weidong Zhou; Jean Gotman

    2009-01-01

    12 patients were analyzed. The experimental results indicate that ICA with the dipole model is very efficient at automatically subtracting the eye movement artifacts, while retaining the EEG slow waves and making their interpretation easier.

  12. Technical Note: Automatic river network generation for a physically-based river catchment model

    OpenAIRE

    2010-01-01

    SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river cha...

  14. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed...

  15. Assessing Automatic Aid as an Emergency Response Model

    Science.gov (United States)

    2013-12-01

    often include the idea of mutual and automatic aid. Clovis noted in his paper "Thinking about National Preparedness" that resources, being a...

  16. Modeling and Prototyping of Automatic Clutch System for Light Vehicles

    Science.gov (United States)

    Murali, S.; Jothi Prakash, V. M.; Vishal, S.

    2017-03-01

    Nowadays, recycling or regenerating waste into something useful is appreciated all around the globe, as it reduces greenhouse gas emissions that contribute to global climate change. This study deals with the provision of an automatic clutch mechanism in vehicles to facilitate smooth gear changes. It proposes to use the exhaust gases, which are normally expelled as waste from the turbocharger, to actuate the clutch mechanism. At present, clutches in four-wheelers are operated automatically by using an air compressor. In this study, a conceptual design is proposed in which the clutch is operated by the exhaust gas from the turbocharger, eliminating the air compressor used in the existing system. With this system, the use of an air compressor is removed and the driver does not need to operate the clutch manually. This work involved the development, analysis and validation of the conceptual design through simulation software. The developed conceptual design of the automatic pneumatic clutch system was then tested with a prototype.

  17. Empirical likelihood estimation for ARCH-M models

    Institute of Scientific and Technical Information of China (English)

    孙岩; 李元

    2012-01-01

    This paper deals with the measurement of the market's overall risk aversion based on the ARCH-M model. Test statistics are constructed by the empirical likelihood method. Under mild conditions, asymptotic $\chi^2$ distributions of the test statistics are obtained, based on which confidence regions for risk aversion are given. Simulations show that the empirical likelihood method behaves well.

  18. Modified Maximum Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model

    Science.gov (United States)

    2015-08-01

    ...quasi-completely separated, the traditional maximum likelihood estimation (MLE) method generates infinite estimates. The bias-reduction (BR) method..., which is a variant of the bias-correction method, removes the first-order bias term by applying a modified score function, and it always produces
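
    A minimal sketch of a bias-reduced (Firth-type) logistic fit via a modified score is shown below; it is offered as an illustration of the general technique, not as the report's implementation, and the completely separated toy data are made-up.

```python
# A minimal sketch of bias-reduced logistic regression: the score is adjusted
# with the hat-matrix leverages so that finite estimates exist even under
# complete or quasi-complete separation.
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))
        W = pi * (1.0 - pi)
        XtWX = X.T @ (X * W[:, None])             # Fisher information
        XtWX_inv = np.linalg.inv(XtWX)
        # leverages h_i = diag of W^{1/2} X (X'WX)^{-1} X' W^{1/2}
        h = np.einsum('ij,jk,ik->i', X * W[:, None], XtWX_inv, X)
        # modified (Firth-adjusted) score
        score = X.T @ (y - pi + h * (0.5 - pi))
        step = XtWX_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated toy data: x < 0 always gives y = 0, x > 0 always y = 1.
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(firth_logistic(X, y))   # finite estimates, unlike the unpenalized MLE
```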

  19. Low-rank and sparse decomposition based shape model and probabilistic atlas for automatic pathological organ segmentation.

    Science.gov (United States)

    Shi, Changfa; Cheng, Yuanzhi; Wang, Jinke; Wang, Yadong; Mori, Kensaku; Tamura, Shinichi

    2017-02-22

    One major limiting factor that prevents the accurate delineation of human organs has been the presence of severe pathology and pathology affecting organ borders. Overcoming these limitations is exactly what this study is concerned with. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework. It leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle non-Gaussian gross errors caused by weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to the finer local details in a unified framework. For the shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 database for the liver segmentation competition, and ranks 3rd among all the published state-of-the-art automatic methods. Our method is also evaluated on pathological organs (pathological liver and right lung) from 95 clinical CT scans and its results are compared with three closely related methods. The applicability of the proposed method to segmentation of various pathological organs (including some highly severe cases) is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate organ boundaries that reach a level of accuracy comparable with those of human raters.

  20. Automatic gate design model from wood & tire for farmers

    Directory of Open Access Journals (Sweden)

    Indrawan Ivan

    2017-01-01

    Indonesia is one of the potential paddy farming areas in Southeast Asia, and North Sumatra is one of the provinces that supply it. Yet Indonesia still imports rice from abroad, even though the government now intends to meet its own needs. Almost 10% of the irrigation areas in Indonesia are connected to sea currents, which means they must have a system to manage the circulation of fresh water and to block seawater from entering the irrigation area through the irrigation channel. Many systems use gates to control the water management, and most of them use automatic sluice gates because the gates are usually positioned far from the village, which makes manual operation difficult. Unfortunately, not all farmers can use this kind of gate due to its accessibility and cost. This research was done to design automatic gates which are easy to build, user friendly, low cost and dependable. In the future, poor farmers, or farmers who do not have connections to the government, can make this gate by themselves. The research was conducted in a laboratory, using a flume, pumps, a reservoir, and a gate prototype.

  1. Diagnostic Measures for Nonlinear Regression Models Based on the Empirical Likelihood Method

    Institute of Scientific and Technical Information of China (English)

    丁先文; 徐亮; 林金官

    2012-01-01

    The empirical likelihood method has been extensively applied to linear regression and generalized linear regression models. In this paper, diagnostic measures for nonlinear regression models are studied based on the empirical likelihood method. First, the maximum empirical likelihood estimates of the parameters are obtained. Then, three different measures of influence curvature are studied. Last, a real data analysis is given to illustrate the validity of the statistical diagnostic measures.

  2. Empirical Likelihood for Linear Models under m-Dependent Errors

    Institute of Scientific and Technical Information of China (English)

    秦永松; 姜波; 黎玉芳

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  3. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
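
    As a rough illustration of the graph description mentioned above, the sketch below builds a small topology graph from hypothetical discovered links; the device names, attributes and data format are invented for the example and do not reflect the actual Octotron model or the Lomonosov systems.

```python
# A minimal sketch of representing a discovered interconnect topology as a graph.
import networkx as nx

# (device, linked device) pairs as a hypothetical discovery pass might report them
discovered_links = [
    ("switch-01", "node-001"),
    ("switch-01", "node-002"),
    ("switch-02", "node-003"),
    ("core-switch", "switch-01"),
    ("core-switch", "switch-02"),
]

topology = nx.Graph()
for dev_a, dev_b in discovered_links:
    kind_a = "switch" if "switch" in dev_a else "node"
    kind_b = "switch" if "switch" in dev_b else "node"
    topology.add_node(dev_a, kind=kind_a)
    topology.add_node(dev_b, kind=kind_b)
    topology.add_edge(dev_a, dev_b, medium="ethernet")

print(topology.number_of_nodes(), topology.number_of_edges())
print([n for n, d in topology.nodes(data=True) if d["kind"] == "switch"])
```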

  4. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, which is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  5. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity traces mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, that usually characterize the trace mapping on images, are eliminated. Also trace sampling procedures based on circular windows and circular scanlines have been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained applying the automatic procedure on the DSM of a rock face are compared to those obtained performing a manual sampling on the orthophotograph of the same rock face.

  6. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.

  7. Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction

    NARCIS (Netherlands)

    Calamé, Jens R.; Ioustinova, Natalia; Pol, van de Jaco; Romijn, J.M.T.; Smith, G.; Pol, van de J.C.

    2007-01-01

    Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to generate parameterized test cases using data abstraction.

  8. Technical Note: Automatic river network generation for a physically-based river catchment model

    Directory of Open Access Journals (Sweden)

    S. J. Birkinshaw

    2010-05-01

    Full Text Available SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river channel network in SHETRAN is described and its use in an example catchment demonstrated.

  9. Automatic methods for the refinement of system models from the specification to the implementation

    CERN Document Server

    Seiter, Julia; Drechsler, Rolf

    2017-01-01

    This book provides a comprehensive overview of automatic model refinement, which helps readers close the gap between an initial textual specification and its desired implementation. The authors enable readers to follow two “directions” of refinement: vertical refinement, which adds detail and precision to a single description of a given model, and horizontal refinement, which considers several views on one level of abstraction, refining the system specification by dedicated descriptions of structure or behavior. The discussion includes several methods which support designers of electronic systems in this refinement process, including verification methods to check automatically whether a refinement has been conducted as intended.

  10. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow make it possible to increase the efficiency of the production planning process in dynamic manufacturing networks. The use of a semi-automatic model generation concept based on a parametric approach supporting production planning processes is presented. The presented approach allows simulation and visualization to be used for the verification of production plans and of alternative topologies of manufacturing network configurations, together with the automatic generation of a series of production flow scenarios. Computational examples using the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  11. Technical Note: Automatic river network generation for a physically-based river catchment model

    Directory of Open Access Journals (Sweden)

    S. J. Birkinshaw

    2010-09-01

    Full Text Available SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river channel network in SHETRAN is described and its use in an example catchment demonstrated.

  12. Technical Note: Automatic river network generation for a physically-based river catchment model

    Science.gov (United States)

    Birkinshaw, S. J.

    2010-09-01

    SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river channel network in SHETRAN is described and its use in an example catchment demonstrated.

  13. A cultural evolutionary programming approach to automatic analytical modeling of electrochemical phenomena through impedance spectroscopy

    CERN Document Server

    Arpaia, Pasquale

    2009-01-01

    An approach to automatic analytical modeling of electrochemical impedance spectroscopy data by evolutionary programming based on cultural algorithms is proposed. A solution-search strategy based on a cultural mechanism is exploited for defining the equivalent-circuit model automatically: information on search advance is transmitted to all potential solutions, rather than only to a small inheriting subset, such as in a traditional genetic approach. Moreover, with respect to the state of the art, also specific information related to constraints on the application physics knowledge is transferred. Experimental results of the proposed approach implementation in impedance spectroscopy for general-purpose electrochemical circuit analysis and for corrosion monitoring and diagnosing are presented.

  14. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

    Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc. both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS formalism separates the modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD context, through the declarative QVT Relations language, in order to perform simulations using tools, such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.

  15. An automatic and effective parameter optimization method for model tuning

    Directory of Open Access Journals (Sweden)

    T. Zhang

    2015-11-01

    simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  16. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains that can run in parallel on graphics processing units (GPUs. The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
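
    The record above fits spiking models to recorded spike trains with Brian on GPUs. The sketch below is a much-simplified stand-in for that idea: it simulates a leaky integrate-and-fire neuron in plain NumPy and grid-searches two parameters against target spike times using a crude coincidence count. All data, parameter ranges and function names are hypothetical.

```python
# Simplified sketch of spiking-model fitting: simulate a leaky integrate-and-fire
# neuron and grid-search two parameters against target spike times.
import numpy as np

dt, T = 1e-4, 1.0                        # time step [s], duration [s]
t = np.arange(0.0, T, dt)
current = 2.0e-9 * (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t))   # injected current [A]

def lif_spikes(tau, threshold, R=1e8, v_rest=0.0):
    """Spike times of a leaky integrate-and-fire neuron driven by `current`."""
    v, spikes = v_rest, []
    for i, I in enumerate(current):
        v += dt / tau * (v_rest - v + R * I)
        if v >= threshold:
            spikes.append(t[i])
            v = v_rest                   # reset after a spike
    return np.array(spikes)

def coincidences(a, b, window=2e-3):
    # Crude fitness: count spikes in `a` within `window` of a spike in `b`.
    return sum(np.any(np.abs(b - s) <= window) for s in a)

target = lif_spikes(tau=20e-3, threshold=0.15)   # pretend these were recorded
best = max(((tau, th) for tau in np.linspace(5e-3, 50e-3, 10)
                       for th in np.linspace(0.05, 0.3, 10)),
           key=lambda p: coincidences(lif_spikes(*p), target))
print("best (tau, threshold):", best)
```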

  17. AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate a mesh for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate the mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.
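
    A minimal sketch of step (2) alone, assuming a hypothetical point cloud in place of nodes created by ball packing (the node creation, boundary recovery and mesh improvement steps are not shown); scipy's Delaunay routine is used for the tetrahedralization.

```python
# Sketch of step (2) only: connect 3-D nodes by Delaunay triangulation.
# The points below are a hypothetical stand-in for nodes created by ball packing.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
nodes = rng.random((200, 3))             # 200 nodes inside a unit cube

mesh = Delaunay(nodes)                   # tetrahedralization of the node set
print("tetrahedra:", len(mesh.simplices))
print("first tetrahedron (node indices):", mesh.simplices[0])
```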

  18. Equalized near maximum likelihood detector

    OpenAIRE

    2012-01-01

    This paper presents a new detector that is used to mitigate intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.

  19. Active Shapes for Automatic 3D Modeling of Buildings

    NARCIS (Netherlands)

    Sirmacek, B.; Lindenbergh, R.C.

    2015-01-01

    Recent technological developments help us to acquire high quality 3D measurements of our urban environment. However, these measurements, which come as point clouds or Digital Surface Models (DSM), do not directly give 3D geometrical models of buildings. In addition to that, they are not suitable for

  20. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    Science.gov (United States)

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  1. Towards automatic model based controller design for reconfigurable plants

    DEFF Research Database (Denmark)

    Michelsen, Axel Gottlieb; Stoustrup, Jakob; Izadi-Zamanabadi, Roozbeh

    2008-01-01

    This paper introduces model-based Plug and Play Process Control, a novel concept for process control, which allows a model-based control system to be reconfigured when a sensor or an actuator is plugged into a controlled process. The work reported in this paper focuses on composing a monolithic m...

  2. Showing Automatically Generated Students' Conceptual Models to Students and Teachers

    Science.gov (United States)

    Perez-Marin, Diana; Pascual-Nieto, Ismael

    2010-01-01

    A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…

  3. SEMIOTIC MODELS AND AUTOMATIZATION OF PEDAGOGICAL TESTS DESIGN

    Directory of Open Access Journals (Sweden)

    Gennady N. Zverev

    2013-01-01

    Full Text Available The paper deals with the problems of constructing objective models of an educational course, of learning processes, and of the control of learning results. We consider the possibility of automated test generation using formalized concepts of testology and semiotic and mathematical models of pedagogical processes.

  4. Inference of Gene Flow in the Process of Speciation: An Efficient Maximum-Likelihood Method for the Isolation-with-Initial-Migration Model

    Science.gov (United States)

    Costa, Rui J.; Wilkinson-Herbots, Hilde

    2017-01-01

    The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, it has been reported that the parameter estimates obtained by fitting the IM model are very sensitive to the model’s assumptions—including the assumption of constant gene flow until the present. This article is concerned with the isolation-with-initial-migration (IIM) model, which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used, by means of likelihood-ratio tests, to distinguish between alternative models representing the following divergence scenarios: (a) divergence with potentially asymmetric gene flow until the present, (b) divergence with potentially asymmetric gene flow until some point in the past and in isolation since then, and (c) divergence in complete isolation. We illustrate the procedure on pairs of Drosophila sequences from ∼30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this article. PMID:28193727
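
    The record above chooses between nested divergence scenarios with likelihood-ratio tests. The snippet below shows only the generic mechanics of such a test (twice the log-likelihood difference compared with a chi-square reference); the log-likelihood values and degrees of freedom are hypothetical placeholders, not outputs of the IIM software, and boundary effects that can modify the reference distribution are ignored.

```python
# Generic likelihood-ratio test between two nested models, as used to compare
# divergence scenarios. The numbers below are hypothetical placeholders.
from scipy.stats import chi2

loglik_full = -10234.7        # e.g. gene flow until some time in the past
loglik_restricted = -10251.2  # e.g. divergence in complete isolation
extra_params = 3              # parameters dropped in the restricted model

lr_stat = 2.0 * (loglik_full - loglik_restricted)
p_value = chi2.sf(lr_stat, df=extra_params)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3g}")
```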

  5. Automatically Creating Design Models from 3D Anthropometry Data

    CERN Document Server

    Wuhrer, Stefanie; Bose, Prosenjit

    2011-01-01

    When designing a product that needs to fit the human shape, designers often use a small set of 3D models, called design models, either in physical or digital form, as representative shapes to cover the shape variabilities of the population for which the products are designed. Until recently, the process of creating these models has been an art involving manual interaction and empirical guesswork. The availability of the 3D anthropometric databases provides an opportunity to create design models optimally. In this paper, we propose a novel way to use 3D anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear. We generate the representative shapes by solving a covering problem in a parameter space. Well-known techniques in computational geometry are used to solve this problem. We demonstrate the method using examples in designing glasses and helmets.

  6. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system and takes the VXIbus-based test software platform developed by the CAT lab of UESTC as an example. A method to model this system based on Petri nets is then proposed. Through this method, we can analyze the test task scheduling to prevent deadlocks or resource conflicts. Finally, this paper analyzes the feasibility of this method.

  7. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    Science.gov (United States)

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  8. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...

  9. Mathematical modeling and analytical solution for stretching force of automatic feed mechanism

    Institute of Scientific and Technical Information of China (English)

    魏志芳; 陈国光

    2008-01-01

    The load of an automatic feed mechanism is composed of the stretching force of the feed belt at the entrance to the lower flexible guidance and the friction force between the feed belt and the flexible guidance. A mathematical model for computing the load was presented. An optimization problem was formulated to determine the attitude of the flexible guidance, based on the principle that the potential energy stored in the system is a minimum at equilibrium. The friction force was then obtained from the attitude of the guide leaves, the moving velocity of the feed belt and the friction factor. Consequently, the load of the automatic feed mechanism can be calculated. Finally, an example was given in which the load was computed for horizontal and elevating firing angles of 45° and 30°, respectively. The computed result can serve as a criterion for determining the design parameters of the automatic feed mechanism.

  10. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

    Full Text Available Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has therefore provided useful tools for the analysis of the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depends on the selection of likelihood measures. Consideration of the physical conditions in the study reaches could therefore contribute to an appropriate selection of informal likelihood measures. The river discharge assessed using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
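
    As a compact illustration of the GLUE idea used in the record above (not of the 1D hydraulic model or the Landsat processing), the sketch below samples a parameter, scores each sample with an informal likelihood measure against a hypothetical observation, keeps the "behavioral" samples, and forms likelihood-weighted prediction bounds. The toy_model function and obs value are hypothetical stand-ins.

```python
# Minimal GLUE-style sketch: sample parameters, score them with an informal
# likelihood measure, keep behavioral samples, and weight predictions by score.
import numpy as np

rng = np.random.default_rng(42)
obs = 3200.0                                   # observed flood metric (hypothetical)

def toy_model(roughness):
    """Stand-in for the hydraulic model: maps a roughness value to a prediction."""
    return 1000.0 / roughness + rng.normal(0.0, 50.0)

samples = rng.uniform(0.25, 0.45, size=5000)   # Manning-type roughness values
preds = np.array([toy_model(n) for n in samples])

# Informal likelihood: inverse squared error; keep only the top 10% as behavioral.
scores = 1.0 / (1.0 + (preds - obs) ** 2)
behavioral = scores > np.quantile(scores, 0.90)
weights = scores[behavioral] / scores[behavioral].sum()

order = np.argsort(preds[behavioral])
cdf = np.cumsum(weights[order])
lo, hi = np.interp([0.05, 0.95], cdf, preds[behavioral][order])
print(f"5-95% GLUE prediction bounds: {lo:.0f} - {hi:.0f}")
```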

  11. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    Science.gov (United States)

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.

  12. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Full Text Available Feature-based NC machining, which requires high quality of 3D CAD model, is widely used in machining aircraft structural part. However, there has been little research on how to automatically detect the CAD model errors. As a result, the user has to manually check the errors with great effort before NC programming. This paper proposes an automatic CAD model errors detection approach for aircraft structural part. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on the heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  13. Semi-Automatic Modelling of Building Façades with Shape Grammars Using Historic Building Information Modelling

    Science.gov (United States)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  14. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission and relapse. The estimated model fits the data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery.
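
    The estimator in the record above links annually aggregated observations to a monthly cycle length. A minimal sketch of that link, assuming a hypothetical two-state (remission/relapse) monthly transition matrix: the implied yearly transition matrix is simply the 12th matrix power.

```python
# Sketch of the link between a monthly two-state (remission/relapse) Markov model
# and annually aggregated observations: the yearly transition matrix is the 12th
# power of the monthly one. The monthly probabilities below are hypothetical.
import numpy as np

monthly = np.array([[0.95, 0.05],    # remission -> remission / relapse
                    [0.40, 0.60]])   # relapse   -> remission / relapse

yearly = np.linalg.matrix_power(monthly, 12)
print("Monthly transition matrix:\n", monthly)
print("Implied yearly transition matrix:\n", yearly.round(3))

# Long-run (stationary) share of time spent in each state, as a face-validity check.
eigvals, eigvecs = np.linalg.eig(monthly.T)
stat = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stat /= stat.sum()
print("Stationary distribution (remission, relapse):", stat.round(3))
```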

  15. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  16. Automaticity and control in prospective memory: a computational model.

    Directory of Open Access Journals (Sweden)

    Sam J Gilbert

    Full Text Available Prospective memory (PM refers to our ability to realize delayed intentions. In event-based PM paradigms, participants must act on an intention when they detect the occurrence of a pre-established cue. Some theorists propose that in such paradigms PM responding can only occur when participants deliberately initiate processes for monitoring their environment for appropriate cues. Others propose that perceptual processing of PM cues can directly trigger PM responding in the absence of strategic monitoring, at least under some circumstances. In order to address this debate, we present a computational model implementing the latter account, using a parallel distributed processing (interactive activation framework. In this model PM responses can be triggered directly as a result of spreading activation from units representing perceptual inputs. PM responding can also be promoted by top-down monitoring for PM targets. The model fits a wide variety of empirical findings from PM paradigms, including the effect of maintaining PM intentions on ongoing response time and the intention superiority effect. The model also makes novel predictions concerning the effect of stimulus degradation on PM performance, the shape of response time distributions on ongoing and prospective memory trials, and the effects of instructing participants to make PM responses instead of ongoing responses or alongside them. These predictions were confirmed in two empirical experiments. We therefore suggest that PM should be considered to result from the interplay between bottom-up triggering of PM responses by perceptual input, and top-down monitoring for appropriate cues. We also show how the model can be extended to simulate encoding new intentions and subsequently deactivating them, and consider links between the model's performance and results from neuroimaging.

  17. On Automatic Modeling and Use of Domain-specific Ontologies

    DEFF Research Database (Denmark)

    Andreasen, Troels; Knappe, Rasmus; Bulskov, Henrik

    2005-01-01

    In this paper, we firstly introduce an approach to the modeling of a domain-specific ontology for use in connection with a given document collection. Secondly, we present a methodology for deriving conceptual similarity from the domain-specific ontology. Adopted for ontology representation is a s...

  18. Information Model for Connection Management in Automatic Switched Optical Network

    Institute of Scientific and Technical Information of China (English)

    Xu Yunbin(徐云斌); Song Hongsheng; Gui Xuan; Zhang Jie; Gu Wanyi

    2004-01-01

    The three types of connections (Permanent Connection, Soft Permanent Connection and Switched Connection) provided by ASON can meet the requirements of different network services. Management and maintenance of these three connections are the most important aspects of ASON management. The information models proposed in this paper are intended for ASON connection management. First, a new information model is proposed to meet the requirements of the control plane introduced by ASON. In this model, a new class ControlNE is given, and the relationship between the ControlNE and the transport NE (network element) is also defined. This paper then proposes information models for the three types of connections for the first time, and analyzes the relationship between the three kinds of connections and the basic network transport entities. Finally, the paper defines some CORBA interfaces for the management of the three connections. In these interfaces, some operations, such as creating or releasing a connection, are defined, and other operations can manage the performance of the three kinds of connections, which is necessary for a distributed management system.

  19. Automatic Relevance Determination for multi-way models

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    2009-01-01

    of components of data within the Tucker and CP structure. For the Tucker and CP model the approach performs better than heuristics such as the Bayesian Information Criterion, Akaikes Information Criterion, DIFFIT and the numerical convex hull (NumConvHull) while operating only at the cost of estimating...

  20. Automatic age and gender classification using supervised appearance model

    Science.gov (United States)

    Bukar, Ali Maina; Ugail, Hassan; Connah, David

    2016-11-01

    Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
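
    The core substitution in the record above is replacing PCA with partial least-squares regression so that the projection accounts for the labels. A minimal sketch of that substitution on hypothetical feature vectors, using scikit-learn's PLSRegression; the face-specific AAM pipeline is not reproduced, and the data are random stand-ins for shape/texture features.

```python
# Sketch: project features with partial least squares (supervised) instead of
# PCA (unsupervised) before classification. Features and labels are random
# stand-ins for AAM shape/texture vectors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 60))                 # stand-in appearance features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(float)

pls = PLSRegression(n_components=5).fit(X, y)
X_pls = pls.transform(X)                       # label-aware components
X_pca = PCA(n_components=5).fit_transform(X)   # unsupervised components

print("PLS component matrix:", X_pls.shape)
print("PCA component matrix:", X_pca.shape)
```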

  1. The Sherpa Maximum Likelihood Estimator

    Science.gov (United States)

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.

    2011-07-01

    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
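
    The record above compares a background-only fit with a background-plus-source fit to judge whether a candidate detection is real. The snippet below sketches that comparison for a single region with Poisson counts, a flat background and a fixed-shape source term; the counts, region geometry and "PSF" are hypothetical simplifications of what the Sherpa-based tool does.

```python
# Sketch of the background-only vs background-plus-source comparison, for one
# candidate region with Poisson counts. Counts and the source shape are
# hypothetical; the real tool fits PSF-convolved models with Sherpa.
import numpy as np
from scipy.optimize import minimize_scalar, minimize
from scipy.stats import poisson

counts = np.array([3, 4, 2, 9, 12, 8, 3, 2, 4])          # pixels around a candidate
shape = np.array([0.0, 0.0, 0.0, 0.5, 1.0, 0.5, 0.0, 0.0, 0.0])  # crude "PSF"

def negloglik_bkg(b):
    return -poisson.logpmf(counts, mu=b).sum()

def negloglik_src(params):
    b, s = params
    return -poisson.logpmf(counts, mu=b + s * shape).sum()

bkg_fit = minimize_scalar(negloglik_bkg, bounds=(1e-3, 50.0), method="bounded")
src_fit = minimize(negloglik_src, x0=[3.0, 5.0], bounds=[(1e-3, 50.0), (0.0, 100.0)])

lr = 2.0 * (bkg_fit.fun - src_fit.fun)     # larger => source hypothesis favoured
print(f"background rate: {bkg_fit.x:.2f}, source amplitude: {src_fit.x[1]:.2f}, "
      f"likelihood-ratio statistic: {lr:.2f}")
```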

  2. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

    It is well known that in some situations it is not easy to compute the likelihood function, as the datasets might be large or the model too complex. In such contexts the composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods is used as a procedure for solving the testing problem in the composite likelihood context. In this paper we intro...
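
    As a small illustration of the composite-likelihood construction described above (multiplying likelihoods of subsets of the variables), the sketch below evaluates a pairwise composite log-likelihood for an exchangeable multivariate normal working model, using bivariate margins only; the model, data and grid search are hypothetical.

```python
# Sketch of a pairwise composite log-likelihood: sum bivariate-margin log-densities
# over all variable pairs instead of evaluating the full joint likelihood.
import itertools
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
d, n, true_rho = 5, 400, 0.4
cov = np.full((d, d), true_rho) + (1 - true_rho) * np.eye(d)
data = rng.multivariate_normal(np.zeros(d), cov, size=n)

def pairwise_cl(rho):
    """Composite log-likelihood using all bivariate margins."""
    pair_cov = np.array([[1.0, rho], [rho, 1.0]])
    total = 0.0
    for i, j in itertools.combinations(range(d), 2):
        margin = multivariate_normal(mean=[0, 0], cov=pair_cov)
        total += margin.logpdf(data[:, [i, j]]).sum()
    return total

grid = np.linspace(0.05, 0.75, 15)
best = grid[np.argmax([pairwise_cl(r) for r in grid])]
print("maximum pairwise composite-likelihood estimate of rho:", round(best, 3))
```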

  3. Stochastic Modeling as a Means of Automatic Speech Recognition

    Science.gov (United States)

    1975-04-01

    posterior probability Pr(X[1:T] = x[1:T] | Y[1:T] = y[1:T], A, L, P, S), where A, L, P, S represent the acoustic-phonetic, lexical, phonological, and ... phonetic sequence by using multiple dictionary entries, phonological rules embedded in the dictionary, and a "degarbling" procedure. The search is ... 1) a statistical model of the language, 2) a phonemic dictionary and statistical phonological rules, 3) a phonetic matching algorithm, 4) word-level search

  4. 非线性半参数EV模型的最大经验似然估计%Maximum Empirical Likelihood Estimators in Nonlinear Semiparametric EV Regression Models

    Institute of Scientific and Technical Information of China (English)

    冯三营; 薛留根

    2012-01-01

    In this paper, we consider nonlinear semiparametric models with measurement error in the nonparametric part. When the error is ordinarily smooth, we obtain the maximum empirical likelihood estimators of the regression coefficient, the smooth function and the error variance by using the empirical likelihood method. The asymptotic normality and consistency of the proposed estimators are proved under appropriate conditions. The finite-sample performance of the proposed method is illustrated in a simulation study.

  5. On the method of the automatic modeling in hydraulic pipe networks

    Institute of Scientific and Technical Information of China (English)

    孙以泽; 徐本洲; 王祖温

    2003-01-01

    In this paper, the dynamic characteristics of pipes are analyzed with a frequency-domain method, and a simple and practical description method is put forward. By establishing the model library beforehand, the modeling of the pipe network is completed automatically, so that the impedance characteristics of the pipe network can be calculated accurately and a reasonable configuration of the pipe network can be achieved in order to decrease the pressure pulsation.

  6. Statistical Inference for Autoregressive Conditional Duration Models Based on Empirical Likelihood%基于经验似然的自回归条件久期模型的统计推断

    Institute of Scientific and Technical Information of China (English)

    韩玉; 金应华; 吴武清

    2013-01-01

    This paper addresses the statistical testing problem for autoregressive conditional duration (ACD) models based on an empirical likelihood method. We construct the log empirical likelihood ratio statistic for the parameters of the ACD model and show that the proposed statistic asymptotically follows a χ2-distribution. A numerical simulation demonstrates that the performance of the empirical likelihood method is better than that of the quasi-likelihood method.

  7. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented towards human readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between the two representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand them easily and at the same time they can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as a basis for the generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.

  8. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

    Directory of Open Access Journals (Sweden)

    Luan Yihui

    2009-09-01

    Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

  9. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    DEFF Research Database (Denmark)

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin;

    2012-01-01

    In this study, we propose a continuous-time Markov chain model for the availability of n-node clusters of the Distributed Rendering System. The model is an infinite one; we formalize it and, based on the model, implement a software tool which can automatically build the model in the PRISM language. With the tool, whenever the number of nodes n and the related parameters vary, we can create the PRISM model file rapidly and then use the PRISM model checker to verify related system properties. At the end of this study, we analyze and verify the availability distributions of the Distributed Cluster Rendering System.

  10. Introductory statistical inference with the likelihood function

    CERN Document Server

    Rohde, Charles A

    2014-01-01

    This textbook covers the fundamentals of statistical inference and statistical theory, including Bayesian and frequentist approaches and methodology, without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University’s Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians. After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...

  11. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  12. Maximum Likelihood Associative Memories

    OpenAIRE

    Gripon, Vincent; Rabbat, Michael

    2013-01-01

    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  13. An automatic synthesis method of compact models of integrated circuit devices based on equivalent circuits

    Science.gov (United States)

    Abramov, I. I.

    2006-05-01

    An automatic synthesis method for equivalent circuits of integrated circuit devices is described in this paper. The method is based on a physical approach to the construction of finite-difference approximations to the basic equations of semiconductor device physics. It allows compact equivalent circuits of different devices to be synthesized automatically, as an alternative to, for example, the rather formal BSIM2 and BSIM3 models used in circuit simulation programs of the SPICE type. The method is one possible variant of a general methodology for the automatic synthesis of compact equivalent circuits of almost arbitrary devices and circuit-type structures of micro- and nanoelectronics [1]. The method is easily extended, if necessary, to account for thermal effects in integrated circuits. It is shown that its application would be especially promising for the analysis of integrated circuit fragments as a whole and for the identification of significant collective physical effects, including parasitic effects in VLSI and ULSI. In the paper, examples illustrating the possibilities of the method for the automatic synthesis of compact equivalent circuits of semiconductor devices and integrated circuit devices are considered. Special attention is given to examples of integrated circuit devices on coarse spatial discretization grids (fewer than 10 nodes).

  14. Automatic Navigation for Rat-Robots with Modeling of the Human Guidance

    Institute of Scientific and Technical Information of China (English)

    Chao Sun; Nenggan Zheng; Xinlu Zhang; Weidong Chen; Xiaoxiang Zheng

    2013-01-01

    A bio-robot system refers to an animal equipped with a Brain-Computer Interface (BCI), through which outer stimulation is delivered directly into the animal's brain to control its behaviors. The development of bio-robots suffers from the dependency on real-time guidance by human operators. Because of its inherent difficulties, there is no feasible method for the automatic control of bio-robots yet. In this paper, we propose a new method to realize automatic navigation for bio-robots. A General Regression Neural Network (GRNN) is adopted to analyze and model the controlling procedure of human operations. Compared to traditional approaches with explicit controlling rules, our algorithm learns the controlling process and imitates the decision-making of human beings to steer the rat-robot automatically. In real-time navigation experiments, our method successfully controls bio-robots to follow given paths automatically and precisely. This work is significant for future applications of bio-robots and provides a new way to realize hybrid intelligent systems with artificial intelligence and natural biological intelligence combined together.
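
    A GRNN is essentially kernel-weighted regression of stored training outputs. The sketch below implements that core computation in NumPy on hypothetical (state, command) pairs, as a stand-in for the operator-guidance data described in the record above.

```python
# Minimal General Regression Neural Network (GRNN) sketch: predictions are a
# kernel-weighted average of stored training outputs. State/command pairs are
# hypothetical stand-ins for operator-guidance data.
import numpy as np

rng = np.random.default_rng(7)
states = rng.uniform(-1, 1, size=(200, 3))                      # e.g. heading error, distances
commands = np.sin(states[:, 0]) + 0.1 * rng.normal(size=200)    # operator action

def grnn_predict(x, X, y, sigma=0.2):
    """GRNN output: Nadaraya-Watson weighted mean of the training targets."""
    d2 = np.sum((X - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sum(w * y) / (np.sum(w) + 1e-12)

test_state = np.array([0.3, -0.2, 0.5])
print("predicted command:", round(grnn_predict(test_state, states, commands), 3))
```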

  15. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    Science.gov (United States)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, this uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of automatic fitting, on the basis of combining geostatistical principles and optimization techniques, to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can produce optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
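
    A compact sketch of the idea in the record above: fit a spherical variogram model to an experimental variogram by simulated annealing on a weighted least-squares objective. The experimental points, weights and cooling schedule are hypothetical simplifications of the paper's procedure.

```python
# Sketch: fit a spherical variogram (nugget, sill, range) to experimental points
# by simulated annealing on a weighted least-squares objective.
import numpy as np

rng = np.random.default_rng(11)
lags = np.linspace(5, 100, 20)
true = (0.1, 1.0, 60.0)                                    # nugget, sill, range

def spherical(h, nugget, sill, rng_a):
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h < rng_a, g, nugget + sill)

gamma_exp = spherical(lags, *true) + rng.normal(0, 0.05, lags.size)
weights = 1.0 / lags                                       # more weight at short lags

def objective(p):
    nugget, sill, rng_a = p
    if nugget < 0 or sill <= 0 or rng_a <= 0:
        return np.inf
    return np.sum(weights * (gamma_exp - spherical(lags, nugget, sill, rng_a)) ** 2)

current = np.array([0.5, 0.5, 30.0])
f_cur = objective(current)
best, best_f, temp = current.copy(), f_cur, 1.0
for _ in range(5000):
    cand = current + rng.normal(0, [0.05, 0.05, 2.0])      # random perturbation
    f_cand = objective(cand)
    if f_cand < f_cur or rng.random() < np.exp(-(f_cand - f_cur) / temp):
        current, f_cur = cand, f_cand
        if f_cur < best_f:
            best, best_f = current.copy(), f_cur
    temp *= 0.999                                          # geometric cooling

print("fitted (nugget, sill, range):", best.round(2))
```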

  16. Bond graph modeling, simulation, and reflex control of the Mars planetary automatic vehicle

    Science.gov (United States)

    Amara, Maher; Friconneau, Jean Pierre; Micaelli, Alain

    1993-01-01

    The bond graph modeling, simulation, and reflex control study of the Planetary Automatic Vehicle are considered. A simulator derived from a complete bond graph model of the vehicle is presented. This model includes both knowledge and representation models of the mechanical structure, the floor contact, and the Mars site. The MACSYMEN (French acronym for aided design method of multi-energetic systems) is used and applied to study the input-output power transfers. The reflex control is then considered. Controller architecture and locomotion specificity are described. A numerical stage highlights some interesting results of the robot and the controller capabilities.

  17. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    DEFF Research Database (Denmark)

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune

    2014-01-01

    sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated, and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model. The model was optimized using 50 subjects and validated on 76 subjects. Results: The optimized sleep model used six topics, and the topic probabilities changed smoothly during transitions. According to the manual scorings, the model achieved an overall subject-specific accuracy of 68.3 ± 7.44% (μ ± σ)
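
    The record above expresses each sleep epoch as a mixture of latent "topics" learned from discretized EEG/EOG features. The sketch below shows that modelling step only, with scikit-learn's LatentDirichletAllocation on hypothetical count features; the spectral feature extraction and the comparison with manual scorings are omitted.

```python
# Sketch of the data-driven step: express each epoch as a mixture of latent states
# with a topic model (LDA). The per-epoch count features below are random stand-ins
# for discretized spectral EEG/EOG measures.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(5)
n_epochs, n_features = 500, 30
counts = rng.poisson(lam=rng.uniform(0.5, 3.0, size=n_features),
                     size=(n_epochs, n_features))

lda = LatentDirichletAllocation(n_components=6, random_state=0)  # six latent states
epoch_mix = lda.fit_transform(counts)          # per-epoch topic probabilities

print("epoch 0 latent-state mixture:", epoch_mix[0].round(2),
      "sum:", epoch_mix[0].sum().round(2))
```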

  18. Regular algorithm for the automatic refinement of the spectral characteristics of acoustic finite element models

    Science.gov (United States)

    Suvorov, A. S.; Sokov, E. M.; V'yushkina, I. A.

    2016-09-01

    A new method is presented for the automatic refinement of finite element models of complex mechanical-acoustic systems using the results of experimental studies. The method is based on control of the spectral characteristics via selection of the optimal distribution of adjustments to the stiffness of a finite element mesh. The results of testing the method are given to show the possibility of its use to significantly increase the simulation accuracy of vibration characteristics of bodies with arbitrary spatial configuration.

  19. Maximum Likelihood Methods in Treating Outliers and Symmetrically Heavy-Tailed Distributions for Nonlinear Structural Equation Models with Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Xia, Ye-Mao

    2006-01-01

    By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…

  20. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  1. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  2. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a density which is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  3. Automatic parametrization of implicit solvent models for the blind prediction of solvation free energies

    CERN Document Server

    Wang, Bao; Wei, Guowei

    2016-01-01

    In this work, a systematic protocol is proposed to automatically parametrize implicit solvent models with polar and nonpolar components. The proposed protocol utilizes the classical Poisson model or the Kohn-Sham density functional theory (KSDFT) based polarizable Poisson model for modeling polar solvation free energies. For the nonpolar component, either the standard model of surface area, molecular volume, and van der Waals interactions, or a model with atomic surface areas and molecular volume is employed. Based on the assumption that similar molecules have similar parametrizations, we develop scoring and ranking algorithms to classify solute molecules. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the Poisson model. A large database with 668 experimental data points is utilized to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five s...

  4. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Science.gov (United States)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is proposed, together with its application to the adjustment of the automatic control systems (ACS) of production facilities, using a coal processing plant as an example.

  5. LanHEP - a package for automatic generation of Feynman rules in gauge models

    CERN Document Server

    Semenov, A Yu

    1996-01-01

    We consider the general problem of deriving the Feynman rules for matrix elements in momentum representation from a given Lagrangian in coordinate space that is invariant under the transformations of some gauge group. The LanHEP package presented in this paper allows one to define the gauge model Lagrangian in canonical form in a convenient way and then to generate the Feynman rules automatically; these can be used in subsequent calculations of physical processes by means of the CompHEP package. A detailed description of the LanHEP commands is given, and several examples of LanHEP applications (QED, QCD, the Standard Model in the 't Hooft-Feynman gauge) are presented.

  6. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    Science.gov (United States)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    This article addresses the problem of machining accuracy for the basing holes of automatically replaceable cubical units (carriers) in reconfigurable manufacturing systems with low-waste production (RMS). Results of modeling the machining of the basing holes of automatically replaceable units, based on dimensional chain analysis, are presented. The influence of the machining parameters on the accuracy of the center-to-center spacing between basing holes is shown. A mathematical model of the machining accuracy of the carriers' basing holes is proposed.

  7. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.
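
    IC-CAP itself is proprietary, but the core extraction step described above, minimizing the discrepancy between simulated and measured characteristics, can be illustrated with an ordinary least-squares fit. The square-law drain-current model, the parameter names K and Vth, and the synthetic "measured" data below are assumptions for illustration only, not the paper's Verilog-A compact model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "compact model": square-law saturation current, Id = K * (Vg - Vth)^2 for Vg > Vth.
def id_model(params, vg):
    k, vth = params
    return k * np.clip(vg - vth, 0.0, None) ** 2

# Synthetic transfer characteristic standing in for instrument measurements.
vg = np.linspace(0.0, 1.5, 31)
rng = np.random.default_rng(3)
id_meas = id_model([2e-4, 0.45], vg) + rng.normal(0, 2e-6, vg.size)

# Extraction = minimize the model/measurement discrepancy over the parameters.
fit = least_squares(lambda p: id_model(p, vg) - id_meas, x0=[1e-4, 0.3])
print("extracted K, Vth:", fit.x)
```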

  8. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
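
    MCEM2 itself combines rare-event simulation with MCEM; as a much smaller illustration of the Monte Carlo EM idea it builds on, the sketch below estimates the rate of a pure-birth (Poisson) process that is observed only once, through binomial thinning. The observation model, detection probability, and all numbers are assumptions made for illustration, not the paper's benchmark systems.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

def mcem_birth_rate(y_obs, T, p_detect, k0=1.0, iters=30, n_sims=5000):
    """Toy Monte Carlo EM for the rate k of a pure-birth (Poisson) process observed
    once at time T through binomial thinning: y ~ Binomial(N(T), p_detect)."""
    k = k0
    for _ in range(iters):
        # E-step: simulate latent counts from the current model and weight them by how
        # well they explain the observation (importance-sampling estimate of E[N | y]).
        n_latent = rng.poisson(k * T, size=n_sims)
        w = binom.pmf(y_obs, n_latent, p_detect)
        if w.sum() == 0:   # the rare-event situation MCEM2 tackles with cross-entropy tilting
            continue
        e_n = np.average(n_latent, weights=w)
        # M-step: the complete-data MLE of a Poisson-process rate is (event count) / (time).
        k = e_n / T
    return k

print(mcem_birth_rate(y_obs=8, T=5.0, p_detect=0.5))   # converges near 8 / (0.5 * 5) = 3.2
```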

  9. Augmented Likelihood Image Reconstruction.

    Science.gov (United States)

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.

  10. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic framework of texture reconstruction to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation, and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D textures to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with exterior orientation and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created textures without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation. It is demonstrated that the proposed framework is effective and useful for the automatic texture reconstruction of 3D city models.

  11. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Science.gov (United States)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semi-automatic, or fully automated). While it is obvious that 3D building models are important components of many applications, economical and automatic techniques for their generation that take advantage of the available multi-sensory data and combine processing strategies are still lacking. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data-driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  12. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    Science.gov (United States)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  13. AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    T. Partovi

    2013-09-01

    Full Text Available Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted a lot of interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method that achieves high-quality models and complete automation from such a DSM, this paper proposes a new method based on a model-driven strategy. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of the method is based on ridge line extraction and the analysis of height values in the direction of, and perpendicular to, the ridge line. After pre-processing the orthorectified data, feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points, and these ridge lines are then refined by matching them or closing gaps. To select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted by Canny edge detection and parameters are derived from the roof parts. The best model of the detected type is then fitted to the extracted roofs. Each roof is modelled independently, and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.

  14. Likelihood in the selection of models for spatial prediction

    Directory of Open Access Journals (Sweden)

    Cristiano Nunes Nesi

    2013-04-01

    ... the area from the available measurements on 48 experimental plots located in Xanxerê/SC, with emphasis on the methodological framework. Choices of covariates in the model and of data transformations define four modeling options to be assessed. The Matérn correlation function was used, evaluated at the values 0.5, 1.5 and 2.5 of the smoothness parameter. Models were compared by the maximized logarithm of the likelihood function and also by cross validation. The model with transformed response variable, including the coordinates of the area as covariates and the value 0.5 for the smoothness parameter, was selected. The cross-validation measures did not add relevant information to the likelihood, and the analysis highlights that care must be taken with globally or locally atypical data, as well as the need for an objective choice among different candidate models, which ought to be the focus of geostatistical modeling to ensure results compatible with reality.

  15. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard IC-CAP package is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data allows us to obtain the model parameter values and the correlations/relations between parameters. The model and parameter extraction techniques have been used to study the impact of the parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. The scenario is much the same with respect to the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current transistor characteristics of Trigate MOSFETs shows good agreement.

  16. A 6D CAD Model for the Automatic Assessment of Building Sustainability

    Directory of Open Access Journals (Sweden)

    Ping Yung

    2014-08-01

    Full Text Available Current building assessment methods limit themselves to environmental impact, failing to consider the other two aspects of sustainability: the economic and the social. They tend to be complex and costly to run, and therefore are of limited value in comparing design options. This paper proposes and develops a model for the automatic life-cycle sustainability assessment of a building with the building information modelling (BIM) approach and its enabling technologies. A 6D CAD model is developed which could be used as a design aid instead of as a post-construction evaluation tool. 6D CAD includes 3D design as well as a fourth dimension (schedule), a fifth dimension (cost) and a sixth dimension (sustainability). The model can automatically derive quantities (5D), calculate economic (5D and 6D), environmental and social impacts (6D), and evaluate the sustainability performance of alternative design options. The sustainability assessment covers the life cycle stages of a building, namely material production, construction, operation, maintenance, demolition and disposal.

  17. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    Energy Technology Data Exchange (ETDEWEB)

    Mazzurana, M; Sandrini, L; Vaccari, A; Malacarne, C; Cristoforetti, L; Pontalti, R [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy)]

    2003-10-07

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions to correlate image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity, even in the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  18. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines to avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM) is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through the morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra's algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
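
    A toy version of the least-cost-path step, using plain Dijkstra on a synthetic elevation grid in place of the OESM-based cost function described above; the 4-connectivity, the unit step cost, and the elevation penalty are assumptions, not the paper's cost definition.

```python
import heapq
import numpy as np

def seamline(dsm, start, goal):
    """Least-cost 4-connected path across a DSM-like elevation grid; higher ground
    costs more, so the path is steered toward low areas between tall objects."""
    rows, cols = dsm.shape
    cost, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        c, node = heapq.heappop(pq)
        if node == goal:
            break
        if c > cost.get(node, np.inf):
            continue
        r, col = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, col + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                nc = c + 1.0 + dsm[nb]          # elevation acts as the traversal penalty
                if nc < cost.get(nb, np.inf):
                    cost[nb], prev[nb] = nc, node
                    heapq.heappush(pq, (nc, nb))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

dsm = np.zeros((20, 20)); dsm[5:15, 8:12] = 30.0     # a "building" block the path should avoid
print(seamline(dsm, (0, 0), (19, 19))[:5])
```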

  19. Highly accurate SVM model with automatic feature selection for word sense disambiguation

    Institute of Scientific and Technical Information of China (English)

    王浩; 陈贵林; 吴连献

    2004-01-01

    A novel algorithm for word sense disambiguation (WSD) based on an SVM model improved with automatic feature selection is introduced. This learning method employs rich contextual features to predict the proper senses of specific words. Experimental results show that this algorithm can achieve excellent performance on the dataset released during the SENSEVAL-2 competition. We present the results obtained and discuss the application of this algorithm to other languages such as Chinese. Experimental results on a Chinese corpus show that our algorithm achieves an accuracy of 70.0% even with small training data.
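
    The abstract gives no implementation details, so the sketch below only illustrates the general pattern of an SVM word-sense classifier with automatic feature selection, here via scikit-learn's SelectKBest over bag-of-words context features on a tiny invented "bank" example; the corpus, the value of k, and the classifier settings are all assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Tiny toy corpus: contexts of the ambiguous word "bank" labelled with its sense.
contexts = [
    "deposit money at the bank branch", "the bank approved the loan",
    "river bank covered with grass", "fishing from the muddy bank",
    "bank account interest rates", "steep bank of the stream",
]
senses = ["finance", "finance", "river", "river", "finance", "river"]

# Bag-of-words context features -> chi-squared feature selection -> linear SVM.
clf = make_pipeline(CountVectorizer(), SelectKBest(chi2, k=5), LinearSVC())
print(cross_val_score(clf, contexts, senses, cv=3).mean())
```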

  20. SAR Automatic Target Recognition Based on Numerical Scattering Simulation and Model-based Matching

    Directory of Open Access Journals (Sweden)

    Zhou Yu

    2015-12-01

    Full Text Available This study proposes a model-based Synthetic Aperture Radar (SAR) automatic target recognition algorithm. Scattering is computed offline using the laboratory-developed Bidirectional Analytic Ray Tracing software and the same system parameter settings as the Moving and Stationary Target Acquisition and Recognition (MSTAR) datasets. SAR images are then created from the simulated electromagnetic scattering data. Shape features are extracted from the measured and simulated images, and then matches are searched. The algorithm is verified using three types of targets from the MSTAR data and simulated SAR images, and it is shown that the proposed approach is fast and easy to implement with high accuracy.

  1. A Model for Semi-Automatic Composition of Educational Content from Open Repositories of Learning Objects

    Directory of Open Access Journals (Sweden)

    Paula Andrea Rodríguez Marín

    2014-04-01

    Full Text Available Learning object (LO) repositories are important in building educational content and should allow search, retrieval and composition processes to be carried out successfully in order to reach educational goals. However, such processes are very time-consuming and do not always provide the desired results. Thus, the aim of this paper is to propose a model for the semi-automatic composition of LOs, which are automatically recovered from open repositories. For the development of the model, various text similarity measures are discussed, while for calibration and validation some comparison experiments were performed using results obtained by teachers. Experimental results show that when using a value of k (the number of LOs selected) of at least 3, the percentage of similarity between the model's selections and those made by experts exceeds 75%. To conclude, it can be established that the proposed model allows teachers to save time and effort in LO selection by performing a pre-filtering process.

  2. Automatic method for building indoor boundary models from dense point clouds collected by laser scanners.

    Science.gov (United States)

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-11-22

    In this paper we present a method that automatically yields Boundary Representation (B-rep) models for indoor environments after processing dense point clouds collected by laser scanners from key locations throughout an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners, yielding promising results. We have evaluated the results in depth by analyzing how reliably these elements can be detected and how accurately they are modeled.

  3. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal conditions and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.

  4. Semi-automatic registration of 3D orthodontics models from photographs

    Science.gov (United States)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration based on photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla may be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.

  5. Automatic Gauge Control in Rolling Process Based on Multiple Smith Predictor Models

    Directory of Open Access Journals (Sweden)

    Jiangyun Li

    2014-01-01

    Full Text Available The automatic rolling process is a high-speed system that requires high-speed control and communication capabilities. It is also a typical complex electromechanical system, and distributed control has become the mainstream computer control architecture for rolling mills. Generally, the control system adopts a two-level structure, basic automation (Level 1) and process control (Level 2), to achieve automatic gauge control. In Level 1, there is always a certain distance between the roll gap of each stand and the thickness measurement point, leading to a time delay in gauge control. The Smith predictor is a method for coping with time-delay systems, but practical feedback control based on the traditional Smith predictor cannot achieve the ideal control result, because the time delay is hard to measure precisely and in some situations it may vary within a certain range. In this paper, based on the adaptive Smith predictor, we employ multiple models to cover the uncertainty of the time delay. The optimal model is selected by the proposed switching mechanism. Simulations show that the proposed multiple-Smith-model method exhibits excellent performance in improving the control result, even for systems with jumping time delay.
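
    A minimal discrete-time sketch of the classical single-model Smith predictor that the paper builds on (the paper's contribution is the adaptive, multiple-model switching layer). The first-order plant, the assumed 10-sample transport delay between the roll gap and the gauge, and the PI gains are illustrative assumptions.

```python
import numpy as np

# Discrete first-order plant y[k+1] = a*y[k] + b*u[k-d]; the delay d stands in for the
# transport delay between the roll gap and the downstream thickness gauge.
a, b, d = 0.9, 0.1, 10
kp, ki = 2.0, 0.15                       # PI gains tuned for the delay-free model

def run(steps=300, setpoint=1.0, use_smith=True):
    y = 0.0                              # plant output (thickness deviation)
    yp, yp_delayed = 0.0, np.zeros(d)    # internal delay-free model and its delayed copy
    u_hist = np.zeros(d)                 # control inputs still "in transit" through the delay
    integ, out = 0.0, []
    for _ in range(steps):
        # Smith predictor feedback: measured output corrected by the model mismatch,
        # plus the delay-free prediction, so the PI controller "sees" no dead time.
        fb = (y - yp_delayed[0] + yp) if use_smith else y
        e = setpoint - fb
        integ += e
        u = kp * e + ki * integ
        y = a * y + b * u_hist[0]                    # plant receives the delayed input
        yp_next = a * yp + b * u                     # internal model receives the fresh input
        yp_delayed = np.append(yp_delayed[1:], yp)   # shift the model-output delay line
        u_hist = np.append(u_hist[1:], u)            # shift the input delay line
        yp = yp_next
        out.append(y)
    return np.array(out)

# With the predictor the loop settles near the setpoint; without it these gains are
# too aggressive for a 10-sample dead time and the loop goes unstable.
print(run(use_smith=True)[-1], run(use_smith=False)[-1])
```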

  6. A chest-shape target automatic detection method based on Deformable Part Models

    Science.gov (United States)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction both domestically and overseas; they need to search quickly for the object to be engaged against a complex background, so fast detection of a given target is the foundation for further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper takes it as the target of interest and studies an automatic detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts, yielding a root filter and part filters. Finally, the algorithm detects the target on the HOG feature pyramid with a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) in C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime library for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in practical systems.

  7. Modeling the impact of hepatitis C viral clearance on end-stage liver disease in an HIV co-infected cohort with targeted maximum likelihood estimation.

    Science.gov (United States)

    Schnitzer, Mireille E; Moodie, Erica E M; van der Laan, Mark J; Platt, Robert W; Klein, Marina B

    2014-03-01

    Despite modern effective HIV treatment, hepatitis C virus (HCV) co-infection is associated with a high risk of progression to end-stage liver disease (ESLD) which has emerged as the primary cause of death in this population. Clinical interest lies in determining the impact of clearance of HCV on risk for ESLD. In this case study, we examine whether HCV clearance affects risk of ESLD using data from the multicenter Canadian Co-infection Cohort Study. Complications in this survival analysis arise from the time-dependent nature of the data, the presence of baseline confounders, loss to follow-up, and confounders that change over time, all of which can obscure the causal effect of interest. Additional challenges included non-censoring variable missingness and event sparsity. In order to efficiently estimate the ESLD-free survival probabilities under a specific history of HCV clearance, we demonstrate the double-robust and semiparametric efficient method of Targeted Maximum Likelihood Estimation (TMLE). Marginal structural models (MSM) can be used to model the effect of viral clearance (expressed as a hazard ratio) on ESLD-free survival and we demonstrate a way to estimate the parameters of a logistic model for the hazard function with TMLE. We show the theoretical derivation of the efficient influence curves for the parameters of two different MSMs and how they can be used to produce variance approximations for parameter estimates. Finally, the data analysis evaluating the impact of HCV on ESLD was undertaken using multiple imputations to account for the non-monotone missing data.

  8. Estimating nonlinear dynamic equilibrium economies: a likelihood approach

    OpenAIRE

    2004-01-01

    This paper presents a framework to undertake likelihood-based inference in nonlinear dynamic equilibrium economies. The authors develop a sequential Monte Carlo algorithm that delivers an estimate of the likelihood function of the model using simulation methods. This likelihood can be used for parameter estimation and for model comparison. The algorithm can deal both with nonlinearities of the economy and with the presence of non-normal shocks. The authors show consistency of the estimate and...
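
    The algorithm described above is a sequential Monte Carlo (particle filter) evaluation of the likelihood; the sketch below shows that idea for a small invented nonlinear state-space model rather than an actual equilibrium economy, so the transition equation, noise levels, and particle count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear state-space model standing in for an economy's state equation:
#   x_t = 0.7 x_{t-1} + sin(x_{t-1}) + w_t,  w_t ~ N(0, 0.5^2)
#   y_t = x_t + v_t,                         v_t ~ N(0, 1)
def simulate(T=100):
    x, ys = 0.0, []
    for _ in range(T):
        x = 0.7 * x + np.sin(x) + rng.normal(0, 0.5)
        ys.append(x + rng.normal(0, 1.0))
    return np.array(ys)

def pf_loglik(y, sigma_w=0.5, sigma_v=1.0, n_particles=2000):
    """Bootstrap particle filter: propagate particles through the transition density,
    weight them by the measurement density, resample, and accumulate the log of the
    mean weights, the standard simulation-based estimate of the log-likelihood."""
    particles = np.zeros(n_particles)
    loglik = 0.0
    for obs in y:
        particles = 0.7 * particles + np.sin(particles) + rng.normal(0, sigma_w, n_particles)
        w = np.exp(-0.5 * ((obs - particles) / sigma_v) ** 2) / (sigma_v * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean())
        particles = rng.choice(particles, size=n_particles, p=w / w.sum())  # multinomial resampling
    return loglik

y = simulate()
print(pf_loglik(y))   # evaluate over a parameter grid (or inside an optimizer) for ML estimation
```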

  9. Automatic versus manual model differentiation to compute sensitivities and solve non-linear inverse problems

    Science.gov (United States)

    Elizondo, D.; Cappelaere, B.; Faure, Ch.

    2002-04-01

    Emerging tools for automatic differentiation (AD) of computer programs should be of great benefit for the implementation of many derivative-based numerical methods such as those used for inverse modeling. The Odyssée software, one such tool for Fortran 77 codes, has been tested on a sample model that solves a 2D non-linear diffusion-type equation. Odyssée offers both the forward and the reverse differentiation modes, that produce the tangent and the cotangent models, respectively. The two modes have been implemented on the sample application. A comparison is made with a manually-produced differentiated code for this model (MD), obtained by solving the adjoint equations associated with the model's discrete state equations. Following a presentation of the methods and tools and of their relative advantages and drawbacks, the performances of the codes produced by the manual and automatic methods are compared, in terms of accuracy and of computing efficiency (CPU and memory needs). The perturbation method (finite-difference approximation of derivatives) is also used as a reference. Based on the test of Taylor, the accuracy of the two AD modes proves to be excellent and as high as machine precision permits, a good indication of Odyssée's capability to produce error-free codes. In comparison, the manually-produced derivatives (MD) sometimes appear to be slightly biased, which is likely due to the fact that a theoretical model (state equations) and a practical model (computer program) do not exactly coincide, while the accuracy of the perturbation method is very uncertain. The MD code largely outperforms all other methods in computing efficiency, a subject of current research for the improvement of AD tools. Yet these tools can already be of considerable help for the computer implementation of many numerical methods, avoiding the tedious task of hand-coding the differentiation of complex algorithms.
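
    Odyssée generates tangent (forward) and cotangent (reverse) Fortran code; as a language-neutral illustration of what the tangent mode computes, here is a small forward-mode AD sketch using dual numbers, compared against a finite-difference (perturbation) estimate. The toy function and the step size are assumptions, not the article's diffusion model.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:
    """Dual number (value, derivative) implementing forward-mode ("tangent") AD."""
    val: float
    dot: float = 0.0
    def __add__(self, o): o = _lift(o); return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o): o = _lift(o); return Dual(self.val * o.val,
                                                    self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def _lift(x):
    return x if isinstance(x, Dual) else Dual(float(x))

def dsin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)   # chain rule for sin

def model(theta):
    """Toy model output whose sensitivity to theta we want: f(theta) = 3*theta^2 + sin(theta)."""
    th = _lift(theta)
    return 3.0 * th * th + dsin(th)

t = 1.3
exact = model(Dual(t, 1.0)).dot                 # tangent mode: seed d(theta) = 1
h = 1e-6
fd = (model(t + h).val - model(t).val) / h      # perturbation method for comparison
print(exact, 6 * t + math.cos(t), fd)           # AD matches the analytic derivative exactly
```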

  10. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    Energy Technology Data Exchange (ETDEWEB)

    Aristovich, K Y; Khan, S H, E-mail: kirill.aristovich.1@city.ac.u [School of Engineering and Mathematical Sciences, City University London, Northampton Square, London EC1V 0HB (United Kingdom)

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI). The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide range of methods of analysis, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte Carlo simulations, etc. The generic model-building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  11. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    Science.gov (United States)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling has increased rapidly due to improved geo-web services and the popularity of smart devices. Nowadays, 3D urban models provided by, for example, Google Earth use aerial photos for 3D urban modelling, but there are some limitations: immediate updates when building models change are difficult, many buildings lack 3D models and textures, and large resources for maintenance and updating are inevitable. To resolve the limitations mentioned above, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and we analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing edge-length ratios; the results showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  12. Modeling Technology for Automatic Test System Software Based on the Automatic Test Markup Language (ATML) Standard

    Institute of Scientific and Technical Information of China (English)

    杨占才; 王红; 范利花; 张桂英; 杨小辉

    2013-01-01

    All the model definitions, the system architecture, and the description methods of the ATML standard are discussed. Several key technical approaches for making an existing automatic test system (ATS) software platform compatible with the ATML standard are presented, such as the design of the modeling flow, model identification, and the model execution flow. This lays the technical foundation for achieving a general and open ATS software platform and for sharing test resources across all maintenance levels of weapon equipment.

  13. A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing

    Institute of Scientific and Technical Information of China (English)

    Yong Wu; Arun Kumar

    2012-01-01

    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors, and the workload is automatically and evenly distributed among all processors by alternative message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of the computation so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.

  14. Automatic Generation of Building Models with Levels of Detail 1-3

    Science.gov (United States)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets employing (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  15. AN EXPERT APPROACH ON AUTOMATIC SOLID MODEL RECONSTRUCTION FROM 2D PROJECTIONS

    Directory of Open Access Journals (Sweden)

    İsmail ŞAHİN

    2008-02-01

    Full Text Available This paper examines how to automatically reconstruct three-dimensional (3D) models from their two or three orthographic views and explains a new approach developed for that purpose. The approach is based on the identification of geometric features through the interpretation of 2D views, their volumetric intersection, and the reconstruction of solid models. A number of rules have been defined for this goal and implemented in prototype software following an expert-system approach. The developed software allows the efficient determination of features such as slots, holes, blind holes, and closed prismatic holes. Another contribution of this research is the reconstruction of solid models from their full-section and half-section views, which is almost nonexistent in the related literature.

  16. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Richard Washington

    2008-11-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantage over other methods (such as neural networks) is their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for the interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.

  17. Evaluation of Bayesian source estimation methods with Prairie Grass observations and Gaussian plume model: A comparison of likelihood functions and distance measures

    Science.gov (United States)

    Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko

    2017-03-01

    Source term estimation for atmospheric dispersion deals with estimating the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former case and six different distance measures in the latter case are compared for each component of the source parameter vector, based on the Nemenyi test over all 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time. The ABC method is also improved in several respects. Results show that the discrepancy measures, which refer collectively to likelihood functions and distance measures, have a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
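
    A compact illustration of the likelihood-free (ABC rejection) branch described above, with a heavily simplified ground-level Gaussian plume kernel standing in for the real dispersion model; the plume formula, the priors, the receptor grid, the log-RMSE distance, and the acceptance fraction are all assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

def gaussian_plume(q, x0, y0, xs, ys, u=3.0, sigma=25.0):
    """Very simplified ground-level Gaussian plume: emission rate q at (x0, y0),
    wind along +x with speed u, constant crosswind dispersion sigma."""
    dx = np.clip(xs - x0, 1e-3, None)            # only downwind receptors see the plume
    return q / (2 * np.pi * u * sigma**2) * np.exp(-((ys - y0) ** 2) / (2 * sigma**2)) / dx

# Receptor layout and synthetic "observations" from a true source (q = 50, y0 = 10).
xs = np.tile(np.arange(50, 550, 100), 5).astype(float)
ys = np.repeat(np.arange(-100, 150, 50), 5).astype(float)
obs = gaussian_plume(50.0, 0.0, 10.0, xs, ys) * rng.lognormal(0, 0.2, xs.size)

# ABC rejection: draw candidate sources from the prior, keep the draws whose simulated
# concentrations are closest to the observations under the chosen distance measure.
n_draws, keep_frac = 20000, 0.01
q_c = rng.uniform(1, 200, n_draws)
y0_c = rng.uniform(-100, 100, n_draws)
sims = np.array([gaussian_plume(q, 0.0, y0, xs, ys) for q, y0 in zip(q_c, y0_c)])
dist = np.sqrt(((np.log(sims + 1e-12) - np.log(obs + 1e-12)) ** 2).mean(axis=1))  # log-RMSE
keep = dist.argsort()[: int(keep_frac * n_draws)]
print("approximate posterior mean of q, y0:", q_c[keep].mean(), y0_c[keep].mean())
```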

  18. Automatic detection of the belt-like region in an image with variational PDE model

    Institute of Scientific and Technical Information of China (English)

    Shoutao Li; Xiaomao Li; Yandong Tang

    2007-01-01

    In this paper, we propose a novel method to automatically detect belt-like objects, such as highways, rivers, etc., in a given image based on the Mumford-Shah functional and the evolution of two phase curves. The method can automatically detect the two curves that form the boundaries of the belt-like object. In fact, this is a partition problem, and we model it as the energy minimization of a Mumford-Shah-based minimal partition problem, similar to an active contour model. With the Eulerian formulation, the partial differential equations (PDEs) of curve evolution are given, and the two curves stop on the desired boundaries. The stopping term does not depend on the gradient of the image, and the initial curves can be anywhere in the image. We also give a numerical algorithm using finite differences and present various experimental results. Compared with other methods, our method can directly detect the boundaries of belt-like objects as two continuous curves, even if the image is very noisy.

  19. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    Full Text Available This paper presents a novel method to solve for the initial lightning breakdown current by combining the ATP and MATLAB simulation software effectively, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, the executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of transmission line breakdown can be determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and vice versa. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously changing the lightning current amplitude, which is realized by a loop computing algorithm coded in MATLAB. The method proposed in this paper can generate the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
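
    The amplitude-adjustment loop described above can be organized as a bracketing (bisection) search once the breakdown threshold is enclosed; this is a sketch of that idea only. The `breaks_down` callback is hypothetical, standing in for one run of the generated ATP model plus the parsing of its LIS file, and the current range and tolerance are arbitrary.

```python
def critical_current(breaks_down, lo=0.0, hi=400.0, tol=0.5):
    """Bisection search for the smallest lightning current amplitude (kA) causing flashover,
    assuming breaks_down(amplitude) wraps one simulation run and returns True on breakdown."""
    assert not breaks_down(lo) and breaks_down(hi)   # threshold must be bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if breaks_down(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Hypothetical stand-in for "run the generated ATP model and parse the LIS file":
fake_threshold = 87.3
print(critical_current(lambda i_ka: i_ka >= fake_threshold))   # converges to about 87.3 kA
```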

  20. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    Science.gov (United States)

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues and has been used for better tumor volume definition of lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors that are located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  1. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Yu Guo

    2014-01-01

    Full Text Available The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information of human tissues and has been used for better tumor volume definition of lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors that are located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  2. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  3. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...

  4. Focusing on media body ideal images triggers food intake among restrained eaters: a test of restraint theory and the elaboration likelihood model.

    Science.gov (United States)

    Boyce, Jessica A; Kuijer, Roeline G

    2014-04-01

    Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images; whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literary interventions.

  5. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Venkat; Das, Trishna

    2016-05-01

    Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies look attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits from storage technologies in providing regulation service. The paper also presents the slow dynamics model of the power system integrated with storage technologies in a complete state space form. The storage technologies have been integrated into the IEEE 24-bus system with a single area, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.
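
    The abstract gives no state-space matrices, so the sketch below is only a generic single-area AGC toy model: swing dynamics, a combined governor/turbine lag, and a fast storage device approximated as a first-order lag acting on the frequency error. All parameter values are hypothetical and are not taken from the paper.

```python
# Generic single-area AGC sketch: swing dynamics + governor/turbine lag plus a
# fast storage device modelled as a first-order lag on the frequency error.
# Parameter values are hypothetical and not taken from the paper.
import numpy as np

H, D = 5.0, 1.0          # inertia constant (s), load damping (pu/Hz)
R, Tg = 0.05, 0.5        # droop, combined governor+turbine time constant (s)
Ks, Ts = 0.3, 0.1        # storage gain and time constant (s) -- the "fast" device
Ki = 0.3                 # integral (secondary control) gain
dt, T = 0.01, 30.0

df = dpm = dps = ace_int = 0.0   # states: freq deviation, mech power, storage power, ACE integral
dpl = 0.1                        # 0.1 pu step load increase at t = 0
log = []
for t in np.arange(0.0, T, dt):
    ace = -df                                   # single-area ACE ~ -(frequency deviation)
    ace_int += Ki * ace * dt
    # governor/turbine first-order lag toward droop + secondary setpoint
    dpm += dt / Tg * (-dpm + ace_int - df / R)
    # storage first-order lag toward a share of the regulation signal
    dps += dt / Ts * (-dps + Ks * ace)
    # swing equation
    df += dt / (2 * H) * (dpm + dps - dpl - D * df)
    log.append(df)

print(f"max frequency deviation: {min(log):.4f} pu")
```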

  6. Out-of-Bounds Array Access Fault Model and Automatic Testing Method Study

    Institute of Scientific and Technical Information of China (English)

    GAO Chuanping; DUAN Miyi; TAN Liqun; GONG Yunzhan

    2007-01-01

    Out-of-bounds array access (OOB) is one of the fault models commonly encountered in object-oriented programming languages. At present, the technique of code insertion and optimization is widely used to detect and fix this kind of fault. Although this method can detect some of the faults in OOB programs, it cannot test programs thoroughly, nor can it locate the faults precisely. The code insertion approach also makes the test procedures inefficient, so that testing becomes costly and time-consuming. This paper uses a special static testing technique to detect faults in OOB programs. We first establish the fault models for OOB programs and then develop an automatic test tool to detect the faults. Several experiments have been carried out, and the results show that the method proposed in this paper is efficient and feasible in practical applications.

  7. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the empirical likelihood inference for the jump-diffusion model. We construct the confidence intervals based on the empirical likelihood for the infinitesimal moments in the jump-diffusion models. They are better than the confidence intervals which are based on the asymptotic normality of point estimates.
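
    As a hedged illustration of the underlying machinery (not of the paper's infinitesimal-moment estimators), the sketch below computes an Owen-style empirical likelihood ratio for a simple mean and inverts the chi-square calibration into a confidence interval; the data are simulated.

```python
# Sketch of Owen-style empirical likelihood for a simple mean: -2 log of the EL
# ratio is chi-square(1) calibrated. This only illustrates the EL machinery,
# not the paper's estimators for jump-diffusion infinitesimal moments.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el_ratio(x, theta):
    d = x - theta
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                        # theta outside the convex hull of the data
    # Solve the dual equation sum d_i / (1 + lam*d_i) = 0 for the multiplier lam.
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    eps = 1e-10
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=200)                 # hypothetical sample
# 95% EL confidence interval for the mean by scanning a grid of candidate values.
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
inside = [t for t in grid if neg2_log_el_ratio(x, t) <= chi2.ppf(0.95, df=1)]
print(f"EL 95% CI for the mean: [{min(inside):.3f}, {max(inside):.3f}]")
```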

  8. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    Science.gov (United States)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg both with a command line and Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed a superb reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.
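
    The statistical (PCA) shape space and the constrained optimization of the paper are not reproduced here; the sketch below only illustrates the complex Fourier descriptor idea on a hypothetical closed contour, reconstructing it from a few low-order coefficients.

```python
# Sketch of the complex Fourier descriptor idea: a closed 2D contour is written
# as z(t) = x(t) + i*y(t), transformed with the FFT, and reconstructed from a
# handful of low-order coefficients. The shape model and optimization of the
# paper are not reproduced; the contour below is synthetic.
import numpy as np

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# Hypothetical closed contour (an ellipse with small bumps), sampled uniformly.
x = 2.0 * np.cos(t) + 0.2 * np.cos(3 * t)
y = 1.0 * np.sin(t) + 0.1 * np.sin(5 * t)
z = x + 1j * y

coeffs = np.fft.fft(z) / z.size          # complex Fourier descriptors

def reconstruct(coeffs, n_keep):
    """Keep only the n_keep lowest-frequency descriptors (plus their conjugates)."""
    kept = np.zeros_like(coeffs)
    kept[:n_keep + 1] = coeffs[:n_keep + 1]
    kept[-n_keep:] = coeffs[-n_keep:]
    return np.fft.ifft(kept * coeffs.size)

approx = reconstruct(coeffs, n_keep=5)
print(f"max reconstruction error with 5 harmonics: {np.abs(approx - z).max():.3f}")
```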

  9. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

    Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval in the web. Tags are simple pieces of data that usually consist of one or several words, and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and some automatic solutions should be used instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate
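
    PerTag's Persian-specific rules and its MapReduce implementation are not shown in the abstract; the sketch below is a single-machine stand-in for the tf-idf step, scoring word n-grams of a few hypothetical documents and keeping the top-scoring ones as tags.

```python
# Single-machine sketch of the tf-idf step: score word n-grams per document and
# keep the top-k as tags. PerTag's Persian rules and MapReduce implementation
# are not reproduced; the documents below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "machine learning improves web search ranking",
    "web page tagging with n-gram language models",
    "distributed computing with mapreduce clusters",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(docs)              # (n_docs, n_terms) sparse matrix
terms = vectorizer.get_feature_names_out()

def top_tags(doc_index, k=3):
    row = tfidf[doc_index].toarray().ravel()
    best = row.argsort()[::-1][:k]
    return [terms[i] for i in best if row[i] > 0]

for i, d in enumerate(docs):
    print(d, "->", top_tags(i))
```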

  10. Model design and simulation of automatic sorting machine using proximity sensor

    Directory of Open Access Journals (Sweden)

    Bankole I. Oladapo

    2016-09-01

    Full Text Available Automatic sorting has been reported to be a complex and global problem, because of the inability of sorting machines to incorporate flexibility in their design concept. This research therefore designed and developed an automated sorting system built around a conveyor belt. The developed automated sorting machine incorporates flexibility and separates species of non-ferrous metal objects, moving objects automatically to the basket as defined by the regulation of the Programmable Logic Controller (PLC), with a capacitive proximity sensor used to detect a value range of objects. The results obtained show that plastic, wood, and steel were sorted into their respective and correct positions with average sorting times of 9.903 s, 14.072 s and 18.648 s respectively. The proposed model could be adopted at any institution or industry whose practices are based on mechatronic engineering systems, to guide the industrial sector in the sorting of objects and to serve as a teaching aid for institutions, producing lists of classified materials according to the enabled sorting program commands.

  11. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
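
    In symbols, the composite likelihood idea described above is the standard textbook form below (not necessarily the paper's exact expression): the full likelihood is replaced by a product of per-region marginal likelihoods, and its logarithm by a sum.

```latex
% Composite likelihood over R loosely-dependent genomic regions.
\[
\mathcal{L}_C(\theta \mid \mathbf{x}) = \prod_{r=1}^{R} \mathcal{L}_r(\theta \mid \mathbf{x}_r),
\qquad
\ell_C(\theta) = \sum_{r=1}^{R} \log \mathcal{L}_r(\theta \mid \mathbf{x}_r),
\qquad
\hat{\theta}_{CL} = \arg\max_{\theta} \, \ell_C(\theta).
\]
```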

  12. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...
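
    For a likelihood that is Gaussian in the parameters, the analytic marginalization mentioned above reduces to a Schur complement of the Fisher matrix. The standard form (consistent with, but not copied from, the paper) is:

```latex
% Gaussian likelihood in the parameters (theta_i interesting, theta_n nuisance),
% with the Fisher matrix F partitioned accordingly; marginalizing the nuisance
% block with flat priors gives the Schur complement.
\[
-2\ln L(\boldsymbol{\theta}) \simeq
\begin{pmatrix}\delta\boldsymbol{\theta}_i \\ \delta\boldsymbol{\theta}_n\end{pmatrix}^{\!\top}
\begin{pmatrix} F_{ii} & F_{in} \\ F_{ni} & F_{nn} \end{pmatrix}
\begin{pmatrix}\delta\boldsymbol{\theta}_i \\ \delta\boldsymbol{\theta}_n\end{pmatrix}
+ \text{const},
\qquad
F_{\mathrm{marg}} = F_{ii} - F_{in} F_{nn}^{-1} F_{ni},
\qquad
\sigma^2(\theta_a) = \left(F_{\mathrm{marg}}^{-1}\right)_{aa}.
\]
```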

  13. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    Science.gov (United States)

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J

    2010-01-01

    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95% and an average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  14. Using Historical Data and Quasi-Likelihood Logistic Regression Modeling to Test Spatial Patterns of Channel Response to Peak Flows in a Mountain Watershed

    Science.gov (United States)

    Faustini, J. M.; Jones, J. A.

    2001-12-01

    This study used an empirical modeling approach to explore landscape controls on spatial variations in reach-scale channel response to peak flows in a mountain watershed. We used historical cross-section surveys spanning 20 years at five sites on 2nd to 5th-order channels and stream gaging records spanning up to 50 years. We related the observed proportion of cross-sections at a site exhibiting detectable change between consecutive surveys to the recurrence interval of the largest peak flow during the corresponding period using a quasi-likelihood logistic regression model. Stream channel response was linearly related to flood size or return period through the logit function, but the shape of the response function varied according to basin size, bed material, and the presence or absence of large wood. At the watershed scale, we hypothesized that the spatial scale and frequency of channel adjustment should increase in the downstream direction as sediment supply increases relative to transport capacity, resulting in more transportable sediment in the channel and hence increased bed mobility. Consistent with this hypothesis, cross sections from the 4th and 5th-order main stem channels exhibit more frequent detectable changes than those at two steep third-order tributary sites. Peak flows able to mobilize bed material sufficiently to cause detectable changes in 50% of cross-section profiles had an estimated recurrence interval of 3 years for the 4th and 5th-order channels and 4 to 6 years for the 3rd-order sites. This difference increased for larger magnitude channel changes; peak flows with recurrence intervals of about 7 years produced changes in 90% of cross sections at the main stem sites, but flows able to produce the same level of response at tributary sites were three times less frequent. At finer scales, this trend of increasing bed mobility in the downstream direction is modified by variations in the degree of channel confinement by bedrock and landforms, the
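
    The study's survey data are not reproduced here; the sketch below shows, on made-up numbers, how a quasi-likelihood (over-dispersed binomial) logistic fit of the proportion of changed cross-sections against log recurrence interval can be set up, including the recurrence interval at which half of the cross-sections are expected to change.

```python
# Sketch of a quasi-likelihood logistic fit: proportion of cross-sections with
# detectable change vs. log of peak-flow recurrence interval. The numbers are
# made up; the paper's data and site covariates are not reproduced.
import numpy as np
import statsmodels.api as sm

recurrence_yr = np.array([1.2, 1.8, 2.5, 3.0, 4.5, 6.0, 7.5, 10.0])
n_sections    = np.array([12,  12,  12,  12,  12,  12,  12,  12])
n_changed     = np.array([ 1,   3,   5,   6,   8,  10,  11,  12])

X = sm.add_constant(np.log(recurrence_yr))
# Binomial GLM on (successes, failures); estimating the scale from the Pearson
# chi-square gives the quasi-binomial (quasi-likelihood) treatment of dispersion.
model = sm.GLM(np.column_stack([n_changed, n_sections - n_changed]),
               X, family=sm.families.Binomial())
fit = model.fit(scale="X2")
print(fit.summary())

# Recurrence interval at which 50% of cross-sections are expected to change:
b0, b1 = fit.params
print("T50 ~", np.exp(-b0 / b1), "years")
```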

  15. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  16. Contour-based automatic crater recognition using digital elevation models from Chang'E missions

    Science.gov (United States)

    Zuo, Wei; Zhang, Zhoubin; Li, Chunlai; Wang, Rongwu; Yu, Linjie; Geng, Liang

    2016-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). Through DEM and image processing, this method can be used to reconstruct contour surfaces, extract and combine contour lines, set the characteristic parameters of crater morphology, and establish a crater pattern recognition program. The method has been tested and verified with DEM data from Chang'E-1 (CE-1) and Chang'E-2 (CE-2), showing a strong crater recognition ability with high detection rate, high robustness, and good adaptation to recognize various craters with different diameter and morphology. The method has been used to identify craters with high precision and accuracy on the Moon. The results meet requirements for supporting exploration and related scientific research for the Moon and planets.

  17. Hierarchical Model-Based Activity Recognition With Automatic Low-Level State Discovery

    Directory of Open Access Journals (Sweden)

    Justin Muncaster

    2007-09-01

    Full Text Available Activity recognition in video streams is increasingly important for both the computer vision and artificial intelligence communities. Activity recognition has many applications in security and video surveillance. Ultimately, in such applications one wishes to recognize complex activities, which can be viewed as combinations of simple activities. In this paper, we present a general framework of a D-level dynamic Bayesian network to perform complex activity recognition. The levels of the network are constrained to enforce a state hierarchy, while the Dth level models the duration of the simplest events. Moreover, in this paper we propose to use the deterministic annealing clustering method to automatically define the simple activities, which correspond to the low-level states of the observable levels in a dynamic Bayesian network. We used real data sets for experiments. The experimental results show the effectiveness of our proposed method.

  18. Automatic Detection of Repetitive Components in 3D Mechanical Engineering Models

    Directory of Open Access Journals (Sweden)

    Laixiang Wen

    2013-01-01

    Full Text Available We present an intelligent method to automatically detect repetitive components in 3D mechanical engineering models. In our work, a new Voxel-based Shape Descriptor (VSD) is proposed for effective matching, based on which a similarity function is defined. It uses the voxels intersecting with the 3D outline of a mechanical component as the feature descriptor. Because each mechanical component may have a different pose, alignment is needed before matching. For the alignment, we adopt a genetic algorithm to search for the optimal solution where the maximum global similarity is the objective. Two components are regarded as the same if the maximum global similarity exceeds a certain threshold. Note that the voxelization of components during feature extraction and the genetic algorithm for searching the maximum global similarity are entirely implemented on the GPU; the efficiency is improved significantly compared with a CPU implementation. Experimental results show that our method is more effective and efficient than existing methods for repetitive component detection.

  19. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Science.gov (United States)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage with complex architecture. After generating the 3D model of the object, it is difficult to perform exact texture mapping for the real object. We therefore aim to create seamless texture maps for a virtual heritage object of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions that are as uniform as we can make them. After preprocessing, images are registered on the 3D mesh in a semi-automatic way. We then divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image. An optimal correspondence between mesh patches and sections of the acquired images is built. Then, a smoothing approach based on texture blending is proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  20. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    CERN Document Server

    Melnikova, N B; Sloot, P M A

    2012-01-01

    The paper describes the concept and implementation details of integrating a finite element module for dike stability analysis, Virtual Dike, into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case - an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic calibration of diffusivities for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in a one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of the diffusivities.
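
    The abstract refers to analytical solutions for tidal propagation in a one-dimensional saturated aquifer; the classical damped harmonic solution typically used for such initial estimates (a textbook form, not necessarily the exact one implemented) is, in terms of the diffusivity D:

```latex
% Classical 1-D solution for a sinusoidal tide of amplitude A and angular
% frequency omega propagating into a semi-infinite aquifer of diffusivity D.
\[
h(x,t) = A \,\exp\!\left(-x\sqrt{\tfrac{\omega}{2D}}\right)
\cos\!\left(\omega t - x\sqrt{\tfrac{\omega}{2D}}\right),
\qquad
D = \frac{\omega}{2}\left(\frac{x}{\ln\!\left(A/A_x\right)}\right)^{2},
\]
```

    where A is the tidal amplitude at the boundary and A_x the observed amplitude at distance x, so the amplitude ratio seen at a pore-pressure sensor yields an initial estimate of D.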

  1. EMPIRICAL LIKELIHOOD DIMENSION REDUCTION INFERENCE IN NONLINEAR EV MODELS WITH VALIDATION DATA

    Institute of Scientific and Technical Information of China (English)

    方连娣; 胡凤霞

    2012-01-01

    In this article, we consider nonlinear error-in-response models with the help of validation data. Using semiparametric dimension reduction, we construct the estimated empirical likelihood and the adjusted empirical likelihood of the unknown parameter. It is shown that the estimated empirical log-likelihood has an asymptotic distribution given by a weighted sum of chi-square variables, while the adjusted empirical log-likelihood has an asymptotic standard chi-square distribution. The result can be used to construct confidence regions for the unknown parameter.

  2. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    Science.gov (United States)

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
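
    Fitmunk's optimizer is based on dead-end elimination; the standard Goldstein criterion (a textbook statement, not necessarily the exact variant implemented in the program) prunes rotamer i_r of residue i whenever some competing rotamer i_t of the same residue satisfies:

```latex
% Goldstein dead-end elimination: prune rotamer i_r if a competitor i_t always
% does at least as well, for self energies E(.) and pair energies E(.,.).
\[
E(i_r) - E(i_t) + \sum_{j \neq i} \min_{s}\bigl[\,E(i_r, j_s) - E(i_t, j_s)\,\bigr] > 0 .
\]
```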

  3. Multilevel spatial semantic model for urban house information extraction automatically from QuickBird imagery

    Science.gov (United States)

    Guan, Li; Wang, Ping; Liu, Xiangnan

    2006-10-01

    Based on an introduction to the characteristics and construction flow of the spatial semantic model, the feature space and context of house information in high-resolution remote sensing imagery are analyzed, and the house semantic network model for QuickBird imagery is constructed. Furthermore, the accuracy and practicability of the spatial semantic model are verified by extracting house information automatically from QuickBird imagery, after extracting candidate semantic nodes from the image using the grey-level division method, the window threshold method and the Hough transform. Sample results indicate that the type coherence, shape coherence and area coherence are 96.75%, 89.5% and 88% respectively. The extraction of houses with rectangular roofs performs best, and that of houses with herringbone and polygonal roofs is acceptable. However, the extraction of houses with round roofs is not satisfactory, and the semantic model needs further refinement to be of higher practical value.

  4. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    Science.gov (United States)

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting in low quality DEMs of various types.

  5. Fuzzy Time Series Forecasting Model Based on Automatic Clustering Techniques and Generalized Fuzzy Logical Relationship

    Directory of Open Access Journals (Sweden)

    Wangren Qiu

    2015-01-01

    Full Text Available Among techniques for constructing high-order fuzzy time series models, there are three types: those based on advanced algorithms, those based on computational methods, and those based on grouping the fuzzy logical relationships. The last type of model is easy to understand for decision makers who know nothing about fuzzy set theory or advanced algorithms. To deal with forecasting problems, this paper presents novel high-order fuzzy time series models, denoted as GTS(M, N), based on generalized fuzzy logical relationships and automatic clustering. The paper introduces the concept of the generalized fuzzy logical relationship and an operation for combining the generalized relationships. The procedure of the proposed model is then applied to forecasting enrollment data at the University of Alabama. To demonstrate its performance, the proposed approach is also applied to forecasting the Shanghai Stock Exchange Composite Index. Finally, the effects of the parameters M and N, the order, and the principal fuzzy logical relationships considered on the forecasting results are also discussed.

  6. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system. Consequently, it stands to reason that multiobjective functions are needed for the calibration of hydrologic models. Multiobjective algorithms based on the theory of nondominance are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize the daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding distance assigner is presented in this paper. Furthermore, the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set. A novel diversity performance metric, which is independent of the Pareto set size, is put forward in this research. The efficacy of the new algorithm MODE-CMCS is compared with that of the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for automatic calibration of the hydrologic model.

  7. AUTOMATIC TOPOLOGY DERIVATION FROM IFC BUILDING MODEL FOR IN-DOOR INTELLIGENT NAVIGATION

    Directory of Open Access Journals (Sweden)

    S. J. Tang

    2015-05-01

    Full Text Available With the goal of achieving accurate navigation within the building environment, it is critical to explore a feasible way of building the connectivity relationships among 3D geographical features, called the in-building topology network. Traditional topology construction approaches for indoor space are usually based on 2D maps or purely geometric models, which suffer from insufficient information. In particular, intelligent navigation for different applications depends mainly on the precise geometry and semantics of the navigation network. The problems caused by existing topology construction approaches can be alleviated by employing the IFC building model, which contains detailed semantic and geometric information. In this paper, we present a method that combines a straight medial axis transformation algorithm (S-MAT) with the IFC building model to reconstruct the indoor geometric topology network. The derived topology is aimed at facilitating decision making for different in-building navigation tasks. In this work, we describe a multi-step derivation process, including semantic cleaning, walkable feature extraction, multi-storey 2D mapping and S-MAT implementation, to automatically generate topology information from existing indoor building model data given in IFC.

  8. Sufficient Conditions of Weak Consistency of MQLE in Quasi-Likelihood Nonlinear Models

    Institute of Scientific and Technical Information of China (English)

    夏天; 李友光; 王学仁

    2011-01-01

    Quasi-likelihood nonlinear models (QLNM) include generalized linear models as a special case. This paper proposes some sufficient conditions for weak consistency of the maximum quasi-likelihood estimator (MQLE) in QLNM, in which the moment condition is weaker than that required for strong consistency of the MQLE in the existing literature.

  9. Modelling Diverse Soil Attributes with Visible to Longwave Infrared Spectroscopy Using PLSR Employed by an Automatic Modelling Engine

    Directory of Open Access Journals (Sweden)

    Veronika Kopačková

    2017-02-01

    Full Text Available The study tested a data mining engine (PARACUDA®) to predict various soil attributes (BC, CEC, BS, pH, Corg, Pb, Hg, As, Zn and Cu) using reflectance data acquired in both the optical and thermal infrared regions. The engine was designed to utilize large data sets with parallel and automatic processing to build and process hundreds of diverse models in a unified manner while avoiding bias and deviations caused by the operator(s). The system is able to systematically assess the effect of diverse preprocessing techniques; additionally, it analyses other parameters, such as different spectral resolutions and spectral coverages, that affect soil properties. Accordingly, the system was used to extract models across both the optical and thermal infrared spectral regions, which hold significant chromophores. In total, 2880 models were evaluated, where each model was generated with a different preprocessing scheme of the input spectral data. The models were assessed using statistical parameters such as the coefficient of determination (R2), the square error of prediction (SEP), the relative percentage difference (RPD) and by physical explanation (spectral assignments). It was found that the smoothing procedure is the most beneficial preprocessing stage, especially when combined with spectral derivation (1st or 2nd derivatives). Automatically, and without the need for an operator, the data mining engine enabled the best prediction models to be found from all the combinations tested. Furthermore, the data mining approach used in this study and its processing scheme proved to be efficient tools for gaining a better understanding of the geochemical properties of the samples studied (e.g., mineral associations).
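
    The full engine evaluates 2880 preprocessing/model combinations; the sketch below shows just one such combination on synthetic spectra, Savitzky-Golay smoothing with a first derivative followed by PLS regression, and reports the same R2/SEP/RPD statistics. Nothing here is the study's data or its exact pipeline.

```python
# One preprocessing/model combination of the kind the engine explores:
# Savitzky-Golay smoothing + 1st derivative, then PLS regression. The spectra
# and the soil attribute below are synthetic stand-ins, not the study's data.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n_samples, n_bands = 120, 300
spectra = np.cumsum(rng.normal(size=(n_samples, n_bands)), axis=1)   # smooth-ish curves
attribute = spectra[:, 120] - 0.5 * spectra[:, 240] + rng.normal(scale=0.5, size=n_samples)

# Preprocessing: smooth and take the first derivative along the spectral axis.
X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, X, attribute, cv=10).ravel()

r2 = r2_score(attribute, pred)
sep = np.sqrt(np.mean((attribute - pred) ** 2))
rpd = attribute.std(ddof=1) / sep
print(f"R2 = {r2:.2f}, SEP = {sep:.2f}, RPD = {rpd:.2f}")
```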

  10. Elaboration Likelihood Model in The Da Vinci Code

    Institute of Scientific and Technical Information of China (English)

    刘一琛

    2014-01-01

    The Elaboration Likelihood Model has been a leading theory of persuasion and attitude change. The Da Vinci Code is a film that met a largely negative critical response upon its release and has been studied from different angles. This thesis analyses the persuasion style in The Da Vinci Code in detail and shows how the code is understood through both the central and the peripheral routes of persuasion. In daily life, both routes can be put to good use in persuasion.

  11. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Science.gov (United States)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide swath altimetry mission will provide a global coverage of surface water elevation, which will be used to help correct water height and discharge prediction from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation using calibration of model parameters (like river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge on the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation.. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO that is considered as a computational cost saving alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through the multiplication of a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  12. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    Science.gov (United States)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic systems such as Automatic Target Recognition (ATR) is subject to sensor, target, and the environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrated sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data driven techniques to improve performance. In addition, DDDAS supports the need for modeling from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrants an integrated perspective. This elementary work is aimed at triggering a sequence of deeper insightful research towards exploiting sparsely sampled piecewise dense WAMI measurements - an application where the challenges of big-data with regards to mathematical fusion relationships and high-performance computations remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges with exponentially increasing data volume for advanced information fusion systems solutions such as simultaneous target tracking and ATR.

  13. Modelling the adoption of automatic milking systems in Noord-Holland

    Directory of Open Access Journals (Sweden)

    Matteo Floridi

    2013-05-01

    Full Text Available Innovation and new technology adoption represent two central elements for the business and industry development process in agriculture. One of the most relevant innovations in dairy farms is the robotisation of the milking process through the adoption of Automatic Milking Systems (AMS. The purpose of this paper is to assess the impact of selected Common Agricultural Policy measures on the adoption of AMS in dairy farms. The model developed is a dynamic farm-household model that is able to simulate the adoption of AMS taking into account the allocation of productive factors between on-farm and off-farm activities. The model simulates the decision to replace a traditional milking system with AMS using a Real Options approach that allows farmers to choose the optimal timing of investments. Results show that the adoption of AMS, and the timing of such a decision, is strongly affected by policy uncertainty and market conditions. The effect of this uncertainty is to postpone the decision to adopt the new technology until farmers have gathered enough information to reduce the negative effects of the technological lock-in. AMS adoption results in an increase in farm size and herd size due to the reduction in the labour required for milking operations.

  14. EXPERIMENTS WITH UAS IMAGERY FOR AUTOMATIC MODELING OF POWER LINE 3D GEOMETRY

    Directory of Open Access Journals (Sweden)

    G. Jóźków

    2015-08-01

    Full Text Available The ideal mapping technology for transmission line inspection is airborne LiDAR executed from helicopter platforms. It allows for full 3D geometry extraction in a highly automated manner. Large scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still expensive. UAS technology has the potential to reduce these costs, especially if using inexpensive platforms with consumer grade cameras. This study investigates the potential of using high resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of this experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points are also created on the wires. This allowed the 3D geometry of transmission lines to be modeled similarly to LiDAR acquired point clouds. Results showed that transmission line modeling is possible with a high internal accuracy for both horizontal and vertical directions, even when wires were represented by a partial (sparse) point cloud.
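
    As a small, hypothetical illustration of the wire-geometry step, the sketch below fits a catenary to noisy height samples along a single span; the paper itself works on dense image-matching point clouds rather than such toy data.

```python
# Illustration of the wire-geometry step: fit a catenary
# z(x) = z0 + c*(cosh((x - x0)/c) - 1) to noisy points along one span.
# The points are synthetic; they stand in for a dense-matched wire point cloud.
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, x0, z0, c):
    return z0 + c * (np.cosh((x - x0) / c) - 1.0)

rng = np.random.default_rng(3)
x = np.linspace(0.0, 200.0, 80)                        # metres along the span
z_true = catenary(x, x0=100.0, z0=25.0, c=600.0)       # true wire height
z_obs = z_true + rng.normal(scale=0.05, size=x.size)   # matching noise ~5 cm

params, _ = curve_fit(catenary, x, z_obs, p0=(90.0, 20.0, 500.0))
x0_fit, z0_fit, c_fit = params
sag = catenary(0.0, *params) - z0_fit                  # attachment height above the low point
print(f"fitted catenary parameter c = {c_fit:.1f} m, sag = {sag:.2f} m")
```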

  15. Automatic Sex Determination of Skulls Based on a Statistical Shape Model

    Directory of Open Access Journals (Sweden)

    Li Luo

    2013-01-01

    Full Text Available Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments are through subjective visual analysis by anthropologists or metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed, which projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metrical and morphological methods. It is easy to use without professional qualification and tedious manual measurement. With a group of Chinese skulls including 127 males and 81 females, we choose 92 males and 58 females to establish the discriminant model and validate the model with the other skulls. The correct rate is 95.7% and 91.4% for females and males, respectively. Leave-one-out test also shows that the method has a high accuracy.
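
    The construction of the statistical shape model from registered 3D skulls is not reproduced here; assuming the skulls are already in correspondence and flattened to feature vectors, the projection into a low-dimensional shape space followed by Fisher discriminant analysis looks like the sketch below, run on synthetic stand-in data.

```python
# Sketch of the classification step: project (already corresponded) skull shape
# vectors into a low-dimensional PCA shape space, then apply Fisher's linear
# discriminant. The data are synthetic stand-ins for registered 3D skulls.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_class, n_features = 100, 3000            # e.g. 1000 landmarks x 3 coordinates
base = rng.normal(size=n_features)
males   = base + 0.15 + rng.normal(scale=1.0, size=(n_per_class, n_features))
females = base - 0.15 + rng.normal(scale=1.0, size=(n_per_class, n_features))
X = np.vstack([males, females])
y = np.array([1] * n_per_class + [0] * n_per_class)

model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```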

  16. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    Science.gov (United States)

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.

    1978-01-01

    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.

  17. Automatic 3D modelling of metal frame connections from LiDAR data for structural engineering purposes

    Science.gov (United States)

    Cabaleiro, M.; Riveiro, B.; Arias, P.; Caamaño, J. C.; Vilán, J. A.

    2014-10-01

    The automatic generation of 3D as-built models from LiDAR data is a topic where significant progress has been made in recent years. This paper describes a new method for the detection and automatic 3D modelling of frame connections and the formation of profiles comprising a metal frame from LiDAR data. The method has been developed using an approach to create 2.5D density images for subsequent processing using the Hough transform. The structure connections can be automatically identified after selecting areas in the point cloud. As a result, the coordinates of the connection centre, composition (profiles, size and shape of the haunch) and direction of their profiles are extracted. A standard file is generated with the data obtained from the geometric and semantic characterisation of the connections. The 3D model of connections and metal frames, which are suitable for processing software for structural engineering applications, are generated automatically based on this file. The algorithm presented in this paper has been tested under laboratory conditions and also with several industrial portal frames, achieving promising results. Finally, 3D models were generated, and structural calculations were performed.

  18. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  19. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    Science.gov (United States)

    Litt, Jonathan S.; Liu, Yuan; Sowers, Thomas S.; Owen, A. Karl; Guo, Ten-Huei

    2014-01-01

    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  20. Calibration of the Hydrological Simulation Program Fortran (HSPF) model using automatic calibration and geographical information systems

    Science.gov (United States)

    Al-Abed, N. A.; Whiteley, H. R.

    2002-11-01

    Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran model, is a major challenge. This paper describes calibration procedures for water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km2. Calibration efforts were directed to those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged between 4 and 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m3 s-1 and the average measured discharge at Brantford was 70 m3 s-1. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.

  1. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    Science.gov (United States)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue and the events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al., 2015, the advantages of using SVM are handleability of large number of features and effectiveness in high dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. It is expected to create a flexible and easily adjustable SVM method that can be applied in different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquake, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support
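
    The IMS waveform features are not given in the abstract, so the sketch below trains an RBF-kernel SVM on hypothetical per-event features (a spectral ratio, a P/S amplitude ratio, and local hour of day) to separate earthquakes from quarry blasts; it only illustrates the classification stage, not the real feature extraction.

```python
# Sketch of the classification stage: an RBF-kernel SVM separating earthquakes
# from quarry blasts using hypothetical per-event features (the real system
# derives features from IMS waveforms, which are not reproduced here).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 300
# Hypothetical features: low/high-frequency spectral ratio, P/S amplitude ratio,
# and local hour of the event (blasts tend to cluster in working hours).
quakes = np.column_stack([rng.normal(1.0, 0.4, n), rng.normal(1.0, 0.3, n),
                          rng.uniform(0, 24, n)])
blasts = np.column_stack([rng.normal(2.0, 0.4, n), rng.normal(1.6, 0.3, n),
                          rng.normal(13, 2.5, n)])
X = np.vstack([quakes, blasts])
y = np.array([0] * n + [1] * n)          # 0 = earthquake, 1 = quarry blast

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```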

  2. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    Science.gov (United States)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.

    2014-03-01

    Pectus excavatum is the most common deformity of the thorax and usually comprises Computed Tomography (CT) examination for pre-operative diagnosis. Aiming at the elimination of the high amounts of CT radiation exposure, this work presents a new methodology for the replacement of CT by a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prosthesis. The complete elimination of CT involves the determination of ribs external outline, at the maximum sternum depression point for prosthesis placement, based on chest wall skin surface information, acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Back propagation and One Step Secant gradient learning algorithms were used. The training procedure was performed using the soft tissue thicknesses, determined using image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the ribs outline in data from 20 patient scanners. Tests revealed that ribs position can be estimated with an average error of about 6.82+/-5.7 mm for the left and right side of the patient. Such an error range is well below current prosthesis manual modeling (11.7+/-4.01 mm) even without CT imagiology, indicating a considerable step forward towards CT replacement by a 3D scanner for prosthesis personalization.

  3. CRYPTOGRAPHIC SECURE CLOUD STORAGE MODEL WITH ANONYMOUS AUTHENTICATION AND AUTOMATIC FILE RECOVERY

    Directory of Open Access Journals (Sweden)

    Sowmiya Murthy

    2014-10-01

    Full Text Available We propose a secure cloud storage model that addresses security and storage issues for cloud computing environments. Security is achieved by anonymous authentication which ensures that cloud users remain anonymous while getting duly authenticated. For achieving this goal, we propose a digital signature based authentication scheme with a decentralized architecture for distributed key management with multiple Key Distribution Centers. Homomorphic encryption scheme using Paillier public key cryptosystem is used for encrypting the data that is stored in the cloud. We incorporate a query driven approach for validating the access policies defined by an individual user for his/her data i.e. the access is granted to a requester only if his credentials matches with the hidden access policy. Further, since data is vulnerable to losses or damages due to the vagaries of the network, we propose an automatic retrieval mechanism where lost data is recovered by data replication and file replacement with string matching algorithm. We describe a prototype implementation of our proposed model.

  4. Comparisons of Maximum Likelihood Estimates and Bayesian Estimates for the Discretized Discovery Process Model

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.

  5. Generalized Empirical Likelihood Inference for Single-index Models with Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    杨随根; 薛留根

    2015-01-01

    Based on the generalized estimating equations (GEE) and quadratic inference function (QIF) methods, a bias-corrected generalized empirical likelihood is proposed for statistical inference in the single-index model with longitudinal data. The maximum empirical likelihood estimator and the bias-corrected generalized empirical log-likelihood ratio statistics for the unknown index parameter in the model are obtained. It is proved that, under certain conditions, the maximum empirical likelihood estimator is asymptotically normal and the proposed statistics are asymptotically chi-square distributed, so they can be applied to construct confidence regions for the index parameter.

  6. Automatically inferred Markov network models for classification of chromosomal band pattern structures.

    Science.gov (United States)

    Granum, E; Thomason, M G

    1990-01-01

    A structural pattern recognition approach to the analysis and classification of metaphase chromosome band patterns is presented. An operational method of representing band pattern profiles as sharp-edged idealized profiles is outlined. These profiles are nonlinearly scaled to a small, fixed number of "density" levels. Previous experience has shown that six-level profiles are appropriate and that the differences between successive bands in these profiles are suitable for classification. String representations, which focus on the sequences of transitions between local band pattern levels, are derived from such "difference profiles." A method of syntactic analysis of the band transition sequences by dynamic programming for optimal (maximal probability) string-to-network alignments is described. It performs automatic data-driven inference of band pattern models (Markov networks) per class and uses these models for classification. The method does not use centromere information, but assumes the p-q orientation of the band pattern profiles to be known a priori. It is experimentally established that the method can build Markov network models which, when used for classification, show a recognition rate of about 92% on test data. The experiments used 200 samples (chromosome profiles) for each of the 22 autosome chromosome types and were designed also to investigate various classifier design problems. It is found that the use of a priori knowledge of Denver Group assignment improved classification by only 1 or 2%. A scheme for typewise normalization of the class relationship measures proves useful, partly through improvements in average results and partly through a more evenly distributed error pattern. The choice of reference for the p-q orientation of the band patterns is found to be unimportant, and timing of the analysis shows that recent and efficient implementations can process one cell in less than 1 min on current standard hardware.

  7. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  8. Efficient Smoothing for Boundary Value Models

    Science.gov (United States)

    1989-12-29

    IEEE Transactions on Automatic Control, vol. 29, pp. 803-821, 1984. [2] A. Bagchi and H. Westdijk, "Smoothing... and likelihood ratio for Gaussian boundary value processes," IEEE Transactions on Automatic Control, vol. 34, pp. 954-962, 1989. [3] R. Nikoukhah et..., 77-96, 1988. [6] H. L. Weinert and U. B. Desai, "On complementary models and fixed-interval smoothing," IEEE Transactions on Automatic Control, ...

  9. DEVELOPMENT OF THE MODEL OF AN AUTOMATIC GENERATION OF TOTAL AMOUNTS OF COMMISSIONS IN INTERNATIONAL INTERBANK PAYMENTS

    Directory of Open Access Journals (Sweden)

    Dmitry N. Bolotov

    2013-01-01

    Full Text Available The article deals with the main form of international payment, the bank transfer, and with the correspondent fees charged by intermediary banks for funds transiting their correspondent accounts. In order to optimize the cost of international money transfers, there is a need to develop a model and toolkit for the automatic generation of the total amount of commissions in international interbank settlements. Accordingly, an approach to constructing such a model was developed based on graph theory.

  10. A Computer Model of the Evaporator for the Development of an Automatic Control System

    Science.gov (United States)

    Kozin, K. A.; Efremov, E. V.; Kabrysheva, O. P.; Grachev, M. I.

    2016-08-01

    For the implementation of a closed nuclear fuel cycle it is necessary to carry out a series of experimental studies to justify the choice of technology. In addition, the operation of a radiochemical plant is impossible without high-quality automatic control systems. In spent nuclear fuel reprocessing technologies, the method of continuous evaporation is often used for solution conditioning, so the effectiveness of the continuous technological process depends on the operation of the evaporation equipment. Its essential difference from similar devices is its small size. In this paper the method of mathematical simulation is applied to the investigation of a single-effect evaporator with an external heating chamber. Detailed modelling is quite difficult because the phase equilibrium dynamics of the evaporation process is not described and there is a coupling with the other process units. The results show that the study subject is a MIMO plant, nonlinear over separate control channels and not self-balancing. Adequacy was tested using experimental data obtained at the laboratory evaporation unit.

  11. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    Science.gov (United States)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which an upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine, and the results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller when the weights are computed with the PSO-based technique.
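
    A minimal sketch of the two-level idea follows: an outer particle swarm searches over weighting coefficients while an inner evaluation returns a closed-loop cost. The cost function here is a cheap placeholder standing in for a full NMPC wind turbine simulation, and all PSO constants are assumed values.

    # Two-level tuning sketch: outer PSO over NMPC weighting coefficients,
    # inner closed-loop evaluation replaced by a cheap placeholder cost.
    import numpy as np

    def closed_loop_cost(weights):
        # Placeholder for running the NMPC-controlled turbine simulation and
        # returning a performance index (tracking error, loads, etc.).
        target = np.array([1.0, 0.5, 2.0])
        return float(np.sum((weights - target) ** 2))

    def pso(cost, dim=3, n_particles=20, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.0, 5.0, size=(n_particles, dim))   # candidate weight vectors
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1, r2 = rng.uniform(size=(2, n_particles, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            f = np.array([cost(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest

    print("tuned weights:", pso(closed_loop_cost))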

  12. Generalized Empirical Likelihood Inference for Partially Nonlinear Models with Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    肖燕婷; 孙晓青; 孙瑾

    2016-01-01

    In this paper, we study the construction of confidence regions for the unknown parameter in partially nonlinear models with longitudinal data. Using the empirical likelihood method, a generalized empirical log-likelihood ratio for the parameter in the nonlinear function is proposed and shown to be asymptotically chi-square distributed. At the same time, the maximum empirical likelihood estimator of the parameter in the nonlinear function is obtained and its asymptotic normality is proved.

  13. Empirical Likelihood for Linear Models with Covariate Data Missing at Random

    Institute of Scientific and Technical Information of China (English)

    杨宜平

    2011-01-01

    Linear models with covariate data missing at random are considered, and a weighted empirical likelihood and an imputed empirical likelihood are proposed. The proposed empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the regression coefficients are then constructed. The finite-sample behavior of the proposed methods is evaluated in a simulation study, and a real example is analyzed.

  14. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated in the sense of not being ultimately periodic, and they may also look complicated in the sense that it may not be easy to name the rule by which the sequence is generated; nevertheless such a rule always exists. Automatic sequences have applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  15. Modified likelihood ratio test for homogeneity in bivariate normal mixtures with presence of a structural parameter

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper investigates the asymptotic properties of the modified likelihood ratio statistic for testing homogeneity in bivariate normal mixture models with an unknown structural parameter. It is shown that the modified likelihood ratio statistic has a chi-square null limiting distribution with two degrees of freedom.

  16. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Science.gov (United States)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.
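
    A hedged sketch of HMM-based detection in the spirit of the record above is given below, using the hmmlearn package: one Gaussian HMM is fitted to event sequences and one to noise, and a new sequence is assigned to whichever model scores the higher log-likelihood. The feature extraction and training data are synthetic placeholders, not the authors' seismic processing chain.

    # Hedged sketch: two-class HMM detector (event vs. noise) with hmmlearn;
    # all features and training data below are synthetic stand-ins.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)

    def fake_features(n_seq, length, shift):
        # stand-in for spectral features of seismic waveform windows
        seqs = [rng.normal(loc=shift, size=(length, 6)) for _ in range(n_seq)]
        return np.vstack(seqs), [length] * n_seq

    X_event, len_event = fake_features(20, 40, shift=1.0)
    X_noise, len_noise = fake_features(20, 40, shift=0.0)

    hmm_event = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50).fit(X_event, len_event)
    hmm_noise = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50).fit(X_noise, len_noise)

    test, _ = fake_features(1, 40, shift=1.0)
    is_event = hmm_event.score(test) > hmm_noise.score(test)   # compare log-likelihoods
    print("classified as rockslide:", bool(is_event))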

  17. Likelihood methods and classical burster repetition

    CERN Document Server

    Graziani, C; Graziani, Carlo; Lamb, Donald Q

    1995-01-01

    We develop a likelihood methodology which can be used to search for evidence of burst repetition in the BATSE catalog, and to study the properties of the repetition signal. We use a simplified model of burst repetition in which a number N_r of sources which repeat a fixed number of times N_rep are superposed upon a number N_nr of non-repeating sources. The instrument exposure is explicitly taken into account. By computing the likelihood for the data, we construct a probability distribution in parameter space that may be used to infer the probability that a repetition signal is present, and to estimate the values of the repetition parameters. The likelihood function contains contributions from all the bursts, irrespective of the size of their positional errors --- the more uncertain a burst's position is, the less constraining is its contribution. Thus this approach makes maximal use of the data, and avoids the ambiguities of sample selection associated with data cuts on error circle size. We...

  18. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.

  19. CMB Power Spectrum Likelihood with ILC

    CERN Document Server

    Dick, Jason; Delabrouille, Jacques

    2012-01-01

    We extend the ILC method in harmonic space to include the error in its CMB estimate. This allows parameter estimation routines to take into account the effect of the foregrounds as well as the errors in their subtraction in conjunction with the ILC method. Our method requires the use of a model of the foregrounds which we do not develop here. The reduction of the foreground level makes this method less sensitive to unaccounted for errors in the foreground model. Simulations are used to validate the calculations and approximations used in generating this likelihood function.

  20. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    Science.gov (United States)

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra

    2016-09-01

    In this paper we propose an automatic volcano event detection system based on a Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learned from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study, Long Period (LP) and Volcano-Tectonic (VT). Experiments show that standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can reduce the false positive rate in event detection by as much as 31% with a true positive accuracy of 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events, based on the signal-to-noise ratio criteria recommended by a volcano expert.

  1. Parametric likelihood inference for interval censored competing risks data.

    Science.gov (United States)

    Hudgens, Michael G; Li, Chenxi; Fine, Jason P

    2014-03-01

    Parametric estimation of the cumulative incidence function (CIF) is considered for competing risks data subject to interval censoring. Existing parametric models of the CIF for right censored competing risks data are adapted to the general case of interval censoring. Maximum likelihood estimators for the CIF are considered under the assumed models, extending earlier work on nonparametric estimation. A simple naive likelihood estimator is also considered that utilizes only part of the observed data. The naive estimator enables separate estimation of models for each cause, unlike full maximum likelihood in which all models are fit simultaneously. The naive likelihood is shown to be valid under mixed case interval censoring, but not under an independent inspection process model, in contrast with full maximum likelihood which is valid under both interval censoring models. In simulations, the naive estimator is shown to perform well and yield comparable efficiency to the full likelihood estimator in some settings. The methods are applied to data from a large, recent randomized clinical trial for the prevention of mother-to-child transmission of HIV.

  2. Comparison of function approximation, heuristic, and derivative-based methods for automatic calibration of computationally expensive groundwater bioremediation models

    Science.gov (United States)

    Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.

    2005-11-01

    The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes on a hypothetical and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and smaller deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective and among all but the derivative-based algorithm for a total squared error objective. The best algorithms in the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials the FA methods outperformed heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods could be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.

  3. MATHEMATICAL MODEL OF AUTOMATIC FLIGHT OF POLIKOPTER UAV NAU PKF "AURORA"

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2016-12-01

    Full Text Available Purpose: Development of mathematical and experimental models of the polikopter UAV NAU PKF "Aurora" (oktakopter scheme) for experimental flights in manual, semi-automatic and unmanned modes. Methods: From 14/03/2016 to 21/03/2016 a series of experimental flights (10 flights with 10 rats) was carried out at an altitude of 700 meters on the polikopter (oktakopter) NAU PKF "Aurora" in a hermetic cabin, with study of the animals' somatic and neurological status after the flight. Flights were carried out with experimental animals on board for safety assessment. Results: The logs obtained from the autopilot's 'black box' indicate very small (almost invisible) fluctuations in pitch, roll and yaw during the flight, minor variations in altitude during almost stationary hovering of the polikopter at different altitudes, and vibrations and sensor parameters fully adequate to the movements and maneuvers of the aircraft. Discussion: These studies demonstrated experimentally the possibility of completely safe flight of mammals (rats) on a polikopter vehicle, even in an open cockpit. With appropriate refinement it may be possible in the future to develop and construct passenger polikopter flyers for totally safe air transportation of people [6,7,8]. In terms of adverse mechanical effects on the human body (acceleration, overload fluctuations, vibrations), polikopter transport is safer and less harmful to passengers than road transport, which is particularly important for the delivery of neurosurgical, polytraumatological, cardiological and critical care patients in critical condition to intensive care units and operating rooms of hospitals and medical centers.

  4. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Melchior, M [Terapia Radiante S.A., La Plata, Buenos Aires (Argentina); Salinas Aranda, F [Vidt Centro Medico, Ciudad Autonoma De Buenos Aires (Argentina); 21st Century Oncology, Ft. Myers, FL (United States); Sciutto, S [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Dodat, D [Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina); Larragueta, N [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina)

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MatLab integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate dose distribution fitting: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS-calculated depth dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at the points proposed in Appendix E of TECDOC-1583 confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use
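
    For orientation, the sketch below evaluates a 3%/3 mm gamma criterion on a one-dimensional toy depth-dose curve; clinical implementations work on 2-D/3-D dose grids with careful interpolation and normalization choices that are omitted here, and the curves are invented.

    # Hedged 1-D sketch of the gamma index with 3%/3 mm criteria; clinical tools
    # operate on full dose grids and handle normalization/interpolation more carefully.
    import numpy as np

    def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol=3.0):
        d_max = d_ref.max()
        gammas = []
        for xr, dr in zip(x_ref, d_ref):
            dist2 = ((x_eval - xr) / dist_tol) ** 2          # distance-to-agreement term
            dose2 = ((d_eval - dr) / (dose_tol * d_max)) ** 2  # dose-difference term
            gammas.append(np.sqrt(dist2 + dose2).min())
        return np.array(gammas)

    x = np.linspace(0, 100, 201)              # depth in mm
    measured = np.exp(-x / 120.0)             # toy measured depth-dose curve
    calculated = np.exp(-x / 118.0)           # slightly different toy TPS curve
    g = gamma_1d(x, measured, x, calculated)
    print("gamma pass rate (gamma < 1): %.1f%%" % (100.0 * np.mean(g < 1)))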

  5. Empirical Likelihood Inference of Parameters in a Censored Nonlinear Semiparametric Regression Model

    Institute of Scientific and Technical Information of China (English)

    侯文; 宋立新; 黄玉洁

    2012-01-01

    In this paper, a censored nonlinear semiparametric regression model is investigated. Empirical log-likelihood ratio statistics and adjusted empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is shown that the proposed statistics are asymptotically chi-squared distributed under some mild conditions, and hence can be used to construct confidence regions for the unknown parameters. In addition, the least squares estimator of the unknown parameter is constructed and its asymptotic behavior is proved. A simulation study shows that the empirical likelihood methods perform better than the least-squares method in terms of confidence regions and their coverage probabilities.

  6. Risk assessment models in genetics clinic for array comparative genomic hybridization: Clinical information can be used to predict the likelihood of an abnormal result in patients.

    Science.gov (United States)

    Marano, Rachel M; Mercurio, Laura; Kanter, Rebecca; Doyle, Richard; Abuelo, Dianne; Morrow, Eric M; Shur, Natasha

    2013-03-01

    Array comparative genomic hybridization (aCGH) testing can diagnose chromosomal microdeletions and duplications too small to be detected by conventional cytogenetic techniques. We need to consider which patients are more likely to receive a diagnosis from aCGH testing, versus patients who have a lower likelihood and may benefit from broader genome-wide scanning. We retrospectively reviewed the charts of 200 patients, 117 boys and 83 girls, who underwent aCGH testing in the Genetics Clinic at Rhode Island Hospital between 1 January 2008 and 31 December 2010. Data collected included sex, age at initial clinical presentation, aCGH result, history of seizures, autism, dysmorphic features, global developmental delay/intellectual disability, hypotonia and failure to thrive. aCGH analysis revealed abnormal results in 34 (17%) and variants of unknown significance in 24 (12%). Patients with three or more clinical diagnoses had a 25.0% incidence of abnormal aCGH findings, while patients with two or fewer clinical diagnoses had a 12.5% incidence of abnormal aCGH findings. Currently, we provide families with a 10-30% likelihood of obtaining a diagnosis with aCGH testing. With increased clinical complexity, patients have an increased probability of having an abnormal aCGH result, and we can therefore provide individualized risk estimates for each patient.

  7. Maximum Likelihood Identification of Nonlinear Model for High-speed Train

    Institute of Scientific and Technical Information of China (English)

    衷路生; 李兵; 龚锦红; 张永贤; 祝振敏

    2014-01-01

    A maximum likelihood (ML) identification method for a nonlinear model of high-speed trains is proposed, suitable for parameter estimation of the nonlinear model under non-Gaussian noise. First, a stochastic discrete nonlinear state-space model describing the single-point-mass dynamics of a high-speed train is constructed, and the maximum likelihood estimation problem for the train parameters is transformed into an expectation-maximization (EM) optimization problem. Then, a particle filter and particle smoother for train state estimation are designed; from these the conditional expectation for the train is constructed, a gradient search method for maximizing this expectation is given, and the parameter identification algorithm is obtained, together with an analysis of its convergence rate. Finally, numerical comparison experiments on the estimation of the resistance coefficients of a high-speed train are carried out. The results demonstrate the effectiveness of the proposed identification method.

  8. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-01-01

    Full Text Available This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation with the automatic GrabCut technique. GrabCut is considered one of the semiautomatic image segmentation techniques, since it requires user interaction to initialize the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation quality and accuracy. As no single color space is recommended for every segmentation problem, automatic GrabCut is applied with the RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB is the best color space representation for the set of images used.
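
    A hedged sketch of running GrabCut over several color-space representations with OpenCV follows; the Orchard-Bouman-based automatic initialization of the paper is replaced here by a simple border rectangle, and the input path "image.png" is a placeholder.

    # Hedged sketch: GrabCut on different color-space representations with OpenCV.
    import cv2
    import numpy as np

    img_bgr = cv2.imread("image.png")                      # placeholder input image
    spaces = {
        "RGB": cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB),
        "HSV": cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV),
        "YUV": cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV),
    }

    h, w = img_bgr.shape[:2]
    rect = (10, 10, w - 20, h - 20)                        # crude stand-in for the automatic initialization

    for name, img in spaces.items():
        mask = np.zeros((h, w), np.uint8)
        bgd_model = np.zeros((1, 65), np.float64)
        fgd_model = np.zeros((1, 65), np.float64)
        cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
        fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))    # definite + probable foreground
        print(name, "foreground fraction:", fg.mean())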

  9. Performance Modelling of Automatic Identification System with Extended Field of View

    DEFF Research Database (Denmark)

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard

    2010-01-01

    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV...

  10. 76 FR 8917 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed

    Science.gov (United States)

    2011-02-16

    ...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... design features include a high speed protection system. These proposed special conditions contain the... Design Features The GVI is equipped with a high speed protection system that limits nose down...

  11. 76 FR 31454 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed

    Science.gov (United States)

    2011-06-01

    ... for Gulfstream GVI airplanes was published in the Federal Register on February 16, 2011 (76 FR 8917...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... high speed protection system. These special conditions contain the additional safety standards that...

  12. Fusing moving average model and stationary wavelet decomposition for automatic incident detection: case study of Tokyo Expressway

    Directory of Open Access Journals (Sweden)

    Qinghua Liu

    2014-12-01

    Full Text Available Traffic congestion is a growing problem in urban areas all over the world, and the transport sector has been actively studying intelligent transportation systems for automatic detection. The functionality of automatic incident detection on expressways is a primary objective of advanced traffic management systems. In order to save lives and prevent secondary incidents, accurate and prompt incident detection is necessary. This paper presents a methodology that integrates a moving average (MA) model with stationary wavelet decomposition for automatic incident detection, in which layer coefficients are extracted from the difference between upstream and downstream occupancy. Unlike other wavelet-based methods presented before, it first smooths the raw data with the MA model and then decomposes it with the stationary wavelet transform, which achieves accurate reconstruction of the signal and does not shift the signal transform coefficients, so incidents can be detected more accurately. The threshold for triggering an incident alarm is also adjusted according to normal traffic conditions with congestion. The methodology is validated with real data from Tokyo Expressway ultrasonic sensors. Experimental results show that it is accurate and effective, and that it can differentiate traffic accidents from other conditions such as recurring traffic congestion.
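
    The sketch below illustrates the signal chain described above: moving-average smoothing of an occupancy-difference series followed by a stationary wavelet decomposition with PyWavelets. The synthetic data, wavelet choice and alarm threshold are assumptions, not the values used on the Tokyo Expressway data.

    # Hedged sketch: MA smoothing of an upstream/downstream occupancy difference,
    # then an undecimated (stationary) wavelet decomposition via PyWavelets.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    n = 256
    occ_diff = rng.normal(0.0, 0.5, n)
    occ_diff[180:] += 4.0                          # synthetic "incident": occupancy difference jumps

    window = 5                                     # moving-average smoothing of the raw series
    smoothed = np.convolve(occ_diff, np.ones(window) / window, mode="same")

    coeffs = pywt.swt(smoothed, "db4", level=3)    # list of (approx, detail) pairs
    detail_1 = coeffs[-1][1]                       # finest-scale detail coefficients

    threshold = 6.0 * np.median(np.abs(detail_1))  # crude adaptive alarm threshold
    alarm = np.abs(detail_1) > threshold
    print("incident alarm indices:", np.nonzero(alarm)[0][:5])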

  13. Estimating dynamic equilibrium economies: linear versus nonlinear likelihood

    OpenAIRE

    2004-01-01

    This paper compares two methods for undertaking likelihood-based inference in dynamic equilibrium economies: a sequential Monte Carlo filter proposed by Fernández-Villaverde and Rubio-Ramírez (2004) and the Kalman filter. The sequential Monte Carlo filter exploits the nonlinear structure of the economy and evaluates the likelihood function of the model by simulation methods. The Kalman filter estimates a linearization of the economy around the steady state. The authors report two main results...
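
    As a rough illustration of the Kalman-filter side of this comparison, the sketch below computes the exact Gaussian log-likelihood of a linear state-space model; the system matrices and observed series are toy values rather than a linearized dynamic equilibrium economy.

    # Hedged sketch: log-likelihood of a linear-Gaussian state-space model via the
    # Kalman filter; matrices and data are illustrative placeholders.
    import numpy as np

    def kalman_loglik(y, A, C, Q, R):
        n = A.shape[0]
        x = np.zeros(n)                      # state mean
        P = np.eye(n)                        # state covariance
        loglik = 0.0
        for obs in y:
            x = A @ x                        # predict
            P = A @ P @ A.T + Q
            v = obs - C @ x                  # innovation
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            loglik += -0.5 * (np.log(np.linalg.det(2 * np.pi * S)) + v @ np.linalg.solve(S, v))
            x = x + K @ v                    # update
            P = (np.eye(n) - K @ C) @ P
        return loglik

    A = np.array([[0.9, 0.1], [0.0, 0.8]])   # toy transition matrix
    C = np.array([[1.0, 0.0]])               # toy observation matrix
    Q = 0.01 * np.eye(2)
    R = 0.05 * np.eye(1)

    rng = np.random.default_rng(0)
    y = rng.normal(size=(100, 1))            # placeholder observed series
    print("log-likelihood:", kalman_loglik(y, A, C, Q, R))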

  14. Sieve likelihood ratio inference on general parameter space

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiaotong; SHI Jian

    2005-01-01

    In this paper, a theory of sieve likelihood ratio inference on general parameter spaces (including infinite-dimensional ones) is studied. Under fairly general regularity conditions, the sieve log-likelihood ratio statistic is proved to be asymptotically chi-square distributed, which can be viewed as a generalization of the well-known Wilks' theorem. As an example, a semiparametric partial linear model is investigated.

  15. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  16. Chinese Word Segmentation Cognitive Model Based on Maximum Likelihood Optimization EM Algorithm

    Institute of Scientific and Technical Information of China (English)

    赵越; 李红

    2016-01-01

    In view of the poor convergence and inaccurate word segmentation of the standard EM algorithm when applied to Chinese word segmentation, this paper proposes a cognitive model based on an EM algorithm optimized with the maximum likelihood estimation rule. First, the probability of the current word is used to compute the likelihood of each possible segmentation; these likelihoods are normalized and word counts are accumulated for each segmentation. Since the standard EM algorithm only guarantees convergence to a stationary point of the likelihood function, not to a global or even a local maximum, the maximum likelihood estimation rule is adopted to optimize it, so that effective methods from nonlinear optimization can be used to accelerate convergence. Simulation experiments show that the optimized EM algorithm has better convergence performance in the Chinese word segmentation cognitive model and is more accurate in word segmentation.

  17. Phylogenetic estimation with partial likelihood tensors

    CERN Document Server

    Sumner, J G

    2008-01-01

    We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.

  18. Workshop on Likelihoods for the LHC Searches

    CERN Document Server

    2013-01-01

    The primary goal of this 3‐day workshop is to educate the LHC community about the scientific utility of likelihoods. We shall do so by describing and discussing several real‐world examples of the use of likelihoods, including a one‐day in‐depth examination of likelihoods in the Higgs boson studies by ATLAS and CMS.

  19. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    Science.gov (United States)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important as these estimates are used today for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs in each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer suitable given the increasing number of testimonies at BCSF; nevertheless, it can take into account incoherent answers. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave medium scores (between 50 and 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because BCSF already has more than 47000 forms in its database and because their questions and answers are well suited to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to

  20. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    Science.gov (United States)

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI.

  1. LIKEDM: Likelihood calculator of dark matter detection

    Science.gov (United States)

    Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang

    2017-04-01

    With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

  2. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    Science.gov (United States)

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma but detection at its earliest stage and subsequent treatment can aid patients to prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection but this method requires manual post-imaging modifications that are time-consuming and subjective to image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer aided approach for automatic glaucoma detection based on Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric based properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions including automatic localisation of optic disc, automatic segmentation of disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4% and accuracy of detection of suspicion of glaucoma in SLO images is 93.9 %.

  3. MATHEMATICAL AND COMPUTER MODELING OF AUTOMATIC CONTROL SYSTEM FOR HYDROSTATIC BEARING

    Directory of Open Access Journals (Sweden)

    N. A. Pelevin

    2016-09-01

    Full Text Available The paper presents simulation results for the dynamics of a hydrostatic bearing in the spindle assembly of a standard flexible production module with a throttled circuit. The need to increase the dynamic quality of the automatic control system of the hydrostatic bearing by means of correcting elements in the form of RC-chains is shown. The features of the choice of correction parameters arising from the cross-couplings in the structure of the automatic control system are noted. We propose a block diagram of the automatic control system of the hydrostatic bearing in the Simulink working field and a cyclic algorithm, implemented in MATLAB, for determining the RC-chain parameters, taking into account thermal processes typical of finishing treatment. A graphic-analytical method for choosing the correction parameters is presented, based on the gradient of the phase stability margin, for determining the dynamic quality of the automatic control system. The applicability of the method when a standard metal bellows valve is used as the hydraulic capacity for the RC-chain is also investigated, and recommendations for the choice of the bellows valve are formulated. The dynamic quality indicators of the transition processes calculated by means of the corresponding programs developed for MATLAB are checked. Examples are given of phase stability margin gradient plots partitioning various areas of hydrostatic bearing dynamic quality for different spindle rotation frequencies, together with a description of the use of the data cursor function on the MATLAB toolbar. Improvement of the hydrostatic bearing dynamics under the typical low loads of finishing treatment is noted, as well as a decrease of the dynamic indicators under the high loads of roughing treatment.

  4. Parameter likelihood of intrinsic ellipticity correlations

    CERN Document Server

    Capranico, Federica; Schaefer, Bjoern Malte

    2012-01-01

    Subject of this paper are the statistical properties of ellipticity alignments between galaxies evoked by their coupled angular momenta. Starting from physical angular momentum models, we bridge the gap towards ellipticity correlations, ellipticity spectra and derived quantities such as aperture moments, comparing the intrinsic signals with those generated by gravitational lensing, with the projected galaxy sample of EUCLID in mind. We investigate the dependence of intrinsic ellipticity correlations on cosmological parameters and show that intrinsic ellipticity correlations give rise to non-Gaussian likelihoods as a result of nonlinear functional dependencies. Comparing intrinsic ellipticity spectra to weak lensing spectra we quantify the magnitude of their contaminating effect on the estimation of cosmological parameters and find that biases on dark energy parameters are very small in an angular-momentum based model in contrast to the linear alignment model commonly used. Finally, we quantify whether intrins...

  5. An automatic modeling system of the reaction mechanisms for chemical vapor deposition processes using real-coded genetic algorithms.

    Science.gov (United States)

    Takahashi, Takahiro; Nakai, Hiroyuki; Kinpara, Hiroki; Ema, Yoshinori

    2011-09-01

    The identification of appropriate reaction models is very helpful for developing chemical vapor deposition (CVD) processes. In this study, we have developed an automatic system to model reaction mechanisms in CVD processes by analyzing experimental results, namely the cross-sectional shapes of films deposited on substrates with micrometer- or nanometer-sized trenches. We designed the inference engine that models the reaction mechanism using real-coded genetic algorithms (RCGAs). We studied the dependence of system performance on two methods: one using simple genetic algorithms (SGAs) with the conventional GA operators and the other using RCGAs with the blend crossover operator (BLX-alpha). Although we demonstrated that systems using both methods could successfully model the reaction mechanisms, the RCGAs showed better performance with respect to accuracy and the computational cost of identifying the models.
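
    The blend crossover that distinguishes the RCGA variant above can be sketched in a few lines; the parent vectors and the value of alpha below are illustrative, not taken from the paper.

    # Hedged sketch of the BLX-alpha blend crossover used in real-coded GAs:
    # each child gene is drawn uniformly from an interval that extends the
    # parents' range by a factor alpha on both sides.
    import numpy as np

    def blx_alpha(parent1, parent2, alpha=0.5, rng=None):
        rng = rng or np.random.default_rng()
        lo = np.minimum(parent1, parent2)
        hi = np.maximum(parent1, parent2)
        span = hi - lo
        return rng.uniform(lo - alpha * span, hi + alpha * span)

    p1 = np.array([0.2, 1.5, 3.0])   # e.g. candidate rate constants in a reaction model (hypothetical)
    p2 = np.array([0.4, 1.0, 2.0])
    child = blx_alpha(p1, p2, alpha=0.5, rng=np.random.default_rng(0))
    print("offspring parameters:", child)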

  6. Convergence rate for maximum likelihood estimation of parameters in the exponential polynomial model

    Institute of Scientific and Technical Information of China (English)

    房祥忠; 陈家鼎

    2011-01-01

    Nonhomogeneous Poisson processes with time-varying intensity functions are applied in many fields. The best convergence rate for the maximum likelihood estimate (MLE) of the exponential polynomial model, a very widely used class of nonhomogeneous Poisson processes, is obtained as the observation time tends to infinity.

  7. Systematic Identification of Two-compartment Model based on the Maximum Likelihood Method

    Institute of Scientific and Technical Information of China (English)

    张应云; 张榆锋; 王勇; 李敬敬; 施心陵

    2014-01-01

    An approach based on the maximum likelihood method is presented in this paper to identify the parameters of the two-compartment model. To verify the performance of this method, the parameter estimates of the two-compartment model obtained from it, and their absolute errors, were compared with those obtained from a method based on the recursive augmented least-squares algorithm. The accuracy and feasibility of the identified parameters of the nonlinear two-compartment model obtained by the maximum likelihood method are clearly better than those from the recursive augmented least-squares method, so these parameter estimates, with their smaller deviations, can be used in related clinical trials to improve the practicality of the nonlinear two-compartment model.

  8. Employee Likelihood of Purchasing Health Insurance using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2012-01-01

    Full Text Available Many believe that employees' health and economic factors play an important role in their likelihood of purchasing health insurance. However, the decision to purchase health insurance is not a trivial matter, as many risk factors influence it. This paper presents a decision model using a fuzzy inference system to identify the likelihood of purchasing health insurance based on selected risk factors. To build the likelihoods, data from one hundred and twenty-eight employees at five organizations under the purview of the Kota Star Municipality, Malaysia, were collected to provide input data. Three risk factors were considered as inputs to the system: age, salary and risk of having an illness. The likelihood of purchasing health insurance was the output of the system, defined in three linguistic terms: Low, Medium and High. Input and output data were governed by the Mamdani inference rules of the system to decide the best linguistic term. The linguistic terms that describe the likelihood of purchasing health insurance were identified by the system based on the three risk factors. It was found that twenty-seven employees were likely to purchase health insurance at a Low level and fifty-six employees showed their likelihood at a High level. The use of a fuzzy inference system offers possible justification for a new approach to identifying prospective health insurance purchasers.
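
    A toy Mamdani-style inference step in the spirit of the record above is sketched below; the membership functions, rule base and input scales are invented for illustration and are not the system used in the study.

    # Hedged sketch: tiny Mamdani-style fuzzy inference for a purchase-likelihood score.
    import numpy as np

    def tri(x, a, b, c):
        # triangular membership function with feet a, c and peak b
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def infer(age, salary, risk):
        # fuzzify the three inputs (scales are arbitrary placeholders)
        age_old = tri(age, 40, 60, 80)
        salary_high = tri(salary, 3000, 6000, 9000)
        risk_high = tri(risk, 0.4, 0.7, 1.0)

        # two example Mamdani rules (min for AND, max for aggregation)
        high_likelihood = max(min(age_old, risk_high), min(salary_high, risk_high))
        low_likelihood = 1.0 - risk_high

        # crude centroid-style defuzzification over {Low: 0.2, High: 0.8}
        return (0.2 * low_likelihood + 0.8 * high_likelihood) / (low_likelihood + high_likelihood + 1e-9)

    print("likelihood score:", infer(age=55, salary=7000, risk=0.8))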

  9. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We...... show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations....

  10. Seizure detection in adult ICU patients based on changes in EEG synchronization likelihood

    NARCIS (Netherlands)

    Slooter, A. J. C.; Vriens, E. M.; Spijkstra, J. J.; Girbes, A. R. J.; van Huffelen, A. C.; Stam, C. J.

    2006-01-01

    Introduction: Seizures are common in Intensive Care Unit (ICU) patients, and may increase neuronal injury. Purpose: To explore the possible value of synchronization likelihood (SL) for the automatic detection of seizures in adult ICU patients. Methods: We included EEGs from ICU patients with a varie

  11. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    Science.gov (United States)

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng

    2013-01-01

    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly but uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed.

  12. Automatic parametrization of non-polar implicit solvent models for the blind prediction of solvation free energies

    Science.gov (United States)

    Wang, Bao; Zhao, Zhixiong; Wei, Guo-Wei

    2016-09-01

    In this work, a systematic protocol is proposed to automatically parametrize the non-polar part of implicit solvent models with polar and non-polar components. The proposed protocol utilizes either the classical Poisson model or the Kohn-Sham density functional theory based polarizable Poisson model for modeling polar solvation free energies. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the polar component. For the non-polar component, either the standard model of surface area, molecular volume, and van der Waals interactions or a model with atomic surface areas and molecular volume is employed. To automatically parametrize a non-polar model, we develop scoring and ranking algorithms to classify solute molecules; their non-polar parametrization is then obtained based on the assumption that similar molecules have similar parametrizations. A large database with 668 experimental data points is collected and employed to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five subsets of the database, i.e., SAMPL0-SAMPL4, are employed to further demonstrate the proposed protocol. The optimal RMS errors are 0.93, 2.82, 1.90, 0.78, and 1.03 kcal/mol, respectively, for the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 test sets. The corresponding RMS errors for the polarizable Poisson model with the Amber Bondi radii are 0.93, 2.89, 1.90, 1.16, and 1.07 kcal/mol, respectively.

  13. The Failure Distribution Model of CNC Systems Based on Maximum Likelihood Estimation

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    The Weibull distribution is widely applied in reliability engineering and lifespan data analysis. For the two-parameter Weibull distribution, a parameter estimation model based on maximum likelihood estimation is established, and the second-order convergent Newton-Raphson iteration method is used to solve for the scale and shape parameters. In the iteration process, the region around the zero point of the likelihood function curve, plotted with Matlab, is preliminarily selected as the range of the initial value, and the sufficient conditions for the convergence of the Newton-Raphson method are used to further narrow the range of the iterative initial value. A three-dimensional plot of the iteration trend, drawn with Matlab, is consistent with the results of the iterative calculation. Finally, by comparison, the parameter estimation model and the Newton-Raphson iterative solution method are shown to be accurate and efficient.
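
    For illustration, a minimal sketch of the Newton-Raphson iteration for the two-parameter Weibull maximum likelihood estimate is given below. It solves the profile score equation for the shape parameter and recovers the scale parameter in closed form; the likelihood-curve-based selection of the initial value described in the abstract is reduced here to a simple starting guess.

```python
import numpy as np

def weibull_mle(x, k0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson on the profile likelihood equation for the shape
    parameter k of a two-parameter Weibull; the scale parameter then follows
    in closed form.  A sketch only: k0 stands in for the graphical
    initial-value selection described in the abstract."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    k = k0
    for _ in range(max_iter):
        xk = x ** k
        A, B = np.sum(xk * logx), np.sum(xk)
        g = A / B - 1.0 / k - logx.mean()               # profile score
        gp = (np.sum(xk * logx**2) * B - A**2) / B**2 + 1.0 / k**2
        step = g / gp
        k -= step
        if abs(step) < tol:
            break
    scale = np.mean(x ** k) ** (1.0 / k)
    return k, scale

rng = np.random.default_rng(1)
sample = rng.weibull(2.0, size=500) * 3.0   # true shape 2, true scale 3
print(weibull_mle(sample))
```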

  14. Automatic video summarization driven by a spatio-temporal attention model

    Science.gov (United States)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified into two categories, according to the nature of the output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to images, selected from the original video, that have significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization, histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
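
    As a rough illustration of the final step, the sketch below assumes per-frame saliency maps for the three channels are already available as arrays, combines their frame-to-frame variations, and flags frames where the global variation is unusually large; the weights and threshold are placeholders, not the authors' procedure.

```python
import numpy as np

def keyframes_from_saliency(saliency, weights=(1.0, 1.0, 1.0), z_thresh=1.5):
    """saliency: array of shape (n_frames, n_channels, H, W) holding
    intensity, color and temporal-contrast saliency maps per frame.
    Returns indices of frames where the combined frame-to-frame saliency
    variation is unusually large (a simplified key-frame rule)."""
    # per-channel mean absolute variation between consecutive frames
    diff = np.abs(np.diff(saliency, axis=0)).mean(axis=(2, 3))   # (n-1, C)
    global_var = diff @ np.asarray(weights)                      # combine channels
    z = (global_var - global_var.mean()) / (global_var.std() + 1e-12)
    return np.where(z > z_thresh)[0] + 1    # index of the frame after the jump

# toy example: 50 frames, 3 channels, 16x16 saliency maps with a scene change
rng = np.random.default_rng(2)
maps = rng.random((50, 3, 16, 16)) * 0.1
maps[25:] += 0.8                            # abrupt change at frame 25
print(keyframes_from_saliency(maps))        # expected: [25]
```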

  15. Confidence Regions in Quasi-Likelihood Nonlinear Models: A Geometric Approach

    Institute of Scientific and Technical Information of China (English)

    唐年胜; 王学仁

    2000-01-01

    A modification of the geometric framework of Bates & Watts is proposed for quasi-likelihood nonlinear models in Euclidean inner product space. Within this geometric framework, this paper provides three kinds of improved approximate confidence regions for the parameters and parameter subsets in terms of curvatures. The work extends the previous results of Hamilton et al. (1982), Hamilton (1986) and Wei (1994, 1998).

  16. Consistency and Asymptotic Normality of the Maximum Likelihood Estimator in Exponential Family Nonlinear Models

    Institute of Scientific and Technical Information of China (English)

    夏天; 孔繁超

    2008-01-01

    This paper proposes regularity conditions that weaken those given by Zhu & Wei (1997). On the basis of the proposed regularity conditions, the existence, strong consistency and asymptotic normality of the maximum likelihood estimator (MLE) are proved for exponential family nonlinear models (EFNMs). Our results may be regarded as a further improvement of the work of Zhu & Wei (1997).

  17. Conceptual Model for Automatic Early Warning Information System of Infectious Diseases Based on Internet Reporting Surveillance System

    Institute of Scientific and Technical Information of China (English)

    JIA-QI MA; LI-PING WANG; XUAO-PENG QI; XIAO-MING SHI; GONG-HUAN YANG

    2007-01-01

    Objective: To establish a conceptual model of automatic early warning of infectious diseases based on an internet reporting surveillance system, with a view to realizing an automated warning system on a daily basis and timely identifying potential outbreaks of infectious diseases. Methods: The statistical conceptual model was established using historical surveillance data with a movable percentile method. Results: Based on the infectious disease surveillance information platform, the conceptual model for early warning was established. The parameters, thresholds, and revised sensitivity and specificity of the early warning values were adjusted to realize dynamic alerts of infectious diseases on a daily basis. Conclusion: The instructive conceptual model of dynamic alert can be used as a validating tool in institutions of infectious disease surveillance in different districts.
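
    A minimal sketch of the movable percentile idea is shown below: the current count is compared with a chosen percentile of historical counts from comparable periods. The percentile level and the operational threshold revision are hypothetical placeholders.

```python
import numpy as np

def percentile_alert(history, current_count, pct=80.0):
    """Flag a possible outbreak when today's case count exceeds a chosen
    percentile of historical counts from comparable periods.  A sketch of
    the movable percentile idea; the operational parameters, thresholds and
    sensitivity/specificity tuning are not reproduced here."""
    threshold = np.percentile(history, pct)
    return current_count > threshold, threshold

# toy example: counts for the same calendar week over previous years
history = [3, 5, 2, 4, 6, 3, 5, 4, 7, 2]
print(percentile_alert(history, current_count=9))   # (True, threshold)
```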

  18. Automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners and RFID sensors.

    Science.gov (United States)

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos

    2012-01-01

    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach in a field in which few publications exist. The general strategy consists of carrying out a selective and sequential segmentation of the point cloud by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results.

  19. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  20. Profile likelihood maps of a 15-dimensional MSSM

    NARCIS (Netherlands)

    Strege, C.; Bertone, G.; Besjes, G.J.; Caron, S.; Ruiz de Austri, R.; Strubig, A.; Trotta, R.

    2014-01-01

    We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter

  1. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  2. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
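
    A minimal sketch of such a test is given below, assuming the Laplace location is estimated by the group median and the scale by the mean absolute deviation from it; the resulting likelihood ratio is referred to a chi-squared distribution with G-1 degrees of freedom. This illustrates the idea and is not necessarily the exact statistic of the cited paper.

```python
import numpy as np
from scipy import stats

def laplace_lr_heteroscedasticity(groups):
    """Likelihood ratio test for equal Laplace scale across groups.
    Under a Laplace model the MLE of the location is the group median and
    the MLE of the scale is the mean absolute deviation from it."""
    abs_dev = [np.abs(np.asarray(g) - np.median(g)) for g in groups]
    n = np.array([len(a) for a in abs_dev])
    b_g = np.array([a.mean() for a in abs_dev])          # per-group scale MLE
    b_0 = np.concatenate(abs_dev).mean()                 # pooled scale MLE
    lr = 2.0 * (n.sum() * np.log(b_0) - np.sum(n * np.log(b_g)))
    df = len(groups) - 1
    return lr, stats.chi2.sf(lr, df)

rng = np.random.default_rng(3)
g1 = rng.laplace(0.0, 1.0, 200)
g2 = rng.laplace(0.0, 3.0, 200)                          # different scale
print(laplace_lr_heteroscedasticity([g1, g2]))
```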

  3. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    Science.gov (United States)

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2016-04-01

    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parametric requirements, while available data are scarce. Therefore, new sources of information such as satellite data are needed. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied, but this kind of data requires working with spatio-temporal information, and there is a lack of statistics and methodologies for incorporating spatio-temporal data into the calibration and validation processes. This research is intended as a contribution in that sense. The ecohydrological model used here was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques was proposed in order to calibrate the model taking into account both the temporal variation and the spatial pattern of the observed NDVI and the simulated LAI. The results demonstrate that: (1) satellite data are an extraordinarily useful source of information and can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated only with satellite data, is able to reproduce the vegetation dynamics (in time and in space) and also the observed discharge at the outlet point; and (3) the proposed automatic calibration methodology works satisfactorily and incorporates spatio-temporal data, that is, it takes into account the temporal variation and the spatial pattern of the analyzed data.
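
    The exact Singular Value Decomposition based objective is not given in this record; the sketch below illustrates one plausible form, comparing the leading temporal and spatial modes of the observed NDVI and the simulated LAI, both arranged as (time, space) matrices. The mode-matching and weighting choices are assumptions.

```python
import numpy as np

def svd_pattern_misfit(obs, sim, n_modes=3):
    """Compare the leading spatio-temporal modes of an observed field (e.g.
    NDVI) and a simulated field (e.g. LAI), both shaped (time, space).
    Only a sketch of an SVD-based objective; not the study's algorithm."""
    def leading_modes(X):
        Xc = X - X.mean(axis=0)                      # remove temporal mean
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :n_modes], Vt[:n_modes], s[:n_modes] / s.sum()
    Uo, Vo, wo = leading_modes(obs)
    Us, Vs, _ = leading_modes(sim)
    # misfit: 1 - |correlation| between corresponding temporal/spatial modes,
    # weighted by the observed explained-variance fractions
    temporal = [1 - abs(np.corrcoef(Uo[:, k], Us[:, k])[0, 1]) for k in range(n_modes)]
    spatial = [1 - abs(np.corrcoef(Vo[k], Vs[k])[0, 1]) for k in range(n_modes)]
    return float(np.dot(wo, np.add(temporal, spatial)))

rng = np.random.default_rng(4)
obs = rng.random((36, 100))                 # 36 months x 100 grid cells
print(svd_pattern_misfit(obs, obs + 0.05 * rng.random((36, 100))))
```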

  4. Neural networks for action representation underlying automatic mimicry: A functional magnetic-resonance imaging and dynamic causal modeling study

    Directory of Open Access Journals (Sweden)

    Akihiro T Sasaki

    2012-08-01

    Full Text Available Automatic mimicry is based on the tight linkage between motor and perception action representations in which internal models play a key role. Based on the anatomical connection, we hypothesized that the direct effective connectivity from the posterior superior temporal sulcus (pSTS) to the ventral premotor area (PMv) formed an inverse internal model, converting visual representation into a motor plan, and that reverse connectivity formed a forward internal model, converting the motor plan into a sensory outcome of action. To test this hypothesis, we employed dynamic causal-modeling analysis with functional magnetic-resonance imaging. Twenty-four normal participants underwent a change-detection task involving two visually-presented balls that were either manually rotated by the investigator’s right hand (‘Hand’) or automatically rotated. The effective connectivity from the pSTS to the PMv was enhanced by hand observation and suppressed by execution, corresponding to the inverse model. Opposite effects were observed from the PMv to the pSTS, suggesting the forward model. Additionally, both execution and hand observation commonly enhanced the effective connectivity from the pSTS to the inferior parietal lobule (IPL), the IPL to the primary sensorimotor cortex (S/M1), the PMv to the IPL, and the PMv to the S/M1. Representation of the hand action therefore was implemented in the motor system including the S/M1. During hand observation, effective connectivity toward the pSTS was suppressed whereas that toward the PMv and S/M1 was enhanced. Thus the action-representation network acted as a dynamic feedback-control system during action observation.

  5. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission...

  6. Comparison of the frequentist properties of Bayes and the maximum likelihood estimators in an age-structured fish stock assessment model

    DEFF Research Database (Denmark)

    Nielsen, Anders; Lewy, Peter

    2002-01-01

    A simulation study was carried out for a separable fish stock assessment model including commercial and survey catch-at-age and effort data. All catches are considered stochastic variables subject to sampling and process variations. The results showed that the Bayes estimator of spawning biomass ...

  7. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  8. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Directory of Open Access Journals (Sweden)

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  9. A hybrid model for automatic identification of risk factors for heart disease.

    Science.gov (United States)

    Yang, Hui; Garibaldi, Jonathan M

    2015-12-01

    Coronary artery disease (CAD) is the leading cause of death in both the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task.

  10. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    Energy Technology Data Exchange (ETDEWEB)

    Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands); Chai, X. [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Palo Alto, California 94305 (United States)

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations, and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation
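
    The Dice similarity coefficient used for validation is a standard overlap measure, DSC = 2|A ∩ B| / (|A| + |B|); a minimal sketch of its computation on binary masks is shown below.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2 * |A intersect B| / (|A| + |B|).  Standard definition, shown
    only to illustrate the validation metric reported in the abstract."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

auto = np.zeros((64, 64), dtype=bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), dtype=bool); manual[12:42, 12:42] = True
print(round(dice_similarity(auto, manual), 3))   # about 0.871
```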

  11. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Science.gov (United States)

    Rauterberg, M

    1993-11-01

    To support the human factors engineer in designing a good user interface, a method has been developed to analyse the empirical data of the interactive user behaviour traced in a finite discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem solution strategies for a given task and the hierarchical structure of the task-subtasks relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly larger than the complexity of task-solving processes carried out by experts.

  12. Automatic brain matter segmentation of computed tomography images using a statistical model: A tool to gain working time!

    Science.gov (United States)

    Bertè, Francesco; Lamponi, Giuseppe; Bramanti, Placido; Calabrò, Rocco S

    2015-10-01

    Brain computed tomography (CT) is a useful diagnostic tool for the evaluation of several neurological disorders due to its accuracy, reliability, safety and wide availability. In this field, a potentially interesting research topic is the automatic segmentation and recognition of medical regions of interest (ROIs). Herein, we propose a novel automated method, based on the active appearance model (AAM), for the segmentation of brain matter in CT images to assist radiologists in the evaluation of the images. The method, which was applied to 54 CT images from a sample of outpatients affected by cognitive impairment, enabled us to generate a model that overlaps the original image with good precision. Since CT neuroimaging is in widespread use for detecting neurological disease, including neurodegenerative conditions, the development of automated tools enabling technicians and physicians to reduce working time and reach a more accurate diagnosis is needed.

  13. An active contour-based atlas registration model applied to automatic subthalamic nucleus targeting on MRI: method and validation.

    Science.gov (United States)

    Duay, Valérie; Bresson, Xavier; Castro, Javier Sanchez; Pollo, Claudio; Cuadra, Meritxell Bach; Thiran, Jean-Philippe

    2008-01-01

    This paper presents a new nonparametric atlas registration framework, derived from the optical flow model and the active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted based on the position of surrounding visible structures, namely the lateral and third ventricles. A STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve the results of state-of-the-art targeting methods and at the same time to reduce the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.

  14. Language modeling for automatic speech recognition of inflective languages an applications-oriented approach using lexical data

    CERN Document Server

    Donaj, Gregor

    2017-01-01

    This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...

  15. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    Science.gov (United States)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The revisit time depends upon latitude and varies from two (at low latitudes) to ten (at high latitudes) revisits per 22-day orbit repeat period. The high resolution and global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at a daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE generated with a "true" set of parameters. Only pixels representing rivers larger than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference

  16. Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes

    DEFF Research Database (Denmark)

    Zimmermann, Ralf

    2010-01-01

    The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood......: optimally trained nondegenerate spatial Gaussian processes cannot feature arbitrary ill-conditioned correlation matrices. The implication of this theorem on Kriging hyperparameter optimization is exposed. A nonartificial example is presented, where maximum likelihood-based Kriging model training...
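
    For reference, a textbook sketch of the concentrated Kriging log-likelihood with a Gaussian correlation function is given below, together with the condition number of the correlation matrix that the abstract refers to; this is not the paper's implementation.

```python
import numpy as np

def kriging_neg_log_likelihood(theta, X, y, nugget=1e-8):
    """Concentrated negative log-likelihood (up to additive constants) of a
    simple Kriging model with a constant mean and Gaussian correlation
    R_ij = exp(-theta * ||x_i - x_j||^2).  A textbook sketch only."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    R = np.exp(-theta * d2) + nugget * np.eye(n)
    L = np.linalg.cholesky(R)
    ones = np.ones(n)
    Ri_1, Ri_y = np.linalg.solve(R, ones), np.linalg.solve(R, y)
    mu = (ones @ Ri_y) / (ones @ Ri_1)           # generalized least squares mean
    resid = y - mu
    sigma2 = resid @ np.linalg.solve(R, resid) / n   # profiled process variance
    logdetR = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * (n * np.log(sigma2) + logdetR), np.linalg.cond(R)

rng = np.random.default_rng(5)
X = rng.random((30, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=30)
for theta in (0.01, 1.0, 100.0):
    nll, cond = kriging_neg_log_likelihood(theta, X, y)
    print(f"theta={theta:7.2f}  nll={nll:8.3f}  cond(R)={cond:.2e}")
```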

  17. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanism and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad-hoc, empirical assumptions, thus their performance is questionable. In this work we apply a conventional likelihood method to measure a skill of forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  18. Automatic Verification of Biochemical Networks Using a Model Checking Method

    Institute of Scientific and Technical Information of China (English)

    Jinkyung Kim; Younghee Lee; Il Moon

    2008-01-01

    This study focuses on automatic searching and verifying methods for the reachability, transition logics and hierarchical structure in all possible paths of biological processes using model checking. The automatic search and verification of alternative paths within complex and large networks in a biological process can provide a considerable number of solutions, which is difficult to handle manually. Model checking is an automatic method for verifying whether a circuit or a condition, expressed as a concurrent transition system, satisfies a set of properties expressed in a temporal logic, such as computational tree logic (CTL). This article demonstrates that model checking is feasible for biochemical network verification and shows certain advantages over simulation for querying and searching of special behavioral properties in biochemical processes.

  19. Automatic delineation of geomorphological slope units with r.slopeunits v1.0 and their optimization for landslide susceptibility modeling

    Science.gov (United States)

    Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto

    2016-11-01

    Automatic subdivision of landscapes into terrain units remains a challenge. Slope units are terrain units bounded by drainage and divide lines, but their use in hydrological and geomorphological studies is limited because of the lack of reliable software for their automatic delineation. We present the r.slopeunits software for the automatic delineation of slope units, given a digital elevation model and a few input parameters. We further propose an approach for the selection of optimal parameters controlling the terrain subdivision for landslide susceptibility modeling. We tested the software and the optimization approach in central Italy, where terrain, landslide, and geo-environmental information was available. The software was capable of capturing the variability of the landscape and partitioning the study area into slope units suited for landslide susceptibility modeling and zonation. We expect r.slopeunits to be used in different physiographical settings for the production of reliable and reproducible landslide susceptibility zonations.

  20. Engineering model of the electric drives of separation device for simulation of automatic control systems of reactive power compensation by means of serially connected capacitors

    Science.gov (United States)

    Juromskiy, V. M.

    2016-09-01

    A mathematical model of the electric drive of a high-speed separation device is developed in the dynamic-system modeling environment Simulink, MATLAB. The model is focused on the study of automatic control systems of the power factor (cos φ) of an actuator, which compensate the reactive component of the total power by switching a capacitor bank in series with the actuator. The model is based on the methodology of structural modeling of dynamic processes.

  1. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.;

    2014-01-01

    Regional hydrological models are important tools in water resources management. Model prediction uncertainty is primarily due to structural (geological) non-uniqueness which makes sampling of the structural model space necessary to estimate prediction uncertainties. Geological structures and hete...

  2. Maximum likelihood estimation for life distributions with competing failure modes

    Science.gov (United States)

    Sidik, S. M.

    1979-01-01

    The general model for the competing failure modes assuming that location parameters for each mode are expressible as linear functions of the stress variables and the failure modes act independently is presented. The general form of the likelihood function and the likelihood equations are derived for the extreme value distributions, and solving these equations using nonlinear least squares techniques provides an estimate of the asymptotic covariance matrix of the estimators. Monte-Carlo results indicate that, under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
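
    A simplified sketch of such a likelihood is shown below for two independent competing Weibull modes with known failure mode per unit: the observed mode contributes its density and the other mode its survival function. The cited model's extreme-value distributions with stress-dependent location parameters are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def _weibull_logpdf(t, k, lam):
    z = t / lam
    return np.log(k / lam) + (k - 1) * np.log(z) - z**k

def _weibull_logsf(t, k, lam):
    return -(t / lam) ** k

def competing_loglik(params, times, modes):
    """Log-likelihood for two independent competing Weibull failure modes:
    the observed failure mode contributes its density, the other mode its
    survival function at the failure time."""
    k1, l1, k2, l2 = params
    t = np.asarray(times, float)
    m = np.asarray(modes)
    ll = np.where(m == 1,
                  _weibull_logpdf(t, k1, l1) + _weibull_logsf(t, k2, l2),
                  _weibull_logpdf(t, k2, l2) + _weibull_logsf(t, k1, l1))
    return ll.sum()

# toy data: each unit fails by whichever latent mode occurs first
rng = np.random.default_rng(6)
t1 = 2.0 * rng.weibull(1.5, 300)
t2 = 3.0 * rng.weibull(2.5, 300)
times, modes = np.minimum(t1, t2), np.where(t1 <= t2, 1, 2)

# maximize with a generic optimizer; log-parametrization keeps parameters positive
res = minimize(lambda p: -competing_loglik(np.exp(p), times, modes),
               x0=np.zeros(4), method="Nelder-Mead")
print(np.exp(res.x))   # estimated (shape1, scale1, shape2, scale2)
```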

  3. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference... is implemented using Markov chain Monte Carlo (MCMC) methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared...

  4. Semiparametric maximum likelihood for nonlinear regression with measurement errors.

    Science.gov (United States)

    Suh, Eun-Young; Schafer, Daniel W

    2002-06-01

    This article demonstrates semiparametric maximum likelihood estimation of a nonlinear growth model for fish lengths using imprecisely measured ages. Data on the species corvina reina, found in the Gulf of Nicoya, Costa Rica, consist of lengths and imprecise ages for 168 fish and precise ages for a subset of 16 fish. The statistical problem may therefore be classified as nonlinear errors-in-variables regression with internal validation data. Inferential techniques are based on ideas extracted from several previous works on semiparametric maximum likelihood for errors-in-variables problems. The illustration of the example clarifies practical aspects of the associated computational, inferential, and data analytic techniques.

  5. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\mathrm{T}}$ and OSE$_{\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...

  6. IMPROVING VOICE ACTIVITY DETECTION VIA WEIGHTING LIKELIHOOD AND DIMENSION REDUCTION

    Institute of Scientific and Technical Information of China (English)

    Wang Huanliang; Han Jiqing; Li Haifeng; Zheng Tieran

    2008-01-01

    The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighting likelihood method is proposed for noise-robust VAD. With this method, the contribution of dynamic features to the likelihood score can be increased, which consequently improves the noise robustness of VAD. A divergence-based dimension reduction method is also proposed to save computation; it removes the feature dimensions with smaller divergence values at the cost of slightly degraded performance. Experimental results on the Aurora II database show that the detection performance in noisy environments can be remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using weighted likelihoods on the dimension-reduced features obtains comparable, or even better, performance than the original full-dimensional features.

  7. Penalized maximum likelihood estimation for generalized linear point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard

    2010-01-01

    A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood....... Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient...... of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat....

  8. Adaptive Parallel Tempering for Stochastic Maximum Likelihood Learning of RBMs

    CERN Document Server

    Desjardins, Guillaume; Bengio, Yoshua

    2010-01-01

    Restricted Boltzmann Machines (RBM) have attracted a lot of attention of late, as one of the principal building blocks of deep networks. Training RBMs remains problematic, however, because of the intractability of their partition function. The maximum likelihood gradient requires a very robust sampler which can accurately sample from the model despite the loss of ergodicity often incurred during learning. While using Parallel Tempering in the negative phase of Stochastic Maximum Likelihood (SML-PT) helps address the issue, it imposes a trade-off between computational complexity and high ergodicity, and requires careful hand-tuning of the temperatures. In this paper, we show that this trade-off is unnecessary. The choice of optimal temperatures can be automated by minimizing average return time (a concept first proposed by [Katzgraber et al., 2006]) while chains can be spawned dynamically, as needed, thus minimizing the computational overhead. We show, on a synthetic dataset, that this results in better likelihood ...
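
    The generic parallel tempering swap move underlying SML-PT is sketched below; the adaptive temperature placement via average return time and the dynamic spawning of chains described in the abstract are not reproduced. The toy states and energy function are placeholders.

```python
import numpy as np

def pt_swap_step(states, energies, betas, rng):
    """One sweep of Metropolis swap proposals between adjacent parallel
    tempering chains.  states[i] is the current configuration of chain i,
    energies[i] its energy and betas[i] its inverse temperature."""
    for i in range(len(betas) - 1):
        # swap acceptance: min(1, exp((beta_i - beta_{i+1}) * (E_i - E_{i+1})))
        log_alpha = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if np.log(rng.random()) < log_alpha:
            states[i], states[i + 1] = states[i + 1], states[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
    return states, energies

# toy setup: 5 chains over binary states with a placeholder energy function
rng = np.random.default_rng(7)
betas = np.linspace(0.2, 1.0, 5)                        # coldest chain last
states = [rng.integers(0, 2, size=20) for _ in betas]
energies = [float(-s.sum()) for s in states]
states, energies = pt_swap_step(states, energies, betas, rng)
print(energies)
```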

  9. Fully automatic recognition of the temporal phases of facial actions.

    Science.gov (United States)

    Valstar, Michel F; Pantic, Maja

    2012-02-01

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.

  10. GPU accelerated likelihoods for stereo-based articulated tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    2010-01-01

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  11. GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  12. Likelihood Analysis of Supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY; Costa, J. C. [Imperial Coll., London; Sakurai, K. [Warsaw U.; Borsato, M. [Santiago de Compostela U.; Buchmueller, O. [Imperial Coll., London; Cavanaugh, R. [Illinois U., Chicago; Chobanova, V. [Santiago de Compostela U.; Citron, M. [Imperial Coll., London; De Roeck, A. [Antwerp U.; Dolan, M. J. [Melbourne U.; Ellis, J. R. [King' s Coll. London; Flächer, H. [Bristol U.; Heinemeyer, S. [Madrid, IFT; Isidori, G. [Zurich U.; Lucio, M. [Santiago de Compostela U.; Martínez Santos, D. [Santiago de Compostela U.; Olive, K. A. [Minnesota U., Theor. Phys. Inst.; Richards, A. [Imperial Coll., London; de Vries, K. J. [Imperial Coll., London; Weiglein, G. [DESY

    2016-10-31

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel ${\tilde u_R}/{\tilde c_R} - \tilde{\chi}^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ${\tilde

  13. Automatic 3D City Modeling Using a Digital Map and Panoramic Images from a Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    Hyungki Kim

    2014-01-01

    Full Text Available Three-dimensional city models are becoming a valuable resource because of their close geospatial, geometrical, and visual relationship with the physical world. However, ground-oriented applications in virtual reality, 3D navigation, and civil engineering require a novel modeling approach, because the existing large-scale 3D city modeling methods do not provide rich visual information at ground level. This paper proposes a new framework for generating 3D city models that satisfy both the visual and the physical requirements for ground-oriented virtual reality applications. To ensure its usability, the framework must be cost-effective and allow for automated creation. To achieve these goals, we leverage a mobile mapping system that automatically gathers high-resolution images and supplements sensor information such as the position and direction of the captured images. To resolve problems stemming from sensor noise and occlusions, we develop a fusion technique to incorporate digital map data. This paper describes the major processes of the overall framework and the proposed techniques for each step and presents experimental results from a comparison with an existing 3D city model.

  14. Empirical Likelihood Inference for the Mean of a Linear EV Model with Missing Data

    Institute of Scientific and Technical Information of China (English)

    魏成花; 胡锡健

    2014-01-01

    The missing response problem in linear EV (errors-in-variables) models is investigated. Using the inverse probability-weighted method, an empirical log-likelihood ratio statistic for the mean in the model is constructed. It is proved that the proposed statistic is asymptotically chi-squared under suitable conditions. With this result, a confidence region for the mean can be constructed.

  15. Empirical Likelihood Confidence Regions of Parameters in Nonlinear EV Models under Missing Data

    Institute of Scientific and Technical Information of China (English)

    刘强; 薛留根

    2012-01-01

    The missing response problem in nonlinear EV (errors-in-variables) models is considered, where the explanatory variable X is measured with error. With the help of validation data, two empirical log-likelihood ratio statistics for the unknown parameters in the model are proposed. It is proved that the proposed statistics are asymptotically chi-squared distributed under some mild conditions, and hence can be used to construct confidence regions of the parameters.

  16. RELM (the Working Group for the Development of Region Earthquake Likelihood Models) and the Development of new, Open-Source, Java-Based (Object Oriented) Code for Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Field, E. H.

    2001-12-01

    Given problems with virtually all previous earthquake-forecast models for southern California, and a current lack of consensus on how such models should be constructed, a joint SCEC-USGS sponsored working group for the development of Regional Earthquake Likelihood Models (RELM) has been established (www.relm.org). The goals are as follows: 1) To develop and test a range of viable earthquake-potential models for southern California (not just one "consensus" model); 2) To examine and compare the implications of each model with respect to probabilistic seismic-hazard estimates (which will not only quantify existing hazard uncertainties, but will also indicate how future research should be focused in order to reduce the uncertainties); and 3) To design and document conclusive tests of each model with respect to existing and future geophysical observations. The variety of models under development reflects the variety of geophysical constraints available; these include geological fault information, historical seismicity, geodetic observations, stress-transfer interactions, and foreshock/aftershock statistics. One reason for developing and testing a range of models is to evaluate the extent to which any one can be exported to another region where the options are more limited. RELM is not intended to be a one-time effort. Rather, we are building an infrastructure that will facilitate an ongoing incorporation of new scientific findings into seismic-hazard models. The effort involves the development of several community models and databases, one of which is new Java-based code for probabilistic seismic hazard analysis (PSHA). Although several different PSHA codes presently exist, none are open source, well documented, and written in an object-oriented programming language (which is ideally suited for PSHA). Furthermore, we need code that is flexible enough to accommodate the wide range of models currently under development in RELM. The new code is being developed under

  17. Automatic Reading

    Institute of Scientific and Technical Information of China (English)

    胡迪

    2007-01-01

    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  18. Improved Likelihood Function in Particle-based IR Eye Tracking

    DEFF Research Database (Denmark)

    Satria, R.; Sorensen, J.; Hammoud, R.

    2005-01-01

    In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... performance in challenging sequences with test subjects showing large head movements and under significant light conditions....

  19. A statistically based seasonal precipitation forecast model with automatic predictor selection and its application to central and south Asia

    Science.gov (United States)

    Gerlitz, Lars; Vorogushyn, Sergiy; Apel, Heiko; Gafurov, Abror; Unger-Shayesteh, Katy; Merz, Bruno

    2016-11-01

    The study presents a statistically based seasonal precipitation forecast model, which automatically identifies suitable predictors from globally gridded sea surface temperature (SST) and climate variables by means of an extensive data-mining procedure and explicitly avoids the utilization of typical large-scale climate indices. This leads to an enhanced flexibility of the model and enables its automatic calibration for any target area without any prior assumption concerning adequate predictor variables. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is constructed, by means of the preliminary generated predictor variables. Monthly predictions are aggregated to running 3-month periods in order to generate a seasonal precipitation forecast. The model is applied and evaluated for selected target regions in central and south Asia. Particularly for winter and spring in westerly-dominated central Asia, correlation coefficients between forecasted and observed precipitation reach values up to 0.48, although the variability of precipitation rates is strongly underestimated. Likewise, for the monsoonal precipitation amounts in the south Asian target area, correlations of up to 0.5 were detected. The skill of the model for the dry winter season over south Asia is found to be low. A sensitivity analysis with well-known climate indices, such as the El Niño- Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and the East Atlantic (EA) pattern, reveals the major large-scale controlling mechanisms of the seasonal precipitation climate for each target area. For the central Asian target areas, both
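
    As a minimal sketch of the forecasting step only (one random forest per target month and lead time), the example below fits a scikit-learn regressor to synthetic predictor-region series; the cell-wise correlation analysis and variability-based clustering that generate the real predictors are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for aggregated predictor-region series (e.g. mean SST of
# each predictor region) and a target seasonal precipitation anomaly.
rng = np.random.default_rng(8)
n_years, n_predictors = 30, 6
X = rng.normal(size=(n_years, n_predictors))
y = X[:, 0] * 0.8 - X[:, 2] * 0.5 + rng.normal(scale=0.3, size=n_years)

# one random forest would be trained per target month and lead time
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X[:-5], y[:-5])                     # hold out the last 5 years
forecast = model.predict(X[-5:])
print(np.corrcoef(forecast, y[-5:])[0, 1])    # skill on held-out years
```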

  20. On the existence of maximum likelihood estimates for presence-only data

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    Presence-only data can be used to determine resource selection and estimate a species’ distribution. Maximum likelihood is a common parameter estimation method used for species distribution models. Maximum likelihood estimates, however, do not always exist for a commonly used species distribution model – the Poisson point process.
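
    For context, the log-likelihood of an inhomogeneous Poisson point process with a log-linear intensity is sketched below, with the intensity integral approximated by weighted quadrature (background) points; whether its maximizer exists depends on the configuration of the presence data, which is the issue the record refers to.

```python
import numpy as np

def ipp_log_likelihood(beta, X_presence, X_quad, quad_weights):
    """Log-likelihood of an inhomogeneous Poisson point process with
    log-linear intensity lambda(s) = exp(x(s)' beta): the sum of the log
    intensity at presence locations minus the integral of the intensity,
    approximated here by weighted quadrature (background) points."""
    eta_pres = X_presence @ beta
    eta_quad = X_quad @ beta
    return eta_pres.sum() - np.sum(quad_weights * np.exp(eta_quad))

rng = np.random.default_rng(9)
X_pres = np.column_stack([np.ones(40), rng.normal(size=40)])      # 40 presences
X_quad = np.column_stack([np.ones(1000), rng.normal(size=1000)])  # background grid
w = np.full(1000, 100.0 / 1000)                                    # area / #points
print(ipp_log_likelihood(np.array([0.0, 0.5]), X_pres, X_quad, w))
```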