WorldWideScience

Sample records for automatic likelihood model

  1. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric mode

  2. Model Fit after Pairwise Maximum Likelihood.

    Science.gov (United States)

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136

  3. Likelihood smoothing using gravitational wave surrogate models

    CERN Document Server

    Cole, Robert H

    2014-01-01

    Likelihood surfaces in the parameter space of gravitational wave signals can contain many secondary maxima, which can prevent search algorithms from finding the global peak and correctly mapping the distribution. Traditional schemes to mitigate this problem maintain the number of secondary maxima and thus retain the possibility that the global maximum will remain undiscovered. By contrast, the recently proposed technique of likelihood transform can modify the structure of the likelihood surface to reduce its complexity. We present a practical method to carry out a likelihood transform using a Gaussian smoothing kernel, utilising gravitational wave surrogate models to perform the smoothing operation analytically. We demonstrate the approach with Newtonian and post-Newtonian waveform models for an inspiralling circular compact binary.
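
    A minimal numerical sketch of the smoothing step, assuming a toy one-dimensional likelihood with several secondary maxima. The paper carries out the convolution analytically through surrogate waveform models; here SciPy's Gaussian filter simply stands in for the kernel, to show how the transform removes secondary peaks:

        # Sketch: Gaussian likelihood transform on a toy 1-D likelihood surface.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        x = np.linspace(-10, 10, 2001)
        # Toy likelihood: a global peak near x = 1 modulated by an oscillation
        # that creates many secondary maxima (an assumed shape, not a GW likelihood).
        log_like = -0.5 * (x - 1.0) ** 2 + 0.8 * np.cos(6.0 * x)
        like = np.exp(log_like - log_like.max())

        # Likelihood transform: convolve with a Gaussian kernel (width in grid points).
        like_smooth = gaussian_filter1d(like, sigma=50)

        def count_local_maxima(y):
            return int((np.diff(np.sign(np.diff(y))) < 0).sum())

        print("local maxima before:", count_local_maxima(like))
        print("local maxima after :", count_local_maxima(like_smooth))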

  4. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum...

  5. Likelihood-Based Climate Model Evaluation

    Science.gov (United States)

    Braverman, Amy; Cressie, Noel; Teixeira, Joao

    2012-01-01

    Climate models are deterministic, mathematical descriptions of the physics of climate. Confidence in predictions of future climate is increased if the physics are verifiably correct. A necessary (but not sufficient) condition is that past and present climate be simulated well. We quantify the likelihood that a (summary statistic computed from a) set of observations arises from a physical system with the characteristics captured by a model-generated time series. Given a prior on models, we can go further and obtain the posterior distribution of the model given the observations.
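
    A minimal sketch of this comparison, with synthetic data standing in for both the observations and the model-generated time series. The sampling distribution of the summary statistic under each model is estimated by a kernel density over repeated model runs; all names and numbers are illustrative assumptions:

        # Sketch: likelihood of an observed summary statistic under two candidate models.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        observed_series = rng.normal(0.2, 1.0, 120)      # stand-in for observations
        obs_stat = observed_series.mean()                # summary statistic

        likelihoods = {}
        for name, bias in {"model_A": 0.1, "model_B": 0.8}.items():
            # Summary statistic computed from many model-generated series of equal length.
            sim_stats = rng.normal(bias, 1.0, size=(500, 120)).mean(axis=1)
            likelihoods[name] = float(gaussian_kde(sim_stats)(obs_stat)[0])

        # Higher likelihood: the observed statistic is more plausible under that model.
        print(likelihoods)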

  6. Evaluating Network Models: A Likelihood Analysis

    CERN Document Server

    Wang, Wen-Qiang; Zhou, Tao

    2011-01-01

    Many models are put forward to mimic the evolution of real networked systems. A well-accepted way to judge their validity is to compare the modeling results with real networks with respect to several structural features. Even for a specific real network, we cannot fairly evaluate the goodness of different models, since there are too many structural features and no criterion for selecting and weighting them. Motivated by studies on link prediction algorithms, we propose a unified method to evaluate network models via the comparison of the likelihoods of the currently observed network under different models, with the assumption that the higher the likelihood, the better the model. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both models are better than the Barabási-Albert (BA) and Erdős-Rényi (ER) models. Our metho...

  7. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  8. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
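
    A small numerical sketch of the central computation, assuming both densities are zero-mean Gaussians so the discrete convolution can be checked against the closed-form answer:

        # Sketch: density of the actual innovation as a convolution of two known densities.
        import numpy as np
        from scipy.stats import norm

        x = np.linspace(-20, 20, 4001)
        dx = x[1] - x[0]
        p_innovation = norm.pdf(x, 0.0, 1.5)   # theoretical innovation (matched model)
        p_mismatch = norm.pdf(x, 0.0, 2.0)     # distance between matched and mismatched model

        # Convolution of the two probability density functions.
        p_actual = np.convolve(p_innovation, p_mismatch, mode="same") * dx

        # For Gaussians the exact result is N(0, sqrt(1.5**2 + 2.0**2)).
        print(p_actual[len(x) // 2], norm.pdf(0.0, 0.0, np.hypot(1.5, 2.0)))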

  9. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity in terms of a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
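
    A minimal sketch of fitting a two-component normal mixture by maximum likelihood with the EM algorithm, on synthetic data standing in for the price series analysed in the paper (starting values, sample sizes and mixture parameters are arbitrary assumptions):

        # Sketch: two-component Gaussian mixture fitted by EM (maximum likelihood).
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-1.0, 0.5, 300), rng.normal(2.0, 1.0, 700)])

        pi = np.array([0.5, 0.5])      # initial mixing weights
        mu = np.array([-0.5, 1.0])     # initial component means
        sd = np.array([1.0, 1.0])      # initial component standard deviations

        def component_pdfs(x, mu, sd):
            return np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

        for _ in range(200):                                   # EM iterations
            dens = pi * component_pdfs(x, mu, sd)              # weighted densities, shape (n, 2)
            resp = dens / dens.sum(axis=1, keepdims=True)      # E-step: responsibilities
            nk = resp.sum(axis=0)                              # M-step: update parameters
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        loglik = np.log((pi * component_pdfs(x, mu, sd)).sum(axis=1)).sum()
        print("weights", pi.round(3), "means", mu.round(3), "sds", sd.round(3), "logL", round(loglik, 2))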

  10. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    Qin Yongsong; Jiang Bo; Li Yufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  11. Gaussian Process Pseudo-Likelihood Models for Sequence Labeling

    OpenAIRE

    Srijith, P. K.; Balamurugan, P.; Shevade, Shirish

    2014-01-01

    Several machine learning problems arising in natural language processing can be modeled as a sequence labeling problem. We provide Gaussian process models based on pseudo-likelihood approximation to perform sequence labeling. Gaussian processes (GPs) provide a Bayesian approach to learning in a kernel based framework. The pseudo-likelihood model enables one to capture long range dependencies among the output components of the sequence without becoming computationally intractable. We use an ef...

  12. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β......′X_{t} is fractional of order d-b. The parameters d and b satisfy either d≥b≥1/2, d=b≥1/2, or d=d_{0}≥b≥1/2. Our main technical contribution is the proof of consistency of the maximum likelihood estimators on the set 1/2≤b≤d≤d_{1} for any d_{1}≥d_{0}. To this end, we consider the conditional likelihood as a...... Gaussian. We also find the asymptotic distribution of the likelihood ratio test for cointegration rank, which is a functional of fractional Brownian motion of type II....

  13. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
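
    A simplified sketch of the weighting idea: a pairwise composite log-likelihood in which pairs of sites beyond a chosen taper range are dropped. For brevity the bivariate margins below are Gaussian with an exponential correlation model rather than max-stable bivariate densities, and the site locations and observations are simulated:

        # Sketch: tapered pairwise (composite) log-likelihood over spatial sites.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(1)
        coords = rng.uniform(0, 10, size=(30, 2))                 # site locations (assumed)
        dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        y = rng.standard_normal(30)                               # one replicate of observations

        def tapered_pairwise_loglik(range_par, taper_range):
            ll = 0.0
            for i in range(len(y)):
                for j in range(i + 1, len(y)):
                    if dist[i, j] > taper_range:                  # taper: exclude distant pairs
                        continue
                    rho = np.exp(-dist[i, j] / range_par)         # stand-in bivariate dependence
                    cov = [[1.0, rho], [rho, 1.0]]
                    ll += multivariate_normal.logpdf(y[[i, j]], mean=[0.0, 0.0], cov=cov)
            return ll

        print(tapered_pairwise_loglik(range_par=2.0, taper_range=3.0))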

  14. Testing, monitoring, and dating structural changes in maximum likelihood models

    OpenAIRE

    Zeileis, Achim; Shah, Ajay; Patnaik, Ila

    2008-01-01

    A unified toolbox for testing, monitoring, and dating structural changes is provided for likelihood-based regression models. In particular, least-squares methods for dating breakpoints are extended to maximum likelihood estimation. The usefulness of all techniques is illustrated by assessing the stability of de facto exchange rate regimes. The toolbox is used for investigating the Chinese exchange rate regime after China gave up on a fixed exchange rate to the US dollar in 2005 and tracking t...

  15. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d-b; where d ≥ b > 1/2 are parameters to be estimated. We model the data X1,...,XT given the initial val...

  16. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b; where d ≥ b > 1/2 are parameters to be estimated. We model the data X1, ..., XT given the initial...

  17. Non-Gaussian bifurcating models and quasi-likelihood estimation

    OpenAIRE

    Basawa, I. V.; J. Zhou

    2004-01-01

    A general class of Markovian non-Gaussian bifurcating models for cell lineage data is presented. Examples include bifurcating autoregression, random coefficient autoregression, bivariate exponential, bivariate gamma, and bivariate Poisson models. Quasi-likelihood estimation for the model parameters and large-sample properties of the estimates are discussed.

  18. Empirical likelihood-based evaluations of Value at Risk models

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models, and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.

  19. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    DEFF Research Database (Denmark)

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel;

    2011-01-01

    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem. Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number of...

  20. How to Maximize the Likelihood Function for a DSGE Model

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003......). Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown...... structural parameters in the likelihood function, the CMA-ES routine finds the global optimum in 95% of our test economies compared to 89% for Simulated Annealing. When the number of unknown structural parameters in the likelihood function increases to 20 and 35, then the CMA-ES routine finds the global...
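
    A minimal sketch of the general strategy, maximizing a deliberately multimodal objective with a global optimizer. SciPy's dual_annealing stands in here for the Simulated Annealing and CMA-ES routines benchmarked in the paper, and the toy objective is not a DSGE likelihood:

        # Sketch: global optimization of a multimodal negative log-likelihood surrogate.
        import numpy as np
        from scipy.optimize import dual_annealing

        def neg_log_likelihood(theta):
            # Toy objective with many local optima and a global optimum at theta = (1, ..., 1).
            return np.sum((theta - 1.0) ** 2) + 2.0 * np.sum(1.0 - np.cos(5.0 * (theta - 1.0)))

        bounds = [(-5.0, 5.0)] * 5                 # 5 unknown "structural parameters"
        result = dual_annealing(neg_log_likelihood, bounds, seed=0)
        print("best parameters:", result.x.round(3), "objective:", round(result.fun, 6))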

  1. HALM: A Hybrid Asperity Likelihood Model for Italy

    Science.gov (United States)

    Gulia, L.; Wiemer, S.

    2009-04-01

    The Asperity Likelihood Model (ALM), first developed and currently tested for California, hypothesizes that small-scale spatial variations in the b-value of the Gutenberg and Richter relationship play a central role in forecasting future seismicity (Wiemer and Schorlemmer, SRL, 2007). The physical basis of the model is the concept that the local b-value is inversely dependent on applied shear stress. Thus low b-values indicate that large events are more likely to be generated, whereas the high b-values (b > 1.1) found, for example, in creeping sections of faults suggest a lower seismic hazard. To test this model in a reproducible and prospective way suitable for the requirements of the CSEP initiative (www.cseptesting.org), the b-value variability is mapped on a grid. First, using the entire dataset above the overall magnitude of completeness, the regional b-value is estimated. This value is then compared to the one locally estimated at each grid-node for a number of radii; we use the local value if its likelihood score, corrected for the degrees of freedom using the Akaike Information Criterion, suggests doing so. We are currently calibrating the ALM model for implementation in the Italian testing region, the first region within the CSEP EU testing Center (eu.cseptesting.org) for which fully prospective tests of earthquake likelihood models will commence in Europe. We are also developing a modified approach, a 'hybrid' between a grid-based and a zoning one: the HALM (Hybrid Asperity Likelihood Model). According to HALM, the Italian territory is divided into three distinct regions depending on the main tectonic elements, combined with knowledge derived from GPS networks, seismic profile interpretation, borehole breakouts and the focal mechanisms of the events. The local b-value variability was thus mapped using three independent overall b-values. We evaluate the performance of the two models in retrospective tests using the standard CSEP likelihood test.

  2. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    Science.gov (United States)

    Heesacker, Martin

    1986-01-01

    Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…

  3. On penalized maximum likelihood estimation of approximate factor models

    OpenAIRE

    Wang, Shaoxin; Yang, Hu; Yao, Chaoli

    2016-01-01

    In this paper, we mainly focus on the estimation of high-dimensional approximate factor models. We rewrite the estimation of the error covariance matrix in a new form which shares similar properties with the penalized maximum likelihood covariance estimator given by Bien and Tibshirani (2011). Based on Lagrangian duality, we propose an APG algorithm to give a positive definite estimate of the error covariance matrix. The new algorithm for the estimation of the approximate factor model has a desirable...

  4. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    OpenAIRE

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. ...

  5. Asymptotic law of likelihood ratio for multilayer perceptron models

    CERN Document Server

    Rynkiewicz, Joseph

    2010-01-01

    We consider regression models involving multilayer perceptrons (MLP) with one hidden layer and Gaussian noise. The data are assumed to be generated by a true MLP model and the parameters of the MLP are estimated by maximizing the likelihood of the model. When the number of hidden units of the true model is known, the asymptotic distribution of the maximum likelihood estimator (MLE) and of the likelihood ratio (LR) statistic is easy to compute and converges to a χ² law. However, if the number of hidden units is over-estimated, the Fisher information matrix of the model is singular and the asymptotic behavior of the MLE is unknown. This paper deals with this case and gives the exact asymptotic law of the LR statistic. Namely, if the parameters of the MLP lie in a suitable compact set, we show that the LR statistic is the supremum of the square of a Gaussian process indexed by a class of limit score functions.

  6. Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models

    OpenAIRE

    Goncalves, Silvia; White, Halbert

    2002-01-01

    The bootstrap is an increasingly popular method for performing statistical inference. This paper provides the theoretical foundation for using the bootstrap as a valid tool of inference for quasi-maximum likelihood estimators (QMLE). We provide a unified framework for analyzing bootstrapped extremum estimators of nonlinear dynamic models for heterogeneous dependent stochastic processes. We apply our results to two block bootstrap methods, the moving blocks bootstrap of Künsch (1989) and Liu a...

  7. Applications of the Likelihood Theory in Finance: Modelling and Pricing

    CERN Document Server

    Janssen, Arnold

    2012-01-01

    This paper discusses the connection between mathematical finance and statistical modelling which turns out to be more than a formal mathematical correspondence. We like to figure out how common results and notions in statistics and their meaning can be translated to the world of mathematical finance and vice versa. A lot of similarities can be expressed in terms of LeCam's theory for statistical experiments which is the theory of the behaviour of likelihood processes. For positive prices the arbitrage free financial assets fit into filtered experiments. It is shown that they are given by filtered likelihood ratio processes. From the statistical point of view, martingale measures, completeness and pricing formulas are revisited. The pricing formulas for various options are connected with the power functions of tests. For instance the Black-Scholes price of a European option has an interpretation as Bayes risk of a Neyman Pearson test. Under contiguity the convergence of financial experiments and option prices ...

  8. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties of...... trends and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A......

  9. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  10. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions.

    Science.gov (United States)

    Barrett, Harrison H; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  11. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    Science.gov (United States)

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2007-02-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack-Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack-Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods.

  12. Adaptive quasi-likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    CHEN Xia; CHEN Xiru

    2005-01-01

    This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal and the covariance matrix of the limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.

  13. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of the process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculation of a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
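
    A minimal sketch of the kind of likelihood function used in such calibrations: an i.i.d. Gaussian log-likelihood measuring the goodness-of-fit between simulated and measured values, with a one-parameter stand-in for a Biome-BGC or PaSim run (the model function, error standard deviation and data are invented for illustration):

        # Sketch: Gaussian goodness-of-fit log-likelihood for model calibration.
        import numpy as np

        def run_model(param, drivers):
            # Stand-in for a process-based model run returning a simulated flux series.
            return param * drivers

        def gaussian_loglik(param, drivers, measured, sigma=0.5):
            resid = measured - run_model(param, drivers)
            return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

        rng = np.random.default_rng(2)
        drivers = rng.uniform(0, 1, 100)
        measured = 2.0 * drivers + rng.normal(0, 0.5, 100)     # synthetic "observations"

        grid = np.linspace(0.5, 3.5, 61)
        loglik = np.array([gaussian_loglik(p, drivers, measured) for p in grid])
        print("maximum-likelihood parameter on the grid:", grid[loglik.argmax()])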

  14. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy Wilks' phenomenon. Hence, we make use of bias correction to construct EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by combining the advantages of both the EL and the score test method, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^(-1/2), the fastest possible rate for lack-of-fit testing; they involve weight functions, which provide the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.

  15. The elaboration likelihood model and communication about food risks.

    Science.gov (United States)

    Frewer, L J; Howard, C; Hedderley, D; Shepherd, R

    1997-12-01

    Factors such as hazard type and source credibility have been identified as important in the establishment of effective strategies for risk communication. The elaboration likelihood model was adapted to investigate the potential impact of hazard type, information source, and persuasive content of information on individual engagement in elaborative, or thoughtful, cognitions about risk messages. One hundred sixty respondents were allocated to one of eight experimental groups, and the effects of source credibility, persuasive content of information and hazard type were systematically varied. The impact of the different factors on beliefs about the information and on elaborative processing was examined. Low credibility was particularly important in reducing risk perceptions, although persuasive content and hazard type were also influential in determining whether elaborative processing occurred. PMID:9463930

  16. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which includes information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
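
    A heavily simplified, univariate sketch of the likelihood-ratio idea: the numerator treats the recovered measurement as coming from the same object as the control (a normal within-object density), the denominator as coming from a random object in a background population (a kernel density over object means). The paper's models are multivariate and hierarchical; every number below is invented for illustration:

        # Sketch: univariate two-level likelihood ratio for a refractive index measurement.
        import numpy as np
        from scipy.stats import norm, gaussian_kde

        rng = np.random.default_rng(3)
        population_means = rng.normal(1.52, 0.004, size=200)   # background database (assumed)
        within_sd = 0.0005                                     # within-object spread (assumed)

        control_mean = 1.5183      # mean refractive index of the control glass
        recovered = 1.5185         # refractive index of the recovered fragment

        numerator = norm.pdf(recovered, loc=control_mean, scale=within_sd)      # same source
        denominator = float(gaussian_kde(population_means)(recovered)[0])       # random source
        print("LR =", numerator / denominator)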

  17. Approximate Maximum Likelihood Commercial Bank Loan Management Model

    Directory of Open Access Journals (Sweden)

    Godwin N.O. Asemota

    2009-01-01

    Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic. So, any attempt to model their behavior without including some elements of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. However, this loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest this excess cash as loans to earn more assets without jeopardizing public confidence.

  18. Monte Carlo likelihood inference for missing data models

    OpenAIRE

    Sung, Yun Ju; Geyer, Charles J.

    2007-01-01

    We describe a Monte Carlo method to approximate the maximum likelihood estimate (MLE), when there are missing data and the observed data likelihood is not available in closed form. This method uses simulated missing data that are independent and identically distributed and independent of the observed data. Our Monte Carlo approximation to the MLE is a consistent and asymptotically normal estimate of the minimizer θ* of the Kullback–Leibler information, as both Monte Carlo and observed data sa...

  19. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
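
    A minimal sketch of the characteristics idea for a scalar ODE: the state is augmented with the log-density, whose dynamics along a trajectory are d(log p)/dt = -div f(x), so the density value travels with the solution. The vector field and initial density below are arbitrary assumptions:

        # Sketch: propagating a probability density along ODE characteristics.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.stats import norm

        def f(x):
            return -x + np.sin(x)          # toy vector field dx/dt = f(x)

        def div_f(x):
            return -1.0 + np.cos(x)        # divergence of f (scalar case: df/dx)

        def augmented_rhs(t, state):
            x, log_p = state
            return [f(x), -div_f(x)]       # characteristics: dx/dt = f, dlogp/dt = -div f

        x0 = 0.8                                        # a point in initial-condition space
        log_p0 = norm.logpdf(x0, loc=1.0, scale=0.3)    # assumed density of x(0) at that point
        sol = solve_ivp(augmented_rhs, (0.0, 2.0), [x0, log_p0], rtol=1e-8)
        print("x(2) =", round(sol.y[0, -1], 4), " density at x(2):", round(np.exp(sol.y[1, -1]), 4))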

  20. Efficient scatter modelling for incorporation in maximum likelihood reconstruction

    International Nuclear Information System (INIS)

    Definition of a simplified model of scatter which can be incorporated in maximum likelihood reconstruction for single-photon emission tomography (SPET) continues to be appealing; however, implementation must be efficient for it to be clinically applicable. In this paper an efficient algorithm for scatter estimation is described in which the spatial scatter distribution is implemented as a spatially invariant convolution for points of constant depth in tissue. The scatter estimate is weighted by a space-dependent build-up factor based on the measured attenuation in tissue. Monte Carlo simulation of a realistic thorax phantom was used to validate this approach. Further efficiency was introduced by estimating scatter once after a small number of iterations of the ordered subsets expectation maximisation (OSEM) reconstruction algorithm. The scatter estimate was incorporated as a constant term in subsequent iterations rather than being recomputed at each iteration. Monte Carlo simulation was used to demonstrate that the scatter estimate does not change significantly provided at least two iterations of OSEM reconstruction, with subset size 8, are used. Complete scatter-corrected reconstruction of 64 projections of 40 x 128 pixels was achieved in 38 min using a Sun Sparc20 computer. (orig.)

  1. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group. PMID:7983579

  2. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratio is proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, from the empirical likelihood ratio functions, we can obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  3. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated; the empirical log-likelihood ratio is proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, from the empirical likelihood ratio functions, we can obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function, and prove their asymptotic normality. Numerical results are presented to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  4. Determining a Multiple-Variable Regression Coefficient Model Using Maximum Likelihood.

    OpenAIRE

    Samosir, Benny Sofyan

    2011-01-01

    Estimating a linear equation with the straight-line method produces a good equation when all of the plotted data pairs lie on a straight line. If the data pairs are scattered, however, a good linear equation for estimating the value of the dependent variable is the fitted line that minimises the error between the estimated points and the observed points. This research explains how to approach linear regression with the maximum likelihood method. The general form of the equation s...

  5. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
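
    A minimal sketch of the book's opening example, the empirical likelihood ratio for a univariate mean under IID sampling: the Lagrange multiplier is found by a one-dimensional root search, and -2 log R(mu) is compared with a chi-squared(1) quantile to form a confidence region:

        # Sketch: empirical likelihood ratio for a univariate mean.
        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import chi2

        rng = np.random.default_rng(4)
        x = rng.exponential(scale=1.0, size=80)

        def neg2_log_elr(mu):
            z = x - mu
            if z.min() >= 0 or z.max() <= 0:       # mu outside the convex hull of the data
                return np.inf
            # The Lagrange multiplier solves mean(z / (1 + lam * z)) = 0, giving weights
            # w_i = 1 / (n * (1 + lam * z_i)); lam must keep every 1 + lam * z_i positive.
            score = lambda lam: np.mean(z / (1.0 + lam * z))
            lo = (-1.0 + 1e-8) / z.max()
            hi = (-1.0 + 1e-8) / z.min()
            lam = brentq(score, lo, hi)
            return 2.0 * np.sum(np.log1p(lam * z))

        print("-2 log ELR at mu = 1.0:", round(neg2_log_elr(1.0), 3))
        print("95% chi-squared(1) cutoff:", round(chi2.ppf(0.95, df=1), 3))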

  6. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma...

  7. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
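
    A minimal sketch of the classifier-based approximation described above: a probabilistic classifier trained to separate samples from two distributions approximates s(x) = p1(x) / (p1(x) + p0(x)), so s / (1 - s) approximates the likelihood ratio p1(x) / p0(x). The Gaussian toy data are an assumption (not the talk's example), chosen so the true ratio is available for comparison:

        # Sketch: approximating a likelihood ratio with a probabilistic classifier.
        import numpy as np
        from scipy.stats import norm
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        x0 = rng.normal(0.0, 1.0, 5000)          # "background" samples
        x1 = rng.normal(1.0, 1.0, 5000)          # "signal" samples
        X = np.concatenate([x0, x1]).reshape(-1, 1)
        y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

        clf = LogisticRegression().fit(X, y)
        s = clf.predict_proba([[0.5]])[0, 1]     # classifier score at x = 0.5
        lr_estimate = s / (1.0 - s)
        lr_true = norm.pdf(0.5, 1.0, 1.0) / norm.pdf(0.5, 0.0, 1.0)
        print("estimated LR:", round(lr_estimate, 3), " true LR:", round(lr_true, 3))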

  8. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.

  9. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    Directory of Open Access Journals (Sweden)

    Minjeong Jeon

    2014-04-01

    Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described and its applicability to MTMM models with categorical data is discussed. An illustration is provided using an empirical example.

  10. An I(2) Cointegration Model with Piecewise Linear Trends: Likelihood Analysis and Application

    DEFF Research Database (Denmark)

    Kurita, Takamitsu; Nielsen, Heino Bohn; Rahbæk, Anders

    This paper presents likelihood analysis of the I(2) cointegrated vector autoregression with piecewise linear deterministic terms. Limiting behavior of the maximum likelihood estimators are derived, which is used to further derive the limiting distribution of the likelihood ratio statistic for the...... cointegration ranks, extending the result for I(2) models with a linear trend in Nielsen and Rahbek (2007) and for I(1) models with piecewise linear trends in Johansen, Mosconi, and Nielsen (2000). The provided asymptotic theory extends also the results in Johansen, Juselius, Frydman, and Goldberg (2009) where...

  11. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  12. A penalized likelihood approach for mixture cure models.

    OpenAIRE

    Corbière, Fabien; Commenges, Daniel; Taylor, Jeremy; Joly, Pierre

    2009-01-01

    Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals that will never experience it. Important issues in mixture cure models are estimation of the baseline survival functio...

  13. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X1,.....

  14. Empirical Likelihood for Mixed-effects Error-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui

    2009-01-01

    This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct the empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is χ²_{p+q}, where p and q are the dimensions of the fixed and random effects, respectively. Then we discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to χ²_{p+q}. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal approximation-based confidence region.

  15. An Adjusted profile likelihood for non-stationary panel data models with fixed effects

    OpenAIRE

    Dhaene, Geert; Jochmans, Koen

    2011-01-01

    We calculate the bias of the profile score for the autoregressive parameters p and covariate slopes in the linear model for N x T panel data with p lags of the dependent variable, exogenous covariates, fixed effects, and unrestricted initial observations. The bias is a vector of multivariate polynomials in p with coefficients that depend only on T. We center the profile score and, on integration, obtain an adjusted profile likelihood. When p = 1, the adjusted profile likelihood coincides wi...

  16. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    OpenAIRE

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates...

  17. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally....... The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock (1996). Secondly, our tests incorporate a “sign” restriction which generalizes the one-sided unit root test. We show that the asymptotic local power of the proposed tests dominates that of existing cointegration rank tests....

  18. Empirical Likelihood Inference for AR(p) Model

    Institute of Scientific and Technical Information of China (English)

    陈燕红; 赵世舜; 宋立新

    2008-01-01

    In this article we study empirical likelihood inference for the AR(p) model. We propose moment restrictions, from which we obtain the empirical likelihood estimator of the model parameters, and we also propose an empirical log-likelihood ratio based on this estimator. Our result shows that the EL estimator is asymptotically normal, and the empirical log-likelihood ratio is proved to be asymptotically standard chi-squared.

  19. Empirical Likelihood Inference for MA(q) Model

    Institute of Scientific and Technical Information of China (English)

    陈燕红; 宋立新

    2009-01-01

    In this article we study empirical likelihood inference for the MA(q) model. We propose moment restrictions, from which we obtain the empirical likelihood estimator of the model parameter, and we also propose an empirical log-likelihood ratio based on this estimator. Our result shows that the EL estimator is asymptotically normal, and the empirical log-likelihood ratio is proved to follow an asymptotically standard chi-squared distribution.

  20. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  1. Likelihood-free methods for tumor progression modeling

    OpenAIRE

    Herold, Daniela

    2014-01-01

    Since it became clear that the number of newly diagnosed cancer cases and the number of deaths from cancer worldwide increases from year to year, a great effort has been put into the field of cancer research. One major point of interest is how and by means of which intermediate steps the development of cancer from an initially benign mass of cells into a large malignant and deadly tumor takes place. In order to shed light onto the details of this process, many models have been developed in th...

  2. The fine-tuning cost of the likelihood in SUSY models

    International Nuclear Information System (INIS)

    In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters γ_i = {m_0, m_1/2, μ_0, A_0, B_0, …} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Δ of the usual likelihood L and the traditional fine-tuning measure Δ of the EW scale. A similar result is obtained for the integrated likelihood over the set {γ_i}, which can be written as a surface integral of the ratio L/Δ, with the surface in γ_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Δ or, equivalently, a small χ²_new = χ²_old + 2 ln Δ. This shows the fine-tuning cost to the likelihood (χ²_new) of the EW scale stability enforced by SUSY, which is ignored in data fits. A good χ²_new/d.o.f. ≈ 1 thus demands that SUSY models have a fine-tuning amount Δ ≪ exp(d.o.f./2), which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can thus rule out SUSY models without a further χ²/d.o.f. analysis. Numerical methods to fit the data can easily be adapted to account for this effect.

  3. Linguistics Computation, Automatic Model Generation, and Intensions

    OpenAIRE

    Nourani, Cyrus F.

    1994-01-01

    Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computa...

  4. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    Science.gov (United States)

    Spackman, K A

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606
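
    As a rough illustration of the distinction drawn above (not the paper's code), the sketch below trains a single logistic output unit on synthetic data under both objectives: the maximum-likelihood (cross-entropy) gradient and the least-squares (SSE) gradient. The data, learning rate and epoch count are arbitrary assumptions.

```python
# Minimal sketch: gradient descent on a single logistic output unit,
# comparing least-squares (SSE) and maximum-likelihood (cross-entropy)
# objectives on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (1.0 / (1.0 + np.exp(-X @ true_w)) > rng.uniform(size=200)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(loss="ml", lr=0.1, epochs=2000):
    w = np.zeros(3)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        if loss == "ml":          # cross-entropy gradient: X'(p - y)
            grad = X.T @ (p - y) / len(y)
        else:                     # SSE gradient: X'((p - y) * p * (1 - p))
            grad = X.T @ ((p - y) * p * (1.0 - p)) / len(y)
        w -= lr * grad
    return w

print("ML (cross-entropy) weights:", train("ml"))
print("LS (sum of squared errors) weights:", train("ls"))
```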

  5. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  6. On penalized likelihood estimation for a non-proportional hazards regression model

    OpenAIRE

    Devarajan, Karthik; Ebrahimi, Nader

    2013-01-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  7. Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.

    Science.gov (United States)

    Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens

    2016-01-01

    In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood. PMID:27588423
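
    The profile likelihood idea itself can be sketched in a few lines: fix the parameter of interest on a grid, re-optimize all remaining parameters at each grid point, and read identifiability off the shape of the resulting curve. The toy exponential-decay model, data and threshold below are illustrative assumptions, not part of the Data2Dynamics implementation.

```python
# Sketch of a profile likelihood for one parameter of a toy model
# y = A * exp(-k * t) + noise: fix k on a grid and re-optimize A and sigma
# at each grid point.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.8 * t) + rng.normal(scale=0.1, size=t.size)

def negloglik(params, k_fixed=None):
    if k_fixed is None:
        A, k, log_sigma = params
    else:
        A, log_sigma = params
        k = k_fixed
    sigma = np.exp(log_sigma)
    resid = y - A * np.exp(-k * t)
    return 0.5 * np.sum(resid**2 / sigma**2) + t.size * np.log(sigma)

full = minimize(negloglik, x0=[1.0, 1.0, 0.0])          # joint fit of A, k, sigma
k_grid = np.linspace(0.5, 1.2, 30)
profile = [minimize(negloglik, x0=[1.0, 0.0], args=(k,)).fun for k in k_grid]

# Profile-likelihood-based 95% interval: grid points within
# chi2(1, 0.95)/2 = 1.92 of the minimum negative log-likelihood.
inside = k_grid[np.array(profile) - full.fun < 1.92]
print("approx. 95% CI for k: [%.3f, %.3f]" % (inside.min(), inside.max()))
```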

  8. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-10-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of the model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best individual model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance: retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
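
    The averaging step described here reduces, in outline, to weighting each model's prediction by its posterior model probability. A minimal sketch, with hypothetical information-criterion values and per-model predictions standing in for the calibrated reactive transport models:

```python
# Hypothetical sketch of MLBMA-style bookkeeping: convert information-
# criterion values (e.g. KIC/BIC) of alternative models into posterior
# model weights, then form the averaged prediction and its total variance.
import numpy as np

ic = np.array([102.3, 104.1, 110.7])        # hypothetical KIC/BIC per model
prior = np.array([1/3, 1/3, 1/3])           # prior model probabilities
pred_mean = np.array([4.2, 3.9, 5.1])       # each model's prediction
pred_var = np.array([0.30, 0.25, 0.40])     # each model's predictive variance

# Posterior model probability is proportional to prior * exp(-0.5 * delta_IC)
delta = ic - ic.min()
w = prior * np.exp(-0.5 * delta)
w /= w.sum()

bma_mean = np.sum(w * pred_mean)
# Total variance = within-model + between-model components
bma_var = np.sum(w * pred_var) + np.sum(w * (pred_mean - bma_mean) ** 2)
print("weights:", w.round(3), "BMA mean:", round(bma_mean, 3), "BMA var:", round(bma_var, 3))
```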

  9. Generating Semi-Markov Models Automatically

    Science.gov (United States)

    Johnson, Sally C.

    1990-01-01

    Abstract Semi-Markov Specification Interface to SURE Tool (ASSIST) program developed to generate semi-Markov model automatically from description in abstract, high-level language. ASSIST reads input file describing failure behavior of system in abstract language and generates Markov models in format needed for input to Semi-Markov Unreliability Range Evaluator (SURE) program (COSMIC program LAR-13789). Facilitates analysis of behavior of fault-tolerant computer. Written in PASCAL.

  10. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    OpenAIRE

    Aslıhan Kıymalıoğlu

    2014-01-01

    Elaboration Likelihood Model (ELM), which supports the existence of two routes to persuasion: central and peripheral routes, has been one of the major models on persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered to be contributory for this gap. The findings of the review reveal that the model was mostly used in marketing and advertising researches, that the concept...

  11. An Empirical Likelihood Method in a Partially Linear Single-index Model with Right Censored Data

    Institute of Scientific and Technical Information of China (English)

    Yi Ping YANG; Liu Gen XUE; Wei Hu CHENG

    2012-01-01

    Empirical-likelihood-based inference for the parameters in a partially linear single-index model with randomly censored data is investigated. We introduce an estimated empirical likelihood for the parameters using a synthetic data approach and show that its limiting distribution is a mixture of central chi-squared distributions. To overcome this difficulty we propose an adjusted empirical likelihood that achieves the standard chi-squared limit. Furthermore, since the index is of norm 1, we use this constraint to reduce the dimension of the parameters, which increases the accuracy of the confidence regions. A simulation study is carried out to compare its finite-sample properties with those of the existing method. An application to a real data set is illustrated.

  12. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By ingeniously constructing a penalized auxiliary vector, we propose an empirical likelihood based variable selection procedure and show that it is consistent and satisfies the sparsity property. Simulation studies show that the proposed variable selection method is workable.

  13. The likelihood ratio test for cointegration ranks in the I(2) model

    DEFF Research Database (Denmark)

    Nielsen, Heino Bohn; Rahbek, Anders Christian

    2007-01-01

    This paper presents the likelihood ratio (LR) test for the number of cointegrating relations in the I(2) vector autoregressive model. It is shown that the asymptotic distribution of the LR test for the cointegration ranks is identical to the asymptotic distribution of the much applied test....... Overall, we propose use of the LR test for rank determination in I(2) analysis...

  14. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  15. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  16. A multinomial maximum likelihood program /MUNOML/. [in modeling sensory and decision phenomena

    Science.gov (United States)

    Curry, R. E.

    1975-01-01

    A multinomial maximum likelihood program (MUNOML) for signal detection and for behavior models is discussed. It is found to be useful in day-to-day operation since it provides maximum flexibility with minimum duplicated effort. It has excellent convergence qualities and rarely goes beyond 10 iterations. A library of subroutines is being collected for use with MUNOML, including subroutines for a successive categories model and for signal detectability models.

  17. Maximum Likelihood Estimation for an Innovation Diffusion Model of New Product Acceptance

    OpenAIRE

    David C Schmittlein; Vijay Mahajan

    1982-01-01

    A maximum likelihood approach is proposed for estimating an innovation diffusion model of new product acceptance originally considered by Bass (Bass, F. M. 1969. A new product growth model for consumer durables. (January) 215–227.). The suggested approach allows: (1) computation of approximate standard errors for the diffusion model parameters, and (2) determination of the required sample size for forecasting the adoption level to any desired degree of accuracy. Using histograms from eight di...

  18. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  19. Operational risk models and maximum likelihood estimation error for small sample-sizes

    OpenAIRE

    Paul Larsen

    2015-01-01

    Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...

  20. The empirical likelihood goodness-of-fit test for regression model

    Institute of Scientific and Technical Information of China (English)

    Li-xing ZHU; Yong-song QIN; Wang-li XU

    2007-01-01

    Goodness-of-fit testing for regression models has received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.

  1. Different Manhattan project: automatic statistical model generation

    Science.gov (United States)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  2. Using automatic programming for simulating reliability network models

    Science.gov (United States)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  3. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
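
    The report's estimation strategy, repeated Newton-Raphson iterations on the likelihood equations, follows the generic pattern sketched below. The example uses a simple Poisson regression with analytic score and Hessian; the NDMMF has its own system of equations, so this is only an illustration of the iteration, not the model itself.

```python
# Generic Newton-Raphson maximum-likelihood loop (illustrative only):
# Poisson regression with log link, analytic gradient and Hessian.
import numpy as np

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for it in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)                 # gradient of the log-likelihood
    hessian = -(X * mu[:, None]).T @ X     # matrix of second derivatives
    step = np.linalg.solve(hessian, score)
    beta = beta - step                     # Newton-Raphson update
    if np.max(np.abs(step)) < 1e-10:
        break

print("iterations:", it + 1, "estimates:", beta.round(3))
```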

  4. Maximum Likelihood Estimation in Gaussian Chain Graph Models under the Alternative Markov Property

    OpenAIRE

    Drton, Mathias; Eichler, Michael

    2005-01-01

    The AMP Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced LWF Markov property that is coherent with data-generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportiona...

  5. Evaluation of smoking prevention television messages based on the elaboration likelihood model

    OpenAIRE

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from the ELM were conducted in classroom settings among a diverse sample of non-smoking middle school students in three states (n = 1771). Students categ...

  6. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    Zhi-hua SUN

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when model information on the conditional mean and conditional variance of the responses given the covariates is available. The quasi-likelihood method, adapted to treatment effects data, is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained, comparable with those of the existing estimators.

  7. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when model information on the conditional mean and conditional variance of the responses given the covariates is available. The quasi-likelihood method, adapted to treatment effects data, is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighted methods. All the estimators are shown to be asymptotically normal. Our simulation results show that by using the model information, substantial efficiency gains are obtained, comparable with those of the existing estimators.

  8. Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Claus Vogl

    2014-11-01

    In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.
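
    A much-simplified sketch of the likelihood construction for the neutral case: if the stationary Beta(θ, θ) density is used directly in place of the orthogonal-polynomial expansion, each site's derived-allele count becomes beta-binomial and θ can be estimated by maximizing the resulting log-likelihood. The sample size and counts below are hypothetical, and the shortcut is not the paper's method.

```python
# Simplified sketch: likelihood of a site-frequency spectrum under the
# neutral biallelic mutation-drift model, using the stationary
# Beta(theta, theta) density of allelic proportions so that each site's
# count is beta-binomial (not the orthogonal-polynomial expansion).
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize_scalar

n = 20                                                  # sampled chromosomes per site
counts = np.array([1, 2, 1, 5, 19, 3, 0, 18, 2, 1])     # hypothetical derived-allele counts

def log_binom(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def negloglik(theta):
    # Beta-binomial log-likelihood with alpha = beta = theta
    ll = log_binom(n, counts) + betaln(counts + theta, n - counts + theta) - betaln(theta, theta)
    return -np.sum(ll)

fit = minimize_scalar(negloglik, bounds=(1e-3, 10.0), method="bounded")
print("ML estimate of the scaled mutation rate theta:", round(fit.x, 4))
```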

  9. Approximation of the likelihood ratio statistics in competing risks model under informative random censorship from both sides

    Directory of Open Access Journals (Sweden)

    Abdurahim Akhmedovich Abdushukurov

    2016-03-01

    It is clear that the likelihood ratio statistic plays an important role in the theory of asymptotic estimation and hypothesis testing. The aim of the paper is to investigate the asymptotic properties of the likelihood ratio statistic in the competing risks model with informative random censorship from both sides. We prove an approximate version of local asymptotic normality of the likelihood ratio statistic. The results give an asymptotic representation of the likelihood ratio statistic via the strong approximation method, from which local asymptotic normality is obtained as a consequence.

  10. A likelihood reformulation method in non-normal random effects models.

    Science.gov (United States)

    Liu, Lei; Yu, Zhangsheng

    2008-07-20

    In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
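
    The Gaussian quadrature step that the reformulation relies on can be sketched for a logistic model with a single normal random intercept; the reformulation trick described above then lets the same quadrature handle non-normal random effects. The data, parameter values and number of quadrature nodes below are assumptions for illustration, not the SAS NLMIXED implementation.

```python
# Sketch of the Gauss-Hermite quadrature step: marginal log-likelihood of a
# logistic model with one normal random intercept, integrating the random
# effect out numerically for a single cluster of observations.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(20)   # 20-point Gauss-Hermite rule

def marginal_loglik(beta0, sigma_b, y_cluster, x_cluster):
    """y_cluster, x_cluster: responses/covariates of one cluster (subject)."""
    total = 0.0
    for z, w in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma_b * z          # change of variables for N(0, sigma_b^2)
        eta = beta0 + b + x_cluster             # linear predictor (slope fixed at 1 for brevity)
        p = 1.0 / (1.0 + np.exp(-eta))
        total += w * np.prod(p**y_cluster * (1 - p)**(1 - y_cluster))
    return np.log(total / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1])
x = np.array([0.2, -0.5, 1.0, 0.3])
print(marginal_loglik(beta0=0.1, sigma_b=0.8, y_cluster=y, x_cluster=x))
```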

  11. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    This paper deals with the issue of likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation as well as the way for confidence estimation and hypothesis tests for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and the confidence interval and hypothesis test are then also developed. Furthermore, a real example and simulations are conducted to demonstrate the usefulness and the superiority of our approach.

  12. Empirical likelihood confidence regions of the parameters in a partially linear single-index model

    Institute of Scientific and Technical Information of China (English)

    XUE Liugen; ZHU Lixing

    2005-01-01

    In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.

  13. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
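
    The model-selection and model-averaging bookkeeping mentioned above follows the standard information-criterion recipe, sketched here with hypothetical maximized log-likelihoods standing in for the paper's clustering models:

```python
# Generic information-criterion and model-averaging-weight computation
# (illustrative only; the log-likelihoods and parameter counts are made up).
import numpy as np

loglik = np.array([-120.4, -118.9, -118.2])   # maximized log-likelihood per model
k = np.array([2, 4, 6])                       # number of free parameters per model
n = 150                                       # number of sites/observations

aic = -2 * loglik + 2 * k
aicc = aic + 2 * k * (k + 1) / (n - k - 1)
bic = -2 * loglik + k * np.log(n)

# Akaike weights: relative support for each model given the data
delta = aicc - aicc.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()
print("AIC:", aic.round(1), "AICc:", aicc.round(1), "BIC:", bic.round(1))
print("model-averaging weights:", weights.round(3))
```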

  14. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Queuing is the process of moving customers in a specific sequence to a specific service according to the customer need. The term scheduling stands for the process of computing a schedule, which may be done by a queue-based scheduler. This paper focuses on bank queue systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing bank queues that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time. The main innovation of this work is that the average waiting time is taken into account in the scheduling decision, together with the ability to switch to the scheduling algorithm that gives the best average waiting time.

  15. Elaboration Likelihood Model and an Analysis of the Contexts of Its Application

    Directory of Open Access Journals (Sweden)

    Aslıhan Kıymalıoğlu

    2014-12-01

    Elaboration Likelihood Model (ELM), which supports the existence of two routes to persuasion: central and peripheral routes, has been one of the major models on persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered to be contributory for this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept most frequently used in the elaboration process was involvement, and that argument quality and endorser credibility were the factors most often employed in measuring their effect on the dependent variables. The review provides valuable insights as it presents a holistic view of the model and the variables used in the model.

  16. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    Science.gov (United States)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions (L1-L4), namely Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM), are considered informal, whereas the remaining ones (L5-L7) are formal. L5 focuses on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under the formal likelihood functions L5 and L7.
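
    Two of the likelihood measures named above are easy to write down explicitly: the informal Nash-Sutcliffe efficiency (in the spirit of L1) and a simple i.i.d. Gaussian log-likelihood of the residuals (in the spirit of L5). The discharge series below are hypothetical; L6 and L7 would additionally model heteroscedastic and autocorrelated errors.

```python
# Sketch of one informal and one formal likelihood measure: Nash-Sutcliffe
# efficiency and an i.i.d. Gaussian log-likelihood of the residuals.
import numpy as np

obs = np.array([12.0, 30.0, 55.0, 41.0, 22.0, 15.0])   # observed flows (hypothetical)
sim = np.array([10.5, 33.0, 50.0, 45.0, 20.0, 16.5])   # simulated flows (hypothetical)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gaussian_loglik(obs, sim):
    resid = obs - sim
    sigma2 = resid.var()          # plug-in error variance
    n = resid.size
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(resid**2) / sigma2

print("NS efficiency:", round(nash_sutcliffe(obs, sim), 3))
print("Gaussian log-likelihood:", round(gaussian_loglik(obs, sim), 3))
```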

  17. Discrete Model Reference Adaptive Control System for Automatic Profiling Machine

    OpenAIRE

    Peng Song; Guo-kai Xu; Xiu-chun Zhao

    2012-01-01

    Automatic profiling machine is a movement system that has a high degree of parameter variation and high frequency of transient process, and it requires an accurate control in time. In this paper, the discrete model reference adaptive control system of automatic profiling machine is discussed. Firstly, the model of automatic profiling machine is presented according to the parameters of DC motor. Then the design of the discrete model reference adaptive control is proposed, and the control rules...

  18. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    International Nuclear Information System (INIS)

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated
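
    The "errors from the curvature of the likelihood at the maximum" idea is generic and can be illustrated without the full photon-by-photon likelihood: the sketch below uses toy exponential dwell-time data, computes the observed information by finite differences, and inverts it to get a standard deviation. All numbers are assumptions for illustration.

```python
# Generic illustration of parameter uncertainty from likelihood curvature:
# finite-difference second derivative of a log-likelihood at its maximum,
# inverted to give a standard deviation. Toy exponential dwell-time data
# stand in for the photon-by-photon FRET likelihood.
import numpy as np

rng = np.random.default_rng(3)
dwells = rng.exponential(scale=1.0 / 2.5, size=400)   # true rate 2.5 per unit time

def loglik(log_rate):
    rate = np.exp(log_rate)
    return dwells.size * np.log(rate) - rate * dwells.sum()

# The MLE of the rate is closed-form for exponential data
rate_hat = dwells.size / dwells.sum()
theta_hat = np.log(rate_hat)

# Observed information: negative second derivative by central differences
h = 1e-4
curv = -(loglik(theta_hat + h) - 2 * loglik(theta_hat) + loglik(theta_hat - h)) / h**2
sd_log_rate = 1.0 / np.sqrt(curv)
print("rate MLE:", round(rate_hat, 3), "sd of log-rate:", round(sd_log_rate, 4))
```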

  19. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  20. Maximum Likelihood Bayesian Averaging of Spatial Variability Models in Unsaturated Fractured Tuff

    International Nuclear Information System (INIS)

    Hydrologic analyses typically rely on a single conceptual-mathematical model. Yet hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Adopting only one of these may lead to statistical bias and underestimation of uncertainty. Bayesian Model Averaging (BMA) provides an optimal way to combine the predictions of several competing models and to assess their joint predictive uncertainty. However, it tends to be computationally demanding and relies heavily on prior information about model parameters. We apply a maximum likelihood (ML) version of BMA (MLBMA) to seven alternative variogram models of log air permeability data from single-hole pneumatic injection tests in six boreholes at the Apache Leap Research Site (ALRS) in central Arizona. Unbiased ML estimates of variogram and drift parameters are obtained using Adjoint State Maximum Likelihood Cross Validation in conjunction with Universal Kriging and Generalized Least Squares. Standard information criteria provide an ambiguous ranking of the models, which does not justify selecting one of them and discarding all others as is commonly done in practice. Instead, we eliminate some of the models based on their negligibly small posterior probabilities and use the rest to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. We then average these four projections, and associated kriging variances, using the posterior probability of each model as weight. Finally, we cross-validate the results by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of MLBMA with that of each individual model. We find that MLBMA is superior to any individual geostatistical model of log permeability among those we consider at the ALRS.

  1. Consistency of the Maximum Likelihood Estimator for general hidden Markov models

    CERN Document Server

    Douc, Randal; Olsson, Jimmy; Van Handel, Ramon

    2009-01-01

    Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for $V$-uniformly ergodic Markov chains.

  2. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross...

  3. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    OpenAIRE

    Jensen Just; Madsen Per; Sorensen Daniel; Klemetsdal Gunnar; Heringstad Bjørg; Øegård Jørgen; Gianola Daniel; Detilleux Johann

    2004-01-01

    A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out ...

  4. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    OpenAIRE

    Gianola, Daniel; Ødegaard, Jørgen; Heringstad, B; Klemetsdal, G; Sorensen, Daniel; Madsen, Per; Jensen, Just; Detilleux, J

    2004-01-01

    A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gib...
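
    Stripped of the correlated random effects and the Monte Carlo (Gibbs) E-step that the paper requires, the core mixture-model machinery is a plain EM iteration. A minimal two-component sketch on synthetic data:

```python
# Minimal two-component Gaussian mixture fitted by EM (illustrative; the
# paper adds correlated random effects and a Monte Carlo E-step, omitted here).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 0.7, 150)])

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each observation belongs to component 2
    d1 = (1 - pi) * norm.pdf(data, mu[0], sd[0])
    d2 = pi * norm.pdf(data, mu[1], sd[1])
    resp = d2 / (d1 + d2)
    # M-step: update mixing proportion, means and standard deviations
    pi = resp.mean()
    mu = np.array([np.average(data, weights=1 - resp), np.average(data, weights=resp)])
    sd = np.array([np.sqrt(np.average((data - mu[0])**2, weights=1 - resp)),
                   np.sqrt(np.average((data - mu[1])**2, weights=resp))])

print("mixing proportion:", round(pi, 3), "means:", mu.round(3), "sds:", sd.round(3))
```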

  5. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    Science.gov (United States)

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on scores for purchase intention by consumers lowly involved rather than highly involved with a product (n = 40). Undergraduates who were classified as either highly or lowly (ns = 20 and 20) involved with M&Ms examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that the model has implications beyond advertising effects and into packaging effects. PMID:10840911

  6. Non-Nested Models and the Likelihood Ratio Statistic: A Comparison of Simulation and Bootstrap Based Tests

    OpenAIRE

    Kapetanios, George; Weeks, Melvyn J.

    2003-01-01

    We consider an alternative use of simulation in the context of using the Likelihood-Ratio statistic to test non-nested models. To date simulation has been used to estimate the Kullback-Leibler measure of closeness between two densities, which in turn “mean adjusts” the Likelihood-Ratio statistic. Given that this adjustment is still based upon asymptotic arguments, an alternative procedure is to utilise bootstrap procedures to construct the empirical density. To our knowledge this study re...

  7. Likelihood-free inference of population structure and local adaptation in a Bayesian hierarchical model.

    Science.gov (United States)

    Bazin, Eric; Dawson, Kevin J; Beaumont, Mark A

    2010-06-01

    We address the problem of finding evidence of natural selection from genetic data, accounting for the confounding effects of demographic history. In the absence of natural selection, gene genealogies should all be sampled from the same underlying distribution, often approximated by a coalescent model. Selection at a particular locus will lead to a modified genealogy, and this motivates a number of recent approaches for detecting the effects of natural selection in the genome as "outliers" under some models. The demographic history of a population affects the sampling distribution of genealogies, and therefore the observed genotypes and the classification of outliers. Since we cannot see genealogies directly, we have to infer them from the observed data under some model of mutation and demography. Thus the accuracy of an outlier-based approach depends to a greater or a lesser extent on the uncertainty about the demographic and mutational model. A natural modeling framework for this type of problem is provided by Bayesian hierarchical models, in which parameters, such as mutation rates and selection coefficients, are allowed to vary across loci. It has proved quite difficult computationally to implement fully probabilistic genealogical models with complex demographies, and this has motivated the development of approximations such as approximate Bayesian computation (ABC). In ABC the data are compressed into summary statistics, and computation of the likelihood function is replaced by simulation of data under the model. In a hierarchical setting one may be interested both in hyperparameters and parameters, and there may be very many of the latter--for example, in a genetic model, these may be parameters describing each of many loci or populations. This poses a problem for ABC in that one then requires summary statistics for each locus, which, if used naively, leads to a consequent difficulty in conditional density estimation. We develop a general method for applying
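
    The ABC idea summarized here (replace likelihood evaluation by simulation and comparison of summary statistics) can be illustrated with a deliberately simple rejection sampler; the model, prior, summary statistic and tolerance below are illustrative assumptions, far simpler than the hierarchical setting of the paper.

```python
# Minimal ABC rejection sketch: compress data to a summary statistic,
# simulate under parameters drawn from the prior, and keep draws whose
# simulated summary is close to the observed one.
import numpy as np

rng = np.random.default_rng(5)
observed = rng.normal(loc=2.0, scale=1.0, size=50)
s_obs = observed.mean()                               # summary statistic

prior_draws = rng.uniform(-5.0, 5.0, size=100_000)    # prior on the unknown mean
s_sim = np.array([rng.normal(m, 1.0, size=50).mean() for m in prior_draws])

tolerance = 0.05
accepted = prior_draws[np.abs(s_sim - s_obs) < tolerance]
print("accepted draws:", accepted.size,
      "approximate posterior mean:", round(accepted.mean(), 3))
```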

  8. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    Science.gov (United States)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under the noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the Patient group respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, better converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the Patient group the shape factor of the measurement noise converges to values between 1 and 2 whereas for the Control group shape factor values are estimated in the super-Gaussian area.

  9. Suprasegmental Duration Modelling with Elastic Constraints in Automatic Speech Recognition

    OpenAIRE

    Molloy, Laurence; Isard, Stephen

    1998-01-01

    In this paper a method of integrating a model of suprasegmental duration with a HMM-based recogniser at the post-processing level is presented. The N-Best utterance output is rescored using a suitable linear combination of acoustic log-likelihood (provided by a set of tied-state triphone HMMs) and duration log-likelihood (provided by a set of durational models). The durational model used in the post-processing imposes syllable-level elastic constraints on the durational behaviour of speech se...
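
    The post-processing combination itself is a one-liner once the two log-likelihoods are available: rescore each N-best hypothesis with a weighted sum and re-rank. The hypothesis names, scores and weight below are hypothetical.

```python
# Sketch of the N-best rescoring step: combine acoustic and duration
# log-likelihoods with a weight and re-rank. All scores are hypothetical.
acoustic_ll = {"hyp A": -1520.4, "hyp B": -1518.9, "hyp C": -1522.7}
duration_ll = {"hyp A": -42.1, "hyp B": -55.3, "hyp C": -39.8}
w = 3.0   # duration weight, tuned on held-out data

rescored = {h: acoustic_ll[h] + w * duration_ll[h] for h in acoustic_ll}
best = max(rescored, key=rescored.get)
print(sorted(rescored.items(), key=lambda kv: -kv[1]), "-> best:", best)
```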

  10. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s−1 after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.

  11. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
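
    The GLUE bookkeeping behind the uncertainty bounds discussed above amounts to thresholding an informal likelihood to select behavioural parameter sets and then forming likelihood-weighted prediction quantiles. The sketch below uses randomly generated stand-ins for the likelihoods and simulated runoff series; the threshold and array sizes are assumptions.

```python
# Sketch of GLUE-style uncertainty bounds: keep "behavioural" parameter sets
# whose informal likelihood exceeds a threshold, then form likelihood-weighted
# 5%/95% prediction bounds at each time step.
import numpy as np

rng = np.random.default_rng(6)
n_sets, n_times = 2000, 96
likelihood = rng.uniform(-0.5, 0.9, size=n_sets)          # e.g. NS efficiency per parameter set
predictions = rng.normal(30, 8, size=(n_sets, n_times))   # simulated runoff per set and time step

behavioural = likelihood > 0.5                 # behavioural threshold
w = likelihood[behavioural]
w = w / w.sum()
sims = predictions[behavioural]

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return np.interp(q, cdf, values[order])

lower = np.array([weighted_quantile(sims[:, t], w, 0.05) for t in range(n_times)])
upper = np.array([weighted_quantile(sims[:, t], w, 0.95) for t in range(n_times)])
print("mean width of the 90% uncertainty band:", round((upper - lower).mean(), 2))
```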

  12. Maximum Likelihood Mosaics

    CERN Document Server

    Pires, Bernardo Esteves

    2010-01-01

    The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.

  13. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  14. Activation detection in functional MRI using subspace modeling and maximum likelihood estimation.

    Science.gov (United States)

    Ardekani, B A; Kershaw, J; Kashikura, K; Kanno, I

    1999-02-01

    A statistical method for detecting activated pixels in functional MRI (fMRI) data is presented. In this method, the fMRI time series measured at each pixel is modeled as the sum of a response signal which arises due to the experimentally controlled activation-baseline pattern, a nuisance component representing effects of no interest, and Gaussian white noise. For periodic activation-baseline patterns, the response signal is modeled by a truncated Fourier series with a known fundamental frequency but unknown Fourier coefficients. The nuisance subspace is assumed to be unknown. A maximum likelihood estimate is derived for the component of the nuisance subspace which is orthogonal to the response signal subspace. An estimate for the order of the nuisance subspace is obtained from an information theoretic criterion. A statistical test is derived and shown to be the uniformly most powerful (UMP) test invariant to a group of transformations which are natural to the hypothesis testing problem. The maximal invariant statistic used in this test has an F distribution. The theoretical F distribution under the null hypothesis strongly concurred with the experimental frequency distribution obtained by performing null experiments in which the subjects did not perform any activation task. Application of the theory to motor activation and visual stimulation fMRI studies is presented. PMID:10232667
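    The core of the detection scheme can be illustrated with a much-simplified sketch: a pixel time series is regressed on a truncated Fourier response subspace plus a known polynomial nuisance subspace, and an F statistic for the response subspace is formed. The paper's estimation of an unknown nuisance subspace, the information-theoretic order selection and the invariance argument are not reproduced, and all signal parameters below are invented for illustration.

```python
# Hedged sketch: F test for a periodic (truncated Fourier) response in a
# simulated fMRI-like time series, with a *known* polynomial nuisance subspace.
import numpy as np

rng = np.random.default_rng(1)
n, period, harmonics = 120, 24, 2        # scans, activation period (in scans), Fourier order

t = np.arange(n)
# Response subspace: sines/cosines at the known fundamental and its harmonics
X_resp = np.column_stack([f(2 * np.pi * h * t / period)
                          for h in range(1, harmonics + 1)
                          for f in (np.sin, np.cos)])
# Nuisance subspace (assumed known here): constant + linear + quadratic drift
X_nuis = np.column_stack([np.ones(n), t, t ** 2])

# Simulated "activated" pixel: periodic response + slow drift + white noise
y = 1.5 * np.sin(2 * np.pi * t / period) + 0.01 * t + rng.normal(0, 1, n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rss_full = rss(np.hstack([X_nuis, X_resp]), y)
rss_reduced = rss(X_nuis, y)
p, q = X_resp.shape[1], X_nuis.shape[1]
F = ((rss_reduced - rss_full) / p) / (rss_full / (n - p - q))
print(f"F({p},{n - p - q}) = {F:.2f}")   # compare against an F critical value
```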

  15. Statistical bounds and maximum likelihood performance for shot noise limited knife-edge modeled stellar occultation

    Science.gov (United States)

    McNicholl, Patrick J.; Crabtree, Peter N.

    2014-09-01

    Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of the sizes of asteroids and minor planets. More recently, an extension of this last application has been proposed as a technique to provide information (if not complete shadow images) about geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models have subsequently been developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.
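    A numerical flavour of the shot-noise-limited bound can be obtained from a toy model: Poisson counts across a pixel array whose mean follows a smoothed geometric knife-edge profile (diffraction ignored). The Cramér-Rao bound then follows from the Poisson Fisher information, and a grid-search maximum-likelihood estimator can be checked against it. The brightness, background and smoothing values below are arbitrary assumptions, not the paper's sensor model.

```python
# Illustrative numeric Cramér-Rao bound for a knife-edge position from
# shot-noise-limited (Poisson) counts, using a smoothed *geometric* edge
# profile with known brightness A and background B (no Fresnel diffraction).
import numpy as np
from scipy.stats import norm

A, B, w = 200.0, 5.0, 0.4            # edge brightness, background, edge smoothing (pixels)
pix = np.arange(-20, 20) + 0.5       # pixel centres
x0 = 0.0                             # true edge position

def mean_counts(x):
    return B + A * norm.cdf((pix - x) / w)

# Poisson Fisher information: I(x0) = sum_i (dmu_i/dx0)^2 / mu_i
dmu = (mean_counts(x0 + 1e-4) - mean_counts(x0 - 1e-4)) / 2e-4
crb = 1.0 / np.sum(dmu ** 2 / mean_counts(x0))
print(f"Cramér-Rao bound on the edge position: {np.sqrt(crb):.4f} pixels")

# Monte Carlo check of a grid-search maximum-likelihood estimator
rng = np.random.default_rng(2)
grid = np.linspace(-2, 2, 801)
mu_grid = np.array([mean_counts(x) for x in grid])     # shape (grid, pixels)
est = []
for _ in range(500):
    counts = rng.poisson(mean_counts(x0))
    loglik = (counts * np.log(mu_grid) - mu_grid).sum(axis=1)
    est.append(grid[np.argmax(loglik)])
rmse = np.sqrt(np.mean((np.array(est) - x0) ** 2))
print(f"ML estimator RMSE over 500 trials: {rmse:.4f} pixels")
```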

  16. Model Considerations for Memory-based Automatic Music Transcription

    Czech Academy of Sciences Publication Activity Database

    Albrecht, Š.; Šmídl, Václav

    Oxford, Mississippi: AIP, 2009, s. 1-8. [29th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. Oxford, Mississippi (US), 05.07.2009-10.07.2009] Institutional research plan: CEZ:AV0Z10750506 Keywords: Automatic music recognition * Stochastic modeling * parameter estimation Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2009/AS/smidl-model considerations for memory-based automatic music transcription.pdf

  17. The Use of Dynamic Stochastic Social Behavior Models to Produce Likelihood Functions for Risk Modeling of Proliferation and Terrorist Attacks

    International Nuclear Information System (INIS)

    The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies - a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material

  18. Comparison of Statistical Data Models for Identifying Differentially Expressed Genes Using a Generalized Likelihood Ratio Test

    Directory of Open Access Journals (Sweden)

    Kok-Yong Seng

    2008-01-01

    Full Text Available Currently, statistical techniques for the analysis of microarray-generated data sets have deficiencies due to limited understanding of the errors inherent in the data. A generalized likelihood ratio (GLR) test based on an error model has recently been proposed to identify differentially expressed genes from microarray experiments. However, the use of different error structures under the GLR test has not been evaluated, nor has this method been compared to commonly used statistical tests such as the parametric t-test. The concomitant effects of varying data signal-to-noise ratio and replication number on the performance of statistical tests also remain largely unexplored. In this study, we compared the effects of different underlying statistical error structures on the GLR test's power in identifying differentially expressed genes in microarray data. We evaluated such variants of the GLR test as well as the one-sample t-test based on simulated data by means of receiver operating characteristic (ROC) curves. Further, we used bootstrapping of ROC curves to assess the statistical significance of differences between the areas under the curves. Our results showed that (i) the GLR tests outperformed the t-test for detecting differential gene expression, (ii) the identity of the underlying error structure was important in determining the GLR tests' performance, and (iii) signal-to-noise ratio was a more important contributor than sample replication in identifying statistically significant differential gene expression.
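    For readers unfamiliar with the approach, the sketch below contrasts a generalized likelihood ratio test (equal versus unequal group means under a simple additive Gaussian error model) with a two-sample t-test on simulated expression data, scoring both by the area under the ROC curve. The error model, effect size and replicate numbers are illustrative assumptions and do not reproduce the error structures or bootstrapping used in the study.

```python
# Minimal GLR test for differential expression under an additive Gaussian
# error model, compared with a two-sample t-test via ROC AUC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_rep, sigma = 2000, 4, 0.3
truth = np.zeros(n_genes, bool)
truth[:200] = True                                           # 10% truly changed genes
shift = np.where(truth, 0.5, 0.0)

x = rng.normal(0.0, sigma, (n_genes, n_rep))                 # condition 1 replicates
y = rng.normal(shift[:, None], sigma, (n_genes, n_rep))      # condition 2 replicates

def glr_stat(x, y):
    """GLR: H0 common mean vs H1 separate means, common unknown variance."""
    n = x.shape[1] + y.shape[1]
    pooled = np.hstack([x, y])
    rss0 = ((pooled - pooled.mean(1, keepdims=True)) ** 2).sum(1)
    rss1 = (((x - x.mean(1, keepdims=True)) ** 2).sum(1)
            + ((y - y.mean(1, keepdims=True)) ** 2).sum(1))
    return n * np.log(rss0 / rss1)       # ~ chi-square(1) under H0, asymptotically

glr = glr_stat(x, y)
t_p = stats.ttest_ind(x, y, axis=1).pvalue

def auc(score, truth):
    """Area under the ROC curve via the rank-sum identity."""
    r = stats.rankdata(score)
    n1, n0 = truth.sum(), (~truth).sum()
    return (r[truth].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

print(f"AUC  GLR: {auc(glr, truth):.3f}   t-test: {auc(-np.log(t_p), truth):.3f}")
```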

  19. Automatic Modeling of Virtual Humans and Body Clothing

    Institute of Scientific and Technical Information of China (English)

    Nadia Magnenat-Thalmann; Hyewon Seo; Frederic Cordier

    2004-01-01

    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shapes and requiring labor-intensive processes, challenge the problem of automatic modeling. The problem and solutions to automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.

  20. Automatic labelling of topic models learned from Twitter by summarisation

    OpenAIRE

    Cano Basave, Amparo Elizabeth; He, Yulan; Xu, Ruifeng

    2014-01-01

    Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures which provide further insights into the data. The automatic labelling of such topics derived from social media poses however new challenges since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches which depend on external knowledge sources become less applicable here since relevant articles/concepts of the ext...

  1. Automatic bootstrapping of a morphable face model using multiple components

    NARCIS (Netherlands)

    Haar, F.B. ter; Veltkamp, R.C.

    2009-01-01

    We present a new bootstrapping algorithm to automatically enhance a 3D morphable face model with new face data. Our algorithm is based on a morphable model fitting method that uses a set of predefined face components. This fitting method produces accurate model fits to 3D face data with noise and ho

  2. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    -like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to...

  3. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    Science.gov (United States)

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  4. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  5. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    Science.gov (United States)

    Christiansen, Bo

    2015-04-01

    Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as a starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
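    To give a concrete sense of how easily such likelihoods can be evaluated, the sketch below computes, on a grid, the log-likelihood of the slope in a structural Gaussian errors-in-variables model with known error variances, after the latent true regressor has been marginalized analytically. This is a simplified stand-in for the paper's general Bayesian treatment; the simulated data and the grid are assumptions made purely for illustration.

```python
# Hedged sketch: grid evaluation of the marginal likelihood of the slope in a
# *structural* Gaussian errors-in-variables model with known error variances.
import numpy as np

rng = np.random.default_rng(4)
n, beta_true, tau, sx, sy = 300, 1.0, 2.0, 1.0, 0.5
xi = rng.normal(0.0, tau, n)                    # latent true regressor
x = xi + rng.normal(0.0, sx, n)                 # observed regressor (with error)
y = 2.0 + beta_true * xi + rng.normal(0.0, sy, n)

zx, zy = x - x.mean(), y - y.mean()             # intercept/means profiled out via centring
S = np.cov(np.vstack([zx, zy]), bias=True)      # sample covariance of the centred data

def loglik(beta, tau2):
    """Log-likelihood of centred (x, y) with the latent xi integrated out."""
    Sigma = np.array([[tau2 + sx ** 2, beta * tau2],
                      [beta * tau2, beta ** 2 * tau2 + sy ** 2]])
    _, logdet = np.linalg.slogdet(Sigma)
    # sum_i z_i^T Sigma^{-1} z_i equals n * trace(Sigma^{-1} S)
    return -0.5 * n * (2 * np.log(2 * np.pi) + logdet
                       + np.trace(np.linalg.solve(Sigma, S)))

betas = np.linspace(0.5, 1.5, 201)
taus2 = np.linspace(1.0, 9.0, 201)
ll = np.array([[loglik(b, t2) for t2 in taus2] for b in betas])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(f"naive OLS slope: {np.polyfit(x, y, 1)[0]:.3f} (attenuated by the x errors)")
print(f"EIV likelihood peak: slope {betas[i]:.3f}, latent-x variance {taus2[j]:.2f}")
```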

  6. Automatic differentiation, tangent linear models, and (pseudo) adjoints

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.

    1993-12-31

    This paper provides a brief introduction to automatic differentiation and relates it to the tangent linear model and adjoint approaches commonly used in meteorology. After a brief review of the forward and reverse modes of automatic differentiation, the ADIFOR automatic differentiation tool is introduced, and initial results of a sensitivity-enhanced version of the MM5 PSU/NCAR mesoscale weather model are presented. We also present a novel approach to the computation of gradients that uses a reverse mode approach at the time-loop level and a forward mode approach at every time step. The resulting "pseudoadjoint" shares the characteristic of an adjoint code that the ratio of gradient cost to function evaluation cost does not depend on the number of independent variables. In contrast to a true adjoint approach, however, the nonlinearity of the model plays no role in the complexity of the derivative code.
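    The forward (tangent linear) mode mentioned above can be illustrated in a few lines with dual numbers: each value carries a directional derivative that is propagated through the arithmetic. This is only a conceptual sketch; ADIFOR itself is a Fortran source-transformation tool, and the reverse (adjoint) mode, whose cost advantage for gradients is discussed in the abstract, is not shown here.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
import math

class Dual:
    """A number carrying a value and a derivative (a directional tangent)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule applied to the elementary function sin
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot) if isinstance(x, Dual) else math.sin(x)

def f(x, y):
    return sin(x * y) + 3.0 * x      # the "model" to be differentiated

# One forward sweep per independent variable (cost grows with their number,
# which is why adjoints are preferred for gradients of scalar outputs):
df_dx = f(Dual(1.2, 1.0), Dual(0.7, 0.0)).dot
df_dy = f(Dual(1.2, 0.0), Dual(0.7, 1.0)).dot
print(df_dx, df_dy)   # compare with cos(x*y)*y + 3 and cos(x*y)*x
```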

  7. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2004-01-01

    Full Text Available Abstract A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.
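    A deliberately stripped-down version of the estimation idea is sketched below: plain EM for a two-component univariate Gaussian mixture, with exact posterior membership probabilities in the E-step standing in for the Gibbs-sampling E-step and the correlated random effects of the actual model. The synthetic "somatic cell score" data are an assumption for illustration only.

```python
# Plain EM for a two-component Gaussian mixture (a simplified stand-in for the
# paper's Monte Carlo EM with correlated random effects).
import numpy as np

rng = np.random.default_rng(5)
# Synthetic log somatic cell scores: "uninfected" and "infected" components
data = np.concatenate([rng.normal(4.0, 0.6, 700), rng.normal(6.5, 0.9, 300)])

w_infected, mu, sd = 0.5, np.array([3.0, 7.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability of membership in each component
    dens = np.exp(-0.5 * ((data[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens * np.array([1 - w_infected, w_infected])
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted maximum-likelihood updates
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    w_infected = nk[1] / len(data)

print(f"P(infected) = {w_infected:.2f}, means = {mu.round(2)}, sds = {sd.round(2)}")
# Ranking-rule analogue: posterior probability of the putative uninfected group
p_uninfected = resp[:, 0]
```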

  8. Asymptotic Properties of Maximum Likelihood Estimates in the Mixed Poisson Model

    OpenAIRE

    Lambert, Diane; Tierney, Luke

    1984-01-01

    This paper considers the asymptotic behavior of the maximum likelihood estimators (mle's) of the probabilities of a mixed Poisson distribution with a nonparametric mixing distribution. The vector of estimated probabilities is shown to converge in probability to the vector of mixed probabilities at rate $n^{1/2-\\varepsilon}$ for any $\\varepsilon > 0$ under a generalized $\\chi^2$ distance function. It is then shown that any finite set of the mle's has the same joint limiting distribution as doe...

  9. Adapted Maximum-Likelihood Gaussian Models for Numerical Optimization with Continuous EDAs

    OpenAIRE

    Bosman, Peter; Grahl, J; Thierens, D.

    2007-01-01

    This article focuses on numerical optimization with continuous Estimation-of-Distribution Algorithms (EDAs). Specifically, the focus is on the use of one of the most common and best understood probability distributions: the normal distribution. We first give an overview of the existing research on this topic. We then point out a source of inefficiency in EDAs that make use of the normal distribution with maximum-likelihood (ML) estimates. Scaling the covariance matrix beyond its ML estimate d...
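    The following sketch shows the kind of continuous EDA the article analyses: a Gaussian search distribution whose mean and covariance are re-estimated from the selected individuals each generation, with an optional enlargement factor on the covariance. The sphere objective, population sizes and scaling constant are arbitrary choices, not the authors' benchmark setup or their adaptive scaling scheme.

```python
# Simple continuous Gaussian EDA with (approximately) maximum-likelihood
# distribution estimates and a covariance enlargement factor c (c = 1.0 would
# correspond to the plain sample estimate).
import numpy as np

rng = np.random.default_rng(6)
dim, pop, top, c = 10, 200, 50, 1.5

def sphere(x):
    return np.sum(x ** 2, axis=1)

mean, cov = np.full(dim, 5.0), np.eye(dim) * 4.0
for gen in range(60):
    x = rng.multivariate_normal(mean, cov, size=pop)   # sample a new population
    elite = x[np.argsort(sphere(x))[:top]]             # truncation selection
    mean = elite.mean(axis=0)                          # sample (≈ ML) mean
    cov = c * np.cov(elite, rowvar=False)              # sample covariance, enlarged by c

print(f"best fitness after {gen + 1} generations: {sphere(mean[None]).item():.3e}")
```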

  10. A Maximum Likelihood Estimator based on First Differences for a Panel Data Tobit Model with Individual Specific Effects

    OpenAIRE

    A.S. Kalwij

    2000-01-01

    This paper proposes an alternative estimation procedure for a panel data Tobit model with individual specific effects based on taking first differences of the equation of interest. This helps to alleviate the sensitivity of the estimates to a specific parameterization of the individual specific effects and some Monte Carlo evidence is provided in support of this. To allow for arbitrary serial correlation estimation takes place in two steps: Maximum Likelihood is applied to each pair of consec...

  11. Formalising responsibility modelling for automatic analysis

    OpenAIRE

    Simpson, Robbie; Storer, Tim

    2015-01-01

    Modelling the structure of social-technical systems as a basis for informing software system design is a difficult compromise. Formal methods struggle to capture the scale and complexity of the heterogeneous organisations that use technical systems. Conversely, informal approaches lack the rigour needed to inform the software design and construction process or enable automated analysis. We revisit the concept of responsibility modelling, which models social technical systems as a collec...

  12. Automatic reactor model synthesis with genetic programming.

    Science.gov (United States)

    Dürrenmatt, David J; Gujer, Willi

    2012-01-01

    Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site. PMID:22277238

  13. Automatic 3D Modeling of the Urban Landscape

    NARCIS (Netherlands)

    Esteban, I.; Dijk, J.; Groen, F.A.

    2010-01-01

    In this paper we present a fully automatic system for building 3D models of urban areas at the street level. We propose a novel approach for the accurate estimation of the scale consistent camera pose given two previous images. We employ a new method for global optimization and use a novel sampling

  14. Geometric model of robotic arc welding for automatic programming

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Geometric information is important for the automatic programming of arc welding robots. Complete geometric models of robotic arc welding are established in this paper. In the geometric model of the weld seam, an equation with seam length as its parameter is introduced to represent any weld seam. The method to determine discrete programming points on a weld seam is presented. In the geometric model of the weld workpiece, three classes of primitives and a CSG tree are used to describe the weld workpiece. A detailed data structure is presented. For the pose transformation of the torch, the world frame, torch frame and active frame are defined, and the transformations between frames are presented. Based on these geometric models, an automatic programming software package for robotic arc welding, RAWCAD, is developed. Experiments show that the geometric models are practical and reliable.

  15. Nonlinear model predictive control using automatic differentiation

    OpenAIRE

    Al Seyab, Rihab Khalid Shakir

    2006-01-01

    Although nonlinear model predictive control (NMPC) might be the best choice for a nonlinear plant, it is still not widely used. This is mainly due to the computational burden associated with solving online a set of nonlinear differential equations and a nonlinear dynamic optimization problem in real time. This thesis is concerned with strategies aimed at reducing the computational burden involved in different stages of the NMPC such as optimization problem, state estimation, an...

  16. Towards automatic calibration of 2-dimensional flood propagation models

    Directory of Open Access Journals (Sweden)

    P. Fabio

    2009-11-01

    Full Text Available Hydraulic models for flood propagation description are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays there are many models of different complexity regarding the mathematical foundation and spatial dimensions available, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models like e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are very rarely monitored due to the disturbances inflicted by them and the lack of appropriate measuring equipment in place. Secondly, especially the two-dimensional models are computationally very demanding and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well-documented flood event in August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which offers the possibility of automatic calibration, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by a too large number of degrees of freedom in the calibration data in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.

  17. Automatic balancing of 3D models

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Schmidt, Ryan; Bærentzen, Jakob Andreas

    2014-01-01

    3D printing technologies allow for more diverse shapes than are possible with molds and the cost of making just one single object is negligible compared to traditional production methods. However, not all shapes are suitable for 3D print. One of the remaining costs is therefore human time spent......, in these cases, we will apply a rotation of the object which only deforms the shape a little near the base. No user input is required but it is possible to specify manufacturing constraints related to specific 3D print technologies. Several models have successfully been balanced and printed using both polyjet...

  18. Towards automatic model based controller design for reconfigurable plants

    DEFF Research Database (Denmark)

    Michelsen, Axel Gottlieb; Stoustrup, Jakob; Izadi-Zamanabadi, Roozbeh

    2008-01-01

    This paper introduces model-based Plug and Play Process Control, a novel concept for process control, which allows a model-based control system to be reconfigured when a sensor or an actuator is plugged into a controlled process. The work reported in this paper focuses on composing a monolithic...... model from models of a process to be controlled and the actuators and sensors connected to the process, and propagation of tuning criteria from these sub-models, thereby accommodating automatic controller synthesis using existing methods. The developed method is successfully tested on an industrial case...

  19. Modelling of risk events with uncertain likelihoods and impacts in large infrastructure projects

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2010-01-01

    This paper presents contributions to the mathematical core of risk and uncertainty management in compliance with the principles of New Budgeting laid out in 2008 by the Danish Ministry of Transport to be used in large infrastructure projects. Basically, the new principles are proposed in order to prevent future budget overruns. One of the central ideas is to introduce improved risk management processes, and the present paper addresses this particular issue. A relevant cost function in terms of unit prices and quantities is developed, and an event impact matrix with uncertain impacts from independent uncertain risk events is used to calculate the total uncertain risk budget. Cost impacts from the individual risk events on the individual project activities are tracked precisely in order to comply with the requirements of New Budgeting. Additionally, uncertain likelihoods for the occurrence of risk...

  20. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-01-01

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology. PMID:20375445

  1. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    Science.gov (United States)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  2. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    WANG Qihua; Härdle Wolfgang

    2004-01-01

    In this paper, linear errors-in-response models are considered in the presence of validation data on the responses. A semiparametric dimension reduction technique is employed to define an estimator of β with asymptotic normality, the estimated empirical log-likelihoods and the adjusted empirical log-likelihoods for the vector of regression coefficients and linear combinations of the regression coefficients, respectively. The estimated empirical log-likelihoods are shown to be asymptotically distributed as weighted sums of independent χ₁² variables and the adjusted empirical log-likelihoods are proved to be asymptotically distributed as standard chi-squares, respectively.

  3. Likelihood for interval-censored observations from multi-state models

    DEFF Research Database (Denmark)

    Commenges, Daniel

    2002-01-01

    multi-state models; illness-death; counting processes; ignorability; interval-censoring; Markov models

  4. Automatic Part Primitive Feature Identification Based on Faceted Models

    Directory of Open Access Journals (Sweden)

    Muizuddin Azka

    2012-09-01

    Full Text Available Feature recognition technology has been developed along with the process of integrating CAD/CAPP/CAM. Automatic feature detection applications based on faceted models are expected to speed up manufacturing process design activities, such as selecting the tool to be used or the machining process required for a variety of different features. This research focuses on the detection of primitive features available in a part. This is done by applying part slicing and grouping adjacent facets. The type of feature is identified by simply evaluating the normal vector directions of each facet group. In order to identify features on the various planes of a part, the planes are rotated, one at a time, to be parallel with the reference plane. The results showed that this method can automatically and accurately identify the primitive features in all planes of the tested part, covering pocket, cylindrical and profile features.

  5. A Rayleigh Doppler frequency estimator derived from maximum likelihood theory

    OpenAIRE

    Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul

    1999-01-01

    Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler and automatic reconfiguration to the speed of terminal movement can optimize cell capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FF...

  6. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives. PMID:25487423

  7. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    International Nuclear Information System (INIS)

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field

  8. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  9. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Science.gov (United States)

    He, Yi; Liwo, Adam; Scheraga, Harold A.

    2015-12-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  10. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html

  11. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we propose a polygon expression that is useful for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  12. A New Model for Automatic Raster-to-Vector Conversion

    Directory of Open Access Journals (Sweden)

    Hesham E. ElDeeb

    2011-06-01

    Full Text Available There is a growing need for automatic digitizing, or so-called automated raster-to-vector conversion (ARVC), for maps. The benefit of ARVC is the production of maps that consume less space and are easy to search for or retrieve information from. In addition, ARVC is the fundamental step in reusing old maps at a higher level of recognition. In this paper, a new model for ARVC is developed. The proposed model converts the “paper maps” into electronic formats for Geographic Information Systems (GIS) and evaluates the performance of the conversion process. To overcome the limitations of existing commercial vectorization software packages, the proposed model is customized to separate textual information, usually the cause of problems in the automatic conversion process, from the delimiting graphics of the map. The model retains the coordinates of the textual information for a later merge with the map after the conversion process. The proposed model also addresses the localization problems in ARVC through a knowledge-supported intelligent vectorization system that is designed specifically to improve the accuracy and speed of the vectorization process. Finally, the model has been implemented on a symmetric multiprocessing (SMP) architecture, in order to achieve higher speedup and performance.

  13. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    Science.gov (United States)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras, and its main goal is to monitor seismic activity, generate seismic hazard alerts and provide data for Brazilian tectonic and structural research. Each institution operates and maintains its seismic network, sharing the data in a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are automatically performed by SeisComP3 (SC3). However, the SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was designed to process raw seismograms and generate 1D S-velocity profiles. This system automatically calculates receiver function (RF) traces, Vp/Vs ratio (h-k stack) and surface wave dispersion (SWD) curves. These traces and curves are then used to calibrate lithosphere seismic velocity models using a joint inversion scheme. The results can be reviewed by an analyst, who can change processing parameters and select or reject the RF traces and SWD curves used in the lithosphere model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.

  14. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized system more general and easier, in this paper, a new and intuitive language PSL (Parameterized-system Specification Language) is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely symbolically represent the collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since the abstract and symbolic techniques are exploited in the symbolic model, state-explosion problem in traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on UNIX platform. Thus, a complete tool for verifying the parameterized synchronous systems is obtained and tested for some cases. The experimental results show that the method is satisfactory.

  15. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
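    Only the final refinement stage (step 3) is easy to sketch compactly; the code below runs a few nearest-neighbour plus SVD (Kabsch) iterations of ICP on synthetic point sets. The 3D-SIFT/FPFH feature extraction and the SAC-IA coarse alignment used by the authors are not reproduced, and the synthetic "surface" is an assumption made only for illustration.

```python
# ICP refinement sketch: closest-point correspondences followed by the Kabsch
# (SVD) solution for the best rigid transform, iterated a few times.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
target = rng.normal(size=(500, 3))                      # stand-in "skull/face" points

def rigid(angle=0.3, t=(0.2, -0.1, 0.4)):
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return R, np.array(t)

R_true, t_true = rigid()
source = target @ R_true.T + t_true + rng.normal(0, 0.01, target.shape)

R, t = np.eye(3), np.zeros(3)                           # coarse alignment assumed done
tree = cKDTree(target)
for _ in range(30):
    moved = source @ R.T + t
    _, idx = tree.query(moved)                          # closest-point correspondences
    A = source - source.mean(0)
    B = target[idx] - target[idx].mean(0)
    U, _, Vt = np.linalg.svd(A.T @ B)                   # Kabsch: optimal rotation
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = target[idx].mean(0) - source.mean(0) @ R.T

print("rotation error:", np.linalg.norm(R @ R_true - np.eye(3)).round(4))
```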

  16. Maximum Likelihood Estimation in Latent Class Models For Contingency Table Data

    OpenAIRE

    Fienberg, S.E.; Hersh, P.; Rinaldo, A.; Zhou, Y

    2007-01-01

    Statistical models with latent structure have a history going back to the 1950s and have seen widespread use in the social sciences and, more recently, in computational biology and in machine learning. Here we study the basic latent class model proposed originally by the sociologist Paul F. Lazarsfeld for categorical variables, and we explain its geometric structure. We draw parallels between the statistical and geometric properties of latent class models and we illustrate geometrically the ca...

  17. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  18. Risk analysis of Leksell Gamma Knife Model C with automatic positioning system

    International Nuclear Information System (INIS)

    Purpose: This study was conducted to evaluate the decrease in risk from misadministration of the new Leksell Gamma Knife Model C with Automatic Positioning System compared with previous models. Methods and Materials: Elekta Instruments, A.B. of Stockholm has introduced a new computer-controlled Leksell Gamma Knife Model C which uses motor-driven trunnions to reposition the patient between isocenters (shots) without human intervention. Previous models required the operators to manually set coordinates from a printed list, permitting opportunities for coordinate transposition, incorrect helmet size, incorrect treatment times, missing shots, or repeated shots. Results: A risk analysis was conducted between craniotomy involving hospital admission and outpatient Gamma Knife radiosurgery. A report of the Institute of Medicine of the National Academies dated November 29, 1999 estimated that medical errors kill between 44,000 and 98,000 people each year in the United States. Another report from the National Nosocomial Infections Surveillance System estimates that 2.1 million nosocomial infections occur annually in the United States in acute care hospitals alone, with 31 million total admissions. Conclusions: All medical procedures have attendant risks of morbidity and possibly mortality. Each patient should be counseled as to the risk of adverse effects as well as the likelihood of good results for alternative treatment strategies. This paper seeks to fill a gap in the existing medical literature, which has a paucity of data involving risk estimates for stereotactic radiosurgery

  19. Maximum likelihood estimators for extended growth curve model with orthogonal between-individual design matrices

    NARCIS (Netherlands)

    Klein, Daniel; Zezula, Ivan

    2015-01-01

    The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in the way how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design matri

  20. Automatically calibrating admittances in KATE's autonomous launch operations model

    Science.gov (United States)

    Morgan, Steve

    1992-09-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  1. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    Science.gov (United States)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  2. Regularization for Generalized Additive Mixed Models by Likelihood-Based Boosting

    OpenAIRE

    Groll, Andreas; Tutz, Gerhard

    2012-01-01

    With the emergence of semi- and nonparametric regression the generalized linear mixed model has been expanded to account for additive predictors. In the present paper an approach to variable selection is proposed that works for generalized additive mixed models. In contrast to common procedures it can be used in high-dimensional settings where many covariates are available and the form of the influence is unknown. It is constructed as a componentwise boosting method and hence is able to pe...

  3. Cross validation and maximum likelihood estimations of hyper-parameters of Gaussian processes with model mis-specification

    International Nuclear Information System (INIS)

    The Maximum Likelihood (ML) and Cross Validation (CV) methods for estimating covariance hyper-parameters are compared in the context of Kriging with a mis-specified covariance structure. A two-step approach is used. First, the case of the estimation of a single variance hyper-parameter is addressed, for which the fixed correlation function is mis-specified. A predictive-variance-based quality criterion is introduced and a closed-form expression of this criterion is derived. It is shown that when the correlation function is mis-specified, CV does better than ML, while ML is optimal when the model is well-specified. In the second step, the results of the first step are extended to the case where the hyper-parameters of the correlation function are also estimated from data. (author)
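    The two estimators being compared can be written down compactly for a single variance hyper-parameter and a fixed (here deliberately mis-specified) correlation matrix: the ML estimate and a leave-one-out CV estimate built from the standard closed-form LOO formulas. The correlation families, ranges and sample below are assumptions chosen only to make the sketch run; they do not reproduce the paper's criterion or its experiments.

```python
# ML versus leave-one-out CV estimation of a single Kriging variance
# hyper-parameter, with the correlation matrix held fixed and mis-specified.
import numpy as np

rng = np.random.default_rng(8)
n = 120
s = np.sort(rng.uniform(0, 10, n))                    # 1-D observation sites
D = np.abs(s[:, None] - s[None, :])

def exp_corr(dist, ell):
    return np.exp(-dist / ell)

R_true = exp_corr(D, 1.0)                             # correlation generating the data
R_model = exp_corr(D, 3.0) + 1e-10 * np.eye(n)        # assumed (wrong-range) correlation

sigma2_true = 2.0
y = np.linalg.cholesky(sigma2_true * R_true + 1e-10 * np.eye(n)) @ rng.normal(size=n)

Rinv = np.linalg.inv(R_model)
alpha = Rinv @ y
sigma2_ml = (y @ alpha) / n                           # ML variance under the assumed model
# Closed-form LOO residuals: e_i = alpha_i / Rinv_ii, LOO variance = sigma^2 / Rinv_ii;
# the CV estimate sets the mean standardized squared LOO error to one:
sigma2_cv = np.mean(alpha ** 2 / np.diag(Rinv))
print(f"true sigma^2 = {sigma2_true}, ML = {sigma2_ml:.2f}, CV = {sigma2_cv:.2f}")
```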

  4. Maximum Likelihood Estimation in the Tensor Normal Model with a Structured Mean

    OpenAIRE

    Nzabanita, Joseph; von Rosen, Dietrich; Singull, Martin

    2015-01-01

    There is a growing interest in the analysis of multi-way data. In some studies the inference about the dependencies in three-way data is done using the third order tensor normal model, where the focus is on the estimation of the variance-covariance matrix which has a Kronecker product structure. Little attention is paid to the structure of the mean, though, there is a potential to improve the analysis by assuming a structured mean. In this paper, we introduce a 2-fold growth curve model by as...

  5. Using the Extended Parallel Process Model to Examine Teachers' Likelihood of Intervening in Bullying

    Science.gov (United States)

    Duong, Jeffrey; Bradshaw, Catherine P.

    2013-01-01

    Background: Teachers play a critical role in protecting students from harm in schools, but little is known about their attitudes toward addressing problems like bullying. Previous studies have rarely used theoretical frameworks, making it difficult to advance this area of research. Using the Extended Parallel Process Model (EPPM), we examined the…

  6. Maximum likelihood estimation of neutral model parameters for multiple samples with different degrees of dispersal limitation

    NARCIS (Netherlands)

    Etienne, Rampal S.

    2009-01-01

    In a recent paper, I presented a sampling formula for species abundances from multiple samples according to the prevailing neutral model of biodiversity, but practical implementation for parameter estimation was only possible when these samples were from local communities that were assumed to be equ

  7. Estimation of Spatial Sample Selection Models : A Partial Maximum Likelihood Approach

    NARCIS (Netherlands)

    Rabovic, Renata; Cizek, Pavel

    2016-01-01

    To analyze data obtained by non-random sampling in the presence of cross-sectional dependence, estimation of a sample selection model with a spatial lag of a latent dependent variable or a spatial error in both the selection and outcome equations is considered. Since there is no estimation framework

  8. Model Considerations for Memory-based Automatic Music Transcription

    Science.gov (United States)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In the Bayesian paradigm, these assumptions are typically expressed in the form of a prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about the music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using a combination of pdfs. The validity of the model is tested in simulation using synthetic data.
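    A toy version of this observation model is sketched below: a single spectrogram frame is modelled as a non-negative combination of known harmonic note templates, and the weights are recovered by non-negative least squares. This is only a stand-in for the Bayesian priors on the weights discussed in the paper; the note dictionary and the synthetic frame are assumptions made for illustration.

```python
# Frame = dictionary-of-note-spectra @ weights, weights recovered by NNLS.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
freqs = np.linspace(0, 4000, 512)

def note_spectrum(f0, n_harmonics=8, width=20.0):
    """Crude harmonic template: Gaussian bumps at multiples of f0, decaying with harmonic index."""
    return sum(np.exp(-0.5 * ((freqs - k * f0) / width) ** 2) / k
               for k in range(1, n_harmonics + 1))

library_f0 = [262, 294, 330, 349, 392, 440, 494]                 # C4..B4 fundamentals
D = np.column_stack([note_spectrum(f) for f in library_f0])      # known-sound dictionary

true_w = np.array([1.0, 0, 0, 0, 0.6, 0, 0])                     # a C4 + G4 "chord"
frame = D @ true_w + 0.01 * rng.random(len(freqs))               # observed frame + noise

w_hat, _ = nnls(D, frame)                                        # estimated weights
for f0, w in zip(library_f0, w_hat.round(2)):
    print(f"{f0} Hz: weight {w}")
```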

  9. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Full Text Available Abstract Background Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm softwares. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. The genes top and bottom ranked by the three approaches were similar; however, notable differences were found for the best pair of genes selected by each method and for the remaining genes in the rankings. Differences among the normalized expression values of the targets under each statistical approach were also found. Conclusions The optimal statistical properties of Maximum Likelihood estimation combined with the flexibility of mixed models allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental

  10. Fast Automatic Precision Tree Models from Terrestrial Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    Mathias Disney

    2013-01-01

Full Text Available This paper presents a new method for quickly and automatically constructing precision tree models from point clouds of the trunk and branches obtained by terrestrial laser scanning. The input of the method is a point cloud of a single tree scanned from multiple positions. The surface of the visible parts of the tree is robustly reconstructed by making a flexible cylinder model of the tree. The thorough quantitative model also records the topological branching structure. In this paper, every major step of the whole model reconstruction process, from the input to the finished model, is presented in detail. The model is constructed by a local approach in which the point cloud is covered with small sets corresponding to connected surface patches in the tree surface. The neighbor-relations and geometrical properties of these cover sets are used to reconstruct the details of the tree and, step by step, the whole tree. The point cloud and the sets are segmented into branches, after which the branches are modeled as collections of cylinders. From the model, the branching structure and size properties, such as volume and branch size distributions, for the whole tree or some of its parts, can be approximated. The approach is validated using both measured and modeled terrestrial laser scanner data from real trees and detailed 3D models. The results show that the method allows easy extraction of various tree attributes from terrestrial or mobile laser scanning point clouds.

  11. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland)]; Seppaenen, M. [North-Carelian Power Co (Finland)]; Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)]

    1998-08-01

An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined with the data obtained from the fault indicators at the line branching points in order to find the actual fault point. As a third technique, in the absence of better fault location data, statistical information on line section fault frequencies can also be used. Fuzzy logic is used to combine the different sources of fault location information. As a result, the probability weights for the fault being located in different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. The supply is then restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load-carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter describes the practical experiences gained during the trial period. The benefits of this kind of automation are also assessed and future developments are outlined.

  12. A GIS Model for Minefield Area Prediction: The Minefield Likelihood Procedure

    OpenAIRE

    Chamberlayne, Edward Pye

    2002-01-01

    Existing minefields left over from previous conflicts pose a grave threat to humanitarian relief operations, domestic everyday life, and future military operations. The remaining minefields in Afghanistan, from the decade long war with the Soviet Union, are just one example of this global problem. The purpose of this research is to develop a methodology that will predict areas where minefields are the most likely to exist through use of a GIS model. The concept is to combine geospatial dat...

  13. Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y

    2015-06-01

    Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments-prediction error-is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies. PMID:25665667
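
    For readers unfamiliar with the models being meta-analysed, the following is a minimal sketch of the reward prediction-error signal they compute; the learning rate and reward sequence are illustrative, not taken from any of the reviewed studies.

```python
# Minimal sketch of a reward prediction-error model of the kind fitted in
# the meta-analysed studies (names and parameter values are illustrative).
def update_value(value, reward, learning_rate=0.1):
    prediction_error = reward - value          # delta: actual minus expected reward
    return value + learning_rate * prediction_error, prediction_error

value = 0.0
for reward in [1, 1, 0, 1, 0, 0, 1]:           # toy sequence of outcomes
    value, delta = update_value(value, reward)
    print(f"value={value:.3f}  prediction_error={delta:+.3f}")
```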

  14. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum...... likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  15. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

With the development of modern industry, mass-produced sheet-metal parts have been widely applied in the mechanical, communication, electronics, and light industries in recent decades, but advances in sheet-metal part design and manufacturing remain slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of a sheet-metal part, in which the characteristics of sheet-metal features are used for their classification and graph-based representation. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature-relationship analysis. Since the extracted features include abundant geometric and engineering information, they are effective for downstream applications such as feature rebuilding and stamping process planning.

  16. An automatic and effective parameter optimization method for model tuning

    Science.gov (United States)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. In contrast to traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
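
    A rough sketch of the three-step idea on a toy objective function is given below; the screening threshold, candidate starting values and the objective itself are invented for illustration and stand in for the paper's GCM evaluation metrics.

```python
# Hypothetical stand-in for the "three-step" tuning: (1) screen parameter
# sensitivity, (2) pick a good starting point, (3) run the downhill simplex.
import numpy as np
from scipy.optimize import minimize

def model_skill(params):                      # toy objective: lower is better
    x = np.asarray(params)
    return np.sum((x - np.array([0.3, 1.5, -0.7, 2.0]))**2) + 0.01 * x[3]**2

def embed(values, base, idx):                 # place tuned values into the full vector
    p = base.copy()
    p[idx] = values
    return p

p0 = np.zeros(4)

# Step 1: crude one-at-a-time sensitivity screening.
sens = []
for i in range(len(p0)):
    dp = np.zeros_like(p0); dp[i] = 0.5
    sens.append(abs(model_skill(p0 + dp) - model_skill(p0 - dp)))
sensitive = [i for i, s in enumerate(sens) if s > 0.1]

# Step 2: choose the best of a few candidate starting values for the sensitive parameters.
candidates = [np.full(len(sensitive), v) for v in (-1.0, 0.0, 1.0)]
start = min(candidates, key=lambda c: model_skill(embed(c, p0, sensitive)))

# Step 3: downhill simplex (Nelder-Mead) over the sensitive parameters only.
res = minimize(lambda c: model_skill(embed(c, p0, sensitive)),
               start, method="Nelder-Mead")
print(sensitive, res.x, res.fun)
```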

  17. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    Science.gov (United States)

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  18. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  19. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    CERN Document Server

    McMahon, J; Mahon, John Mc

    1995-01-01

    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...

  20. Automatic Construction of Anomaly Detectors from Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

Ferragut, Erik M [ORNL]; Darmon, David M [ORNL]; Shue, Craig A [ORNL]; Kelley, Stephen [ORNL]

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
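
    As a loose illustration of deriving a detector from a single trained probabilistic model, the sketch below uses a Gaussian mixture as a stand-in for the paper's Latent Dirichlet Allocation model of TCP connections; the features and threshold quantile are invented.

```python
# Sketch of deriving an anomaly detector from a single probabilistic model.
# A Gaussian mixture stands in for the paper's Latent Dirichlet Allocation
# model of TCP connections; feature values here are purely synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
normal_traffic = rng.normal(loc=[0, 0], scale=1.0, size=(2000, 2))   # training data

model = GaussianMixture(n_components=3, random_state=0).fit(normal_traffic)

# One trained model, many possible detectors: here, a per-record detector that
# flags records whose log-likelihood falls below a low quantile of training scores.
threshold = np.quantile(model.score_samples(normal_traffic), 0.001)

new_records = np.vstack([rng.normal(size=(5, 2)), [[6.0, -6.0]]])    # last row is anomalous
flags = model.score_samples(new_records) < threshold
print(flags)
```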

  1. Fully automatic perceptual modeling of near regular textures

    Science.gov (United States)

    Menegaz, G.; Franceschetti, A.; Mecocci, A.

    2007-02-01

Near-regular textures feature a relatively high degree of regularity. They can be conveniently modeled by combining a suitable set of textons with a placement rule. The main issues in this respect are the selection of the minimum set of textons capturing the variability of the basic patterns; the identification and positioning of the generating lattice; and the modeling of the variability in both the texton structure and the deviation from periodicity of the lattice, which captures the naturalness of the considered texture. In this contribution, we provide a fully automatic solution to both the analysis and the synthesis issues, leading to the generation of texture samples that are perceptually indistinguishable from the original ones. The definition of an ad hoc periodicity index allows the suitability of the model for a given texture to be predicted. The model is validated through psychovisual experiments providing the conditions for subjective equivalence between the original and synthetic textures, while allowing the minimum number of textons needed to meet such a requirement for a given texture class to be determined. This is of prime importance in model-based coding applications, such as the one we foresee, as it minimizes the amount of information to be transmitted to the receiver.

  2. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  3. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    Directory of Open Access Journals (Sweden)

    Santiago Omar Caballero Morales

    2009-01-01

Full Text Available Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of “metamodels” that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.

  4. Electricity prices forecasting by automatic dynamic harmonic regression models

    International Nuclear Information System (INIS)

The changes experienced by electricity markets in recent years have created the need for more accurate electricity price forecasting tools, for both producers and consumers. Many methodologies have been applied to this aim but, in the view of the authors, state space models are not yet fully exploited. The present paper proposes a univariate dynamic harmonic regression model set up in a state space framework for forecasting prices in these markets. The advantages of the approach are threefold. Firstly, a fast automatic identification and estimation procedure based on the frequency domain is proposed. Secondly, the recursive algorithms applied offer adaptive predictions that compare favourably with other techniques. Finally, since the method is based on unobserved components models, explicit information about the trend, seasonal and irregular behaviours of the series can be extracted. This information is of great value to electricity companies' managers in improving their strategies, i.e. it provides management innovations. The good forecast performance and the rapid adaptability of the model to changes in the data are illustrated with actual prices taken from the PJM interconnection in the US and from the Spanish market for the year 2002
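
    The sketch below illustrates only the harmonic-regression core of such a model, fitted by ordinary least squares on daily and weekly sine/cosine terms with synthetic prices; the paper's method additionally uses a state space form, frequency-domain identification and recursive (adaptive) estimation.

```python
# Stripped-down harmonic regression for hourly prices: ordinary least squares
# on sine/cosine terms at daily and weekly periods (all data are synthetic).
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(14 * 24)                                   # two weeks of hourly data
price = 30 + 8*np.sin(2*np.pi*t/24) + 3*np.cos(2*np.pi*t/168) + rng.normal(0, 1, t.size)

periods = [24, 168]                                      # daily and weekly cycles
X = np.column_stack([np.ones_like(t, dtype=float)] +
                    [f(2*np.pi*t/p) for p in periods for f in (np.sin, np.cos)])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

t_new = np.arange(t[-1] + 1, t[-1] + 25)                 # one-day-ahead forecast
X_new = np.column_stack([np.ones_like(t_new, dtype=float)] +
                        [f(2*np.pi*t_new/p) for p in periods for f in (np.sin, np.cos)])
print(np.round(X_new @ beta, 2))
```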

  5. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  6. TMB: Automatic differentiation and laplace approximation

    DEFF Research Database (Denmark)

    Kristensen, Kasper; Nielsen, Anders; Berg, Casper Willestofte;

    2016-01-01

    TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel...... computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are...... automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (approximate to 10(6)) and parameters (approximate to 10...

  7. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric

  8. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous and, furthermore, are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  9. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...
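
    The jagged likelihood surfaces referred to above are easy to reproduce; the toy sketch below evaluates the Gaussian log-likelihood of a noisy logistic-map orbit over a fine grid of assumed initial conditions (all parameter values are illustrative).

```python
# Sketch of a "chaotic likelihood": log-likelihood of noisy observations of a
# logistic-map orbit as a function of the assumed initial condition. Even on a
# fine grid the surface is extremely jagged, which is the difficulty the paper addresses.
import numpy as np

def orbit(x0, n, r=4.0):
    xs = np.empty(n); x = x0
    for i in range(n):
        xs[i] = x
        x = r * x * (1.0 - x)
    return xs

rng = np.random.default_rng(3)
true_x0, n_obs, sigma = 0.3712, 20, 0.01
obs = orbit(true_x0, n_obs) + rng.normal(0, sigma, n_obs)

grid = np.linspace(0.3, 0.45, 2001)
loglik = np.array([-0.5 * np.sum((obs - orbit(x0, n_obs))**2) / sigma**2 for x0 in grid])
print(grid[np.argmax(loglik)], loglik.max())   # nearby grid points differ wildly in height
```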

  10. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the geometry model needs to be established, described either in the Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but most existing programs have problems: some are not accurate, or are tied to specific CAD formats. To convert CAD models into GDML accurately, a CAD-based modeling method for Geant4 was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the translation between the CAD model, represented with boundary representation (B-REP), and the GDML model, represented with constructive solid geometry (CSG). First, the CAD model is decomposed into simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After these solids are generated, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for automatic Geant4 modeling. (authors)

  11. Automatic prediction of facial trait judgments: appearance vs. structural models.

    Directory of Open Access Journals (Sweden)

    Mario Rojas

Full Text Available Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g., dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by a certain type of holistic description of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  12. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  13. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122
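
    The candidate-generation idea can be illustrated outside SBML; in the sketch below, reduced variants of a small regression "master model" stand in for the reaction-network variants ModelMage produces, and AIC stands in for its data-driven model ranking.

```python
# Illustration of the candidate-generation idea (not ModelMage itself): build
# every reduced variant of a "master" regression model by dropping optional
# terms, fit each variant to data, and rank the variants by AIC.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 - 1.0 * x2 + rng.normal(0, 0.5, n)          # x3 is irrelevant

terms = {"x1": x1, "x2": x2, "x3": x3}
results = []
for k in range(len(terms) + 1):
    for subset in itertools.combinations(terms, k):
        X = np.column_stack([np.ones(n)] + [terms[t] for t in subset])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        aic = n * np.log(rss / n) + 2 * (X.shape[1] + 1)  # +1 for the noise variance
        results.append((aic, subset))

for aic, subset in sorted(results)[:3]:                   # best-ranked candidates
    print(f"AIC={aic:8.2f}  terms={subset}")
```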

  14. Simplifying Likelihood Ratios

    OpenAIRE

    McGee, Steven

    2002-01-01

    Likelihood ratios are one of the best measures of diagnostic accuracy, although they are seldom used, because interpreting them requires a calculator to convert back and forth between “probability” and “odds” of disease. This article describes a simpler method of interpreting likelihood ratios, one that avoids calculators, nomograms, and conversions to “odds” of disease. Several examples illustrate how the clinician can use this method to refine diagnostic decisions at the bedside.
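
    For context, the exact computation the bedside shortcut replaces is the probability-to-odds-to-probability conversion sketched below (the pre-test probability and likelihood ratios are illustrative).

```python
# Exact post-test probability from a pre-test probability and a likelihood ratio:
# probability -> odds, multiply by the LR, convert back to probability.
def post_test_probability(pre_test_prob, likelihood_ratio):
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

for lr in (2, 5, 10):
    print(lr, round(post_test_probability(0.30, lr), 2))
# e.g. an LR of 10 raises a 30% pre-test probability to about 81%.
```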

  15. Automatic removal of eye movement artifacts from the EEG using ICA and the dipole model

    Institute of Scientific and Technical Information of China (English)

    Weidong Zhou; Jean Gotman

    2009-01-01

12 patients were analyzed. The experimental results indicate that ICA with the dipole model is very efficient at automatically subtracting the eye movement artifacts, while retaining the EEG slow waves and making their interpretation easier.

  16. Towards a Pattern-based Automatic Generation of Logical Specifications for Software Models

    OpenAIRE

    Klimek, Radoslaw

    2014-01-01

    The work relates to the automatic generation of logical specifications, considered as sets of temporal logic formulas, extracted directly from developed software models. The extraction process is based on the assumption that the whole developed model is structured using only predefined workflow patterns. A method of automatic transformation of workflow patterns to logical specifications is proposed. Applying the presented concepts enables bridging the gap between the benefits of deductive rea...

  17. Evaluating PcGets and RETINA as Automatic Model Selection Algorithms.

    OpenAIRE

    Jennifer L. Castle

    2005-01-01

    The paper describes two automatic model selection algorithms, RETINA and PcGets, briefly discussing how the algorithms work and what their performance claims are. RETINA's Matlab implementation of the code is explained, then the program is compared with PcGets on the data in Perez-Amaral, Gallo and White (2005, Econometric Theory, Vol. 21, pp. 262-277), "A Comparison of Complementary Automatic Modelling Methods: RETINA and PcGets", and Hoover and Perez (1999, Econometrics Journal, Vol. 2, pp....

  18. On the necessity of a consistent likelihood-based approach to model magnitude dispersion in type Ia supernovae observations

    CERN Document Server

    Lago, B L; Jorás, S E; Reis, R R R; Waga, I; Giostri, R

    2011-01-01

In this article we present an alternative, numerically efficient statistical analysis of the type Ia supernovae (SNe Ia) data: instead of performing the traditional $\chi^2$ procedure, we suggest working with the likelihood itself. We argue that the latter should be preferred to the former when dealing with parameters in the expression for the variance --- which is exactly the case of SNe Ia surveys, using either MLCS2k2 or SALT2 light-curve fitters. Although these two analyses are in principle distinct, we find no significant numerical differences in cosmological parameter estimation (in neither best-fit parameters nor confidence intervals) when using current SNe Ia data. We argue that this practical equivalence may not remain when dealing with future SNe Ia data.
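
    The distinction the authors draw rests on a standard identity, reproduced here only for context: under Gaussian errors with a parameter-dependent variance,

```latex
% Relation between the Gaussian log-likelihood and chi^2 when the variance
% depends on the parameters \theta (standard result, stated for context):
\begin{align*}
  -2\ln\mathcal{L}(\theta)
    &= \sum_i \frac{\left[\mu_{\mathrm{obs},i}-\mu_{\mathrm{th},i}(\theta)\right]^{2}}{\sigma_i^{2}(\theta)}
       + \sum_i \ln \sigma_i^{2}(\theta) + \mathrm{const} \\
    &= \chi^{2}(\theta) + \sum_i \ln \sigma_i^{2}(\theta) + \mathrm{const}.
\end{align*}
% If the \sigma_i do not depend on \theta, the extra term is constant and the two
% procedures coincide; if \sigma_i(\theta) varies, minimizing \chi^2 alone is no
% longer equivalent to maximizing the likelihood.
```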

  19. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polyno

  20. Matching and Clustering: Two Steps Towards Automatic Model Generation in Computer Vision

    OpenAIRE

    Gros, Patrick

    1993-01-01

In this paper, we present a general framework for a system of automatic modelling and recognition of 3D polyhedral objects. Such a system has many applications in robotics: recognition, localization, grasping, ... Here we focus upon one main aspect of the system: when many images of one 3D object are taken from different unknown viewpoints, how can we recognize those which represent the same aspect of the object? Briefly, it is possible to determine automatically i...

  1. Automatic synthesis of mathematical models using graph theory for optimisation of thermal energy systems

    International Nuclear Information System (INIS)

During the synthesis optimisation of an energy system, the configuration changes and there is a need to adapt the mathematical model of the system accordingly. A method based on graph theory is presented here for the automatic synthesis of the model of the energy system itself. The topology of the graph is stored in the computer memory and the computer model of the respective system is constructed automatically by object-oriented programming. The modelling diagram of the system is introduced through an Application Programming Interface. A combined-cycle system serves as an application example. The method has proved efficient and convenient.

  2. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    DEFF Research Database (Denmark)

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin;

    2012-01-01

In this study, we proposed a Continuous Time Markov Chain model for the availability of n-node clusters of a Distributed Rendering System. The chain is infinite; we formalized it and, based on the model, implemented software that can automatically build the model in the PRISM language. With the tool, whe

  3. A probabilistic approach using deformable organ models for automatic definition of normal anatomical structures for 3D treatment planning

    International Nuclear Information System (INIS)

    priori (prior) information and 2) a likelihood function. In Bayesian terminology, the energy functions in the model represent a priori information. The likelihood function is computed from the image data and can take the form of geometric measurements obtained at the skeleton and boundary points. The best match is obtained by deforming the model to optimize the posterior probability. Results : A 2D implementation of the approach was tested on CT slices through the liver and kidneys and on MRI slices through the ventricles of the brain. Automatic segmentation was successful in all cases. When the model is matched against several slices, the slice that best matches the model corresponds to the slice with the greatest posterior probability. This finding is a demonstration of object recognition. Moreover abnormalities in shape, e.g., protrusions or indentations not represented in the model, can be recognized and localized by analyzing local values of the posterior probability. Conclusion : The method for combining the model and image data warps the model to conform to the image data while preserving neighbor relationships in the model, stabilizing the localization of the object boundary region in the presence of noise, contrast gradients, and poor contrast resolution. Therefore the method is robust and the results are reproducible and user independent. The success of initial studies is encouraging and extension to 3D is planned using more sophisticated models that capture statistical variations in organ shape across a population of images

  4. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

Full Text Available Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, provided useful tools for the analysis of the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, or the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of the physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
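
    A minimal GLUE sketch is given below; the stage-discharge "model", parameter ranges, informal likelihood measure and behavioural threshold are all invented stand-ins for the paper's 1D hydraulic model and Landsat-derived observations.

```python
# Minimal GLUE sketch: sample parameter sets, score each with an informal
# likelihood measure against an "observed" stage, keep behavioural sets, and
# form likelihood-weighted bounds on the discharge estimate.
import numpy as np

rng = np.random.default_rng(5)

def toy_model(discharge, roughness):
    return (roughness * discharge) ** 0.6          # toy stage predicted from discharge

observed_stage = toy_model(1500.0, 0.035)          # stand-in for the remote-sensing observation

n = 5000
discharge = rng.uniform(500, 3000, n)              # prior ranges for the uncertain inputs
roughness = rng.uniform(0.02, 0.06, n)

residual = toy_model(discharge, roughness) - observed_stage
likelihood = np.exp(-0.5 * (residual / 0.5) ** 2)  # one possible informal likelihood measure

behavioural = likelihood > 0.5                      # subjective "behavioural" threshold;
w = likelihood[behavioural] / likelihood[behavioural].sum()  # relax it if too few sets remain
q = discharge[behavioural]
order = np.argsort(q)
cdf = np.cumsum(w[order])
lo, hi = q[order][np.searchsorted(cdf, [0.05, 0.95])]
print(f"5-95% discharge bounds: {lo:.0f}-{hi:.0f} m^3/s")
```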

  5. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission...... observed data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery...... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a...

  6. Automatic Curation of SBML Models based on their ODE Semantics

    OpenAIRE

    Fages, Francois; Gay, Steven; Soliman, Sylvain

    2012-01-01

    Many models in Systems Biology are described as a system of Ordinary Differential Equations. The fact that the Systems Biology Markup Language SBML has become a standard for sharing and publishing models, has helped in making modelers formalize the structure of the reactions and use structure-related methods for reasoning about models. Unfortunately, SBML does not enforce any coherence between the structure and the kinetics of a reaction. Therefore the structural interpretation of models tran...

  7. Obtaining reliable likelihood ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    2014-01-01

    programs - to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). Problem 1: Inconsistent LR tests due to asymmetric draws: This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters this practice is very...... likely to cause misleading test results for the number of draws usually used today. The paper illustrates that increasing the number of draws is a very inefficient solution strategy requiring very large numbers of draws to ensure against misleading test statistics. The main conclusion of this paper is...

  8. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

It is well known that in some situations it is not easy to compute the likelihood function, as the datasets might be large or the model too complex. In such contexts the composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods is used as a procedure for testing in the context of composite likelihood. In this paper we intro...

  9. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates the modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of its application to analyze the behavior of an automatic banking machine and an elevator control system are presented.

  10. Automatic Generation of Predictive Dynamic Models Reveals Nuclear Phosphorylation as the Key Msn2 Control Mechanism

    OpenAIRE

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-01-01

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. Here, we describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model, and automatically generates a set of simpler models compatible with observational data. As a proof-of-principle, we analyzed the dynamic control o...

  11. A Study of Automatic Migration of Programs Across the Java Event Models

    OpenAIRE

    Kumar, Bharath M; Lakshminarayanan, R.; Srikant, YN

    2000-01-01

Evolution of a framework forces a change in the design of an application that is based on the framework. This was the case when the Java event model changed from the Inheritance model to the Event Delegation model. We summarize our experiences in attempting an automatic and elegant migration across the event models. Further, we also highlight the need for extra documentation in patterns that will help programs evolve better.

  12. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.;

    2014-01-01

    Regional hydrological models are important tools in water resources management. Model prediction uncertainty is primarily due to structural (geological) non-uniqueness which makes sampling of the structural model space necessary to estimate prediction uncertainties. Geological structures and...... heterogeneity, which spatially scarce borehole lithology data may overlook, are well resolved in AEM surveys. This study presents a semi-automatic sequential hydrogeophysical inversion method for the integration of AEM and borehole data into regional groundwater models in sedimentary areas, where sand/ clay...

  13. Evaluating treatment effectiveness under model misspecification: a comparison of targeted maximum likelihood estimation with bias-corrected matching

    OpenAIRE

    Kreif, N.; Gruber, S.; Radice, Rosalba; Grieve, R; J S Sekhon

    2014-01-01

    Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maxi...

  14. Likelihood models for detecting positively selected amino acid sites and applications to the HIV-1 envelope gene.

    OpenAIRE

    Nielsen, R.; Z. Yang

    1998-01-01

    Several codon-based models for the evolution of protein-coding DNA sequences are developed that account for varying selection intensity among amino acid sites. The "neutral model" assumes two categories of sites at which amino acid replacements are either neutral or deleterious. The "positive-selection model" assumes an additional category of positively selected sites at which nonsynonymous substitutions occur at a higher rate than synonymous ones. This model is also used to identify target s...

  15. The Maximum Likelihood Threshold of a Graph

    OpenAIRE

    Gross, Elizabeth; Sullivant, Seth

    2014-01-01

    The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $n-1$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...

  16. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.

  17. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

    Directory of Open Access Journals (Sweden)

    Luan Yihui

    2009-09-01

    Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

  18. Automatic Formal Framework of Coercion-resistance in Internet Voting Protocols with CryptoVerif in Computational Model

    OpenAIRE

    Bo Meng

    2012-01-01

Automatic proof of internet voting protocols is a hot topic in the security protocol world. To the best of our knowledge, no analysis of coercion-resistance in internet voting protocols with an automatic tool in the computational model exists to date. In this study, we therefore propose an automatic framework for coercion-resistance in internet voting protocols based on the computational model with an active adversary. In the proposed framework observational equivalence is used to formalize coerc...

  19. Automatic model-based face reconstruction and recognition

    OpenAIRE

    Breuer, Pia

    2011-01-01

Three-dimensional Morphable Models (3DMM) are known to be valuable tools for both face reconstruction and face recognition. These models are particularly relevant in safety applications or Computer Graphics. In this thesis, contributions are made to address the major difficulties preceding and during the fitting process of the Morphable Model in the framework of a fully automated system. It is shown to what extent the reconstruction and recognition results depend on the initialization and wha...

  20. Automatic Generation of 3D Building Models for Sustainable Development

    OpenAIRE

    Sugihara, Kenichi

    2015-01-01

3D city models are important in urban planning for sustainable development. Urban planners draw maps for efficient land use and a compact city. 3D city models based on these maps are quite effective for understanding what the image of a sustainable city will be if an alternative plan is realized. However, enormous time and labour have to be spent to create these 3D models, using 3D modelling software such as 3ds Max or SketchUp. In order to automate the laborious steps, a GIS and CG inte...

  1. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    Science.gov (United States)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  2. BIVOPROB: A Computer Program for Maximum-Likelihood Estimation of Bivariate Ordered-Probit Models for Censored Data

    OpenAIRE

    Calhoun, C. A.

    1989-01-01

    Despite the large number of models devoted to the statistical analysis of censored data, relatively little attention has been given to the case of censored discrete outcomes. In this paper, the author presents a technical description and user's guide to a computer program for estimating bivariate ordered-probit models for censored and uncensored data. The model and program are currently being applied in an analysis of World Fertility Survey data for Europe and the United States, and the resul...

  3. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphael Georges

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
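    As a small, tangential illustration of likelihood estimation for extremes (a univariate block-maxima analogue, not the multivariate estimators assessed in the record above), the sketch below fits a generalized extreme-value distribution by maximum likelihood with SciPy; the simulated data and parameter values are invented for the example.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    # Synthetic block maxima: assumed true shape c=-0.2 (SciPy's sign convention), loc=10, scale=2
    data = genextreme.rvs(c=-0.2, loc=10.0, scale=2.0, size=500, random_state=rng)

    # Maximum likelihood fit of the GEV distribution to the block maxima
    c_hat, loc_hat, scale_hat = genextreme.fit(data)
    print(f"shape={c_hat:.3f}, loc={loc_hat:.3f}, scale={scale_hat:.3f}")

    # A 100-block return level, i.e. the 0.99 quantile of the fitted GEV
    print("100-block return level:", genextreme.ppf(0.99, c_hat, loc_hat, scale_hat))
    ```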

  4. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    Science.gov (United States)

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  5. Towards automatic Markov reliability modeling of computer architectures

    Science.gov (United States)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model, formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
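    For illustration only (this is not the ARM program itself), the following sketch builds a tiny continuous-time Markov reliability model of a duplex system with repair and evaluates it with a matrix exponential; the failure and repair rates are assumed values chosen for the example.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # States: 0 = both units up, 1 = one unit up / one in repair, 2 = system failed (absorbing).
    lam = 1e-3   # per-unit failure rate [1/h]  (assumed value)
    mu = 1e-1    # repair rate [1/h]            (assumed value)

    Q = np.array([
        [-2 * lam, 2 * lam, 0.0],
        [mu, -(mu + lam), lam],
        [0.0, 0.0, 0.0],          # failure state kept absorbing to get reliability, not availability
    ])

    p0 = np.array([1.0, 0.0, 0.0])   # start with both units working
    for t in (10.0, 100.0, 1000.0):  # mission times in hours
        p_t = p0 @ expm(Q * t)       # state probabilities at time t
        print(f"R({t:>6.0f} h) = {1.0 - p_t[2]:.6f}")
    ```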

  6. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Full text: The first computational models representing human anatomy were mathematical phantoms, but these were still far from accurate representations of the human body. These models have been used with radiation transport codes (Monte Carlo) to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models based on the real anatomy, few child models built from individual CT or MRI data have been reported [1,3]. For paediatric dosimetry purposes, a large range of voxel models by age is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of models available arises from the small number of CT or MRI data sets of children available and the long time required to segment the data sets. The existing models have been constructed by manual segmentation slice by slice and by using simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques like thresholding or simple edge detection. Until now, there has been no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of paediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using paediatric CT data.

  7. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, the test task scheduling can be analyzed to prevent deadlock or resource conflicts. Finally, the paper analyzes the feasibility of this method.

  8. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    Science.gov (United States)

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  19. AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGERY

    OpenAIRE

    T. Partovi; H. Arefi; T. Krauß; P. Reinartz

    2013-01-01

    Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted a lot of interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo ...

  20. AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGERY

    OpenAIRE

    T. Partovi; H. Arefi; T. Krauß; P. Reinartz

    2013-01-01

    Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted a lot of interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by st...

  11. Revisiting the Steam-Boiler Case Study with LUTESS : Modeling for Automatic Test Generation

    OpenAIRE

    Papailiopoulou, Virginia; Seljimi, Besnik; Parissis, Ioannis

    2009-01-01

    LUTESS is a testing tool for synchronous software that makes it possible to automatically build test data generators. The latter rely on a formal model of the program environment composed of a set of invariant properties, assumed to hold for every software execution. Additional assumptions can be used to guide the test data generation. The environment descriptions together with the assumptions constitute a test model of the program. In this paper, we apply this modeling...

  12. Automatic generation of computable implementation guides from clinical information models

    OpenAIRE

    Boscá Tomás, Diego; Maldonado Segura, José Alberto; Moner Cano, David; Robles Viejo, Montserrat

    2015-01-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and trans...

  13. Statistical Language Modeling for Automatic Speech Recognition of Agglutinative Languages

    OpenAIRE

    Arısoy, Ebru; Kurimo, Mikko; Saraçlar, Murat; Hirsimäki, Teemu; Pylkkönen, Janne; Alumäe, Tanel; Sak, Haşim

    2008-01-01

    This work presents statistical language models trained on different agglutinative languages utilizing a lexicon based on the recently proposed unsupervised statistical morphs. The significance of this work is that similarly generated sub-word unit lexica are developed and successfully evaluated in three different LVCSR systems in different languages. In each case the morph-based approach is at least as good as or better than a very large vocabulary word-based LVCSR language model. Even though usi...

  14. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Full Text Available Feature-based NC machining, which requires a high-quality 3D CAD model, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors. As a result, the user has to manually check for errors with great effort before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  15. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    Science.gov (United States)

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we had previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions. PMID:24623466

  16. Semi-Automatic Modelling of Building FAÇADES with Shape Grammars Using Historic Building Information Modelling

    Science.gov (United States)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  17. A semi-automatic model for sinkhole identification in a karst area of Zhijin County, China

    Science.gov (United States)

    Chen, Hao; Oguchi, Takashi; Wu, Pan

    2015-12-01

    The objective of this study is to investigate the use of DEMs derived from ASTER and SRTM remote sensing images and topographic maps to detect and quantify natural sinkholes in a karst area in Zhijin county, southwest China. Two methodologies were implemented. The first is a semi-automatic approach which identifies depressions from DEMs stepwise: 1) DEM acquisition; 2) sink filling; 3) sink depth calculation using the difference between the original and sink-free DEMs; and 4) elimination of spurious sinkholes by threshold values of morphometric parameters including TPI (topographic position index), geology, and land use. The second is the traditional visual interpretation of depressions based on the integrated analysis of high-resolution aerial photographs and topographic maps. The threshold values of depression area, shape, depth and TPI appropriate for distinguishing true depressions were obtained from the maximum overall accuracy resulting from the comparison between the depression maps produced by the semi-automatic model and by visual interpretation. The result shows that the best performance of the semi-automatic model for meso-scale karst depression delineation was achieved using the DEM from the topographic maps with the thresholds area >~ 60 m2, ellipticity >~ 0.2 and TPI <= 0. With these realistic thresholds, the accuracy of the semi-automatic model ranges from 0.78 to 0.95 for DEM resolutions from 3 to 75 m.
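    As a rough, hedged sketch of the fill-difference idea behind such semi-automatic depression mapping (not the published workflow itself), the code below fills sinks in a DEM by morphological reconstruction, takes the fill depth, and keeps connected regions that pass simple depth and area thresholds; the thresholds echo the values quoted in the record, while the synthetic DEM, cell size and helper name are assumptions for the example.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.morphology import reconstruction

    def candidate_sinkholes(dem, cell_size=3.0, min_depth=0.5, min_area_m2=60.0):
        """Depression candidates from a DEM: fill sinks, compute the fill depth,
        then keep connected regions that pass simple depth/area thresholds."""
        # Morphological sink filling: reconstruct by erosion from a seed that equals
        # the DEM maximum everywhere except on the border.
        seed = np.copy(dem)
        seed[1:-1, 1:-1] = dem.max()
        filled = reconstruction(seed, dem, method="erosion")

        depth = filled - dem                     # fill depth of every cell
        labels, n = ndimage.label(depth > min_depth)

        keep = np.zeros_like(labels, dtype=bool)
        for region in range(1, n + 1):
            mask = labels == region
            area = mask.sum() * cell_size ** 2
            if area >= min_area_m2:              # area rule; shape/TPI rules could follow here
                keep |= mask
        return keep

    dem = np.random.default_rng(2).normal(100.0, 1.0, (200, 200)).round(2)
    print(candidate_sinkholes(dem).sum(), "cells flagged")
    ```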

  18. On Automatic Modeling and Use of Domain-specific Ontologies

    DEFF Research Database (Denmark)

    Andreasen, Troels; Knappe, Rasmus; Bulskov, Henrik

    2005-01-01

    In this paper, we firstly introduce an approach to the modeling of a domain-specific ontology for use in connection with a given document collection. Secondly, we present a methodology for deriving conceptual similarity from the domain-specific ontology. Adopted for ontology representation is a s...

  19. Maximum Likelihood Methods in Treating Outliers and Symmetrically Heavy-Tailed Distributions for Nonlinear Structural Equation Models with Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Xia, Ye-Mao

    2006-01-01

    By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…

  20. Automatic Relevance Determination for multi-way models

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    parameters and learning the hyperparameters of these priors the method is able to turn off excess components and simplify the core structure at a computational cost of fitting the conventional Tucker/CP model. To investigate the impact of the choice of priors we based the ARD on both Laplace and Gaussian priors corresponding to regularization by the sparsity promoting L1-norm and the conventional L2-norm, respectively. While the form of the priors had limited effect on the results obtained the ARD approach turned out to form a useful, simple, and efficient tool for selecting the adequate number of components of data within the Tucker and CP structure. For the Tucker and CP model the approach performs better than heuristics such as the Bayesian Information Criterion, Akaikes Information Criterion, DIFFIT and the numerical convex hull (NumConvHull) while operating only at the cost of estimating an...
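    For a flavour of how automatic relevance determination "turns off" excess components, here is a hedged linear-regression analogue using scikit-learn's ARDRegression (not the Tucker/CP setting of the record): irrelevant candidate coefficients are shrunk to essentially zero by the learned per-coefficient precisions. The data, sizes and true weights are invented for the example.

    ```python
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(3)
    n, p, p_true = 200, 15, 3
    X = rng.standard_normal((n, p))
    w = np.zeros(p)
    w[:p_true] = [2.0, -1.5, 0.8]                 # only 3 of 15 candidate components are real
    y = X @ w + 0.1 * rng.standard_normal(n)

    ard = ARDRegression()
    ard.fit(X, y)
    # Components with huge learned precision get weights shrunk to ~0, i.e. "turned off"
    print(np.round(ard.coef_, 3))
    ```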

  1. Using automatic differentiation in sensitivity analysis of nuclear simulation models.

    Energy Technology Data Exchange (ETDEWEB)

    Alexe, M.; Roderick, O.; Anitescu, M.; Utke, J.; Fanning, T.; Hovland, P.; Virginia Tech.

    2010-01-01

    Sensitivity analysis is an important tool in the study of nuclear systems. In our recent work, we introduced a hybrid method that combines sampling techniques with first-order sensitivity analysis to approximate the effects of uncertainty in parameters of a nuclear reactor simulation model. For elementary examples, the approach offers a substantial advantage (in precision, computational efficiency, or both) over classical methods of uncertainty quantification.
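    To make the idea of derivative-based sensitivity concrete, here is a toy forward-mode automatic-differentiation sketch using dual numbers (not the tools or reactor models used in the work above): the "simulation output" is a made-up scalar function of one parameter, and its value and sensitivity are propagated together through the computation.

    ```python
    import math

    class Dual:
        """Minimal forward-mode AD value: carries f(x) and df/dx together."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__
        def exp(self):
            e = math.exp(self.val)
            return Dual(e, e * self.dot)

    def response(k):
        """Toy 'simulation output' as a function of one model parameter k (hypothetical)."""
        return (2.0 * k + k * k).exp()            # exp(2k + k^2), stands in for a model response

    k0 = 0.3
    out = response(Dual(k0, 1.0))                  # seed dk/dk = 1 to get the sensitivity
    print("value      :", out.val)
    print("sensitivity:", out.dot)                 # analytic: (2 + 2*k0) * exp(2*k0 + k0^2)
    ```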

  2. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. PMID:25910958

  3. Mathematical modelling and quality indices optimization of automatic control systems of reactor facility

    International Nuclear Information System (INIS)

    The mathematical modelling of automatic control systems of the WWER-1000 reactor facility with various regulator types is considered. Linear and nonlinear models of the neutron power control systems of the WWER-1000 nuclear reactor with various numbers of delayed-neutron groups are developed. The results of optimizing the direct quality indexes of the neutron power control systems of the WWER-1000 nuclear reactor are presented. The identification and optimization of the steam generator level control systems with various regulator types are also performed.

  4. Automatic, Global and Dynamic Student Modeling in a Ubiquitous Learning Environment

    OpenAIRE

    Sabine Graf; Guangbing Yang; Tzu-Chien Liu; Kinshuk

    2009-01-01

    Ubiquitous learning allows students to learn at any time and any place. Adaptivity plays an important role in ubiquitous learning, aiming at providing students with adaptive and personalized learning material, activities, and information at the right place and the right time. However, for providing rich adaptivity, the student model needs to be able to gather a variety of information about the students. In this paper, an automatic, global, and dynamic student modeling approach is introduced, ...

  5. Multiphase Modelling of a Gas Storage in Aquifer with Automatic Calibration and Confidence Limits

    OpenAIRE

    Thiéry, Dominique; Guedeney, Karine

    1999-01-01

    Multiphase flow modelling involving gas and water is widely used in gas dissolution in aquifers or in aquifer gas storage. The parameters related to the gas are usually well known but the parameters of the aquifer system are not. In order to obtain reliable forecasts, it is necessary to calibrate the multiphase model on monitored data. This can be done by automatic calibration followed by the determination of the confidence limits of the parameters, and of the confidence limits of the forecas...

  6. The ACR-program for automatic finite element model generation for part through cracks

    International Nuclear Information System (INIS)

    The ACR-program (Automatic Finite Element Model Generation for Part Through Cracks) has been developed at the Technical Research Centre of Finland (VTT) for automatic finite element model generation for surface flaws using three dimensional solid elements. Circumferential or axial cracks can be generated on the inner or outer surface of a cylindrical or toroidal geometry. Several crack forms are available including the standard semi-elliptical surface crack. The program can be used in the development of automated systems for fracture mechanical analyses of structures. The tests for the accuracy of the FE-mesh have been started with two-dimensional models. The results indicate that the accuracy of the standard mesh is sufficient for practical analyses. Refinement of the standard mesh is needed in analyses with high load levels well over the limit load of the structure

  7. Automatic Assessment of Craniofacial Growth in a Mouse Model of Crouzon Syndrome

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Larsen, Rasmus; Darvann, Tron Andre; Ólafsdóttir, Hildur; Paulsen, Rasmus Reinhold; Hermann, Nuno Vibe; Larsen, Per; Perlyn, Chad A.; Kreiborg, Sven

    ...the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes of Crouzon syndrome as well as providing knowledge relevant for surgery planning. METHODS: Automatic ... growth vectors for each mouse-type; growth models were created using linear interpolation and visualized as 3D animations. Spatial regions of significantly different growth were identified using the local False Discovery Rate method, estimating the expected percentage of false predictions in a set of ... CONCLUSIONS: Image registrations made it possible to automatically quantify and visualize average craniofacial growth in normal and Crouzon mouse models, and significantly different growth patterns were found between the two. The methodology generalizes to quantification of shape and growth in other mouse...

  8. Automatic Assessment of Craniofacial Growth in a Mouse Model of Crouzon Syndrome

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Larsen, Rasmus; Darvann, Tron Andre;

    2009-01-01

    ...the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes of Crouzon syndrome as well as providing knowledge relevant for surgery planning. METHODS: Automatic ... growth vectors for each mouse-type; growth models were created using linear interpolation and visualized as 3D animations. Spatial regions of significantly different growth were identified using the local False Discovery Rate method, estimating the expected percentage of false predictions in a set of ... CONCLUSIONS: Image registrations made it possible to automatically quantify and visualize average craniofacial growth in normal and Crouzon mouse models, and significantly different growth patterns were found between the two. The methodology generalizes to quantification of shape and growth in other mouse...

  9. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry

    International Nuclear Information System (INIS)

    The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Processes, was recently developed and integrated into MCAM 5.2. This method can convert in both directions between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are then generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)

  10. Automatic Navigation for Rat-Robots with Modeling of the Human Guidance

    Institute of Scientific and Technical Information of China (English)

    Chao Sun; Nenggan Zheng; Xinlu Zhang; Weidong Chen; Xiaoxiang Zheng

    2013-01-01

    A bio-robot system refers to an animal equipped with a Brain-Computer Interface (BCI), through which outer stimulation is delivered directly into the animal's brain to control its behaviors. The development of bio-robots suffers from the dependency on real-time guidance by human operators. Because of its inherent difficulties, there is no feasible method for automatic control of bio-robots yet. In this paper, we propose a new method to realize automatic navigation for bio-robots. A General Regression Neural Network (GRNN) is adopted to analyze and model the controlling procedure of human operations. Compared to traditional approaches with explicit controlling rules, our algorithm learns the controlling process and imitates the decision-making of human beings to steer the rat-robot automatically. In real-time navigation experiments, our method successfully controls bio-robots to follow given paths automatically and precisely. This work would be significant for future applications of bio-robots and provides a new way to realize hybrid intelligent systems with artificial intelligence and natural biological intelligence combined together.
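    Since a GRNN is essentially a kernel-weighted average of training outputs, a minimal sketch of the prediction step is easy to give; everything below (the toy state/command encoding, the bandwidth, the pretend operator policy) is invented for illustration and is not the authors' setup.

    ```python
    import numpy as np

    def grnn_predict(x_train, y_train, x_query, sigma=0.1):
        """General Regression Neural Network: a Nadaraya-Watson kernel-weighted
        average of the training outputs (here, stimulation commands)."""
        d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma**2))
        return (w @ y_train) / w.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(10)
    # Toy state: (heading error, distance to path); toy command: left/right stimulation strength
    states = rng.uniform(-1, 1, (500, 2))
    commands = np.column_stack([np.clip(-states[:, 0], 0, None),   # pretend operator policy
                                np.clip(states[:, 0], 0, None)])
    new_states = rng.uniform(-1, 1, (3, 2))
    print(grnn_predict(states, commands, new_states).round(3))
    ```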

  11. Automatic reconstruction of physiological gestures used in a model of birdsong production.

    Science.gov (United States)

    Boari, Santiago; Perl, Yonatan Sanz; Amador, Ana; Margoliash, Daniel; Mindlin, Gabriel B

    2015-11-01

    Highly coordinated learned behaviors are key to understanding neural processes integrating the body and the environment. Birdsong production is a widely studied example of such behavior in which numerous thoracic muscles control respiratory inspiration and expiration: the muscles of the syrinx control syringeal membrane tension, while upper vocal tract morphology controls resonances that modulate the vocal system output. All these muscles have to be coordinated in precise sequences to generate the elaborate vocalizations that characterize an individual's song. Previously we used a low-dimensional description of the biomechanics of birdsong production to investigate the associated neural codes, an approach that complements traditional spectrographic analysis. The prior study used algorithmic yet manual procedures to model singing behavior. In the present work, we present an automatic procedure to extract low-dimensional motor gestures that could predict vocal behavior. We recorded zebra finch songs and generated synthetic copies automatically, using a biomechanical model for the vocal apparatus and vocal tract. This dynamical model described song as a sequence of physiological parameters the birds control during singing. To validate this procedure, we recorded electrophysiological activity of the telencephalic nucleus HVC. HVC neurons were highly selective to the auditory presentation of the bird's own song (BOS) and gave similar selective responses to the automatically generated synthetic model of song (AUTO). Our results demonstrate meaningful dimensionality reduction in terms of physiological parameters that individual birds could actually control. Furthermore, this methodology can be extended to other vocal systems to study fine motor control. PMID:26378204

  12. Automatic Multi-Scale Calibration Procedure for Nested Hydrological-Hydrogeological Regional Models

    Science.gov (United States)

    Labarthe, B.; Abasq, L.; Flipo, N.; de Fouquet, C. D.

    2014-12-01

    Large hydrosystem modelling and understanding is a complex process depending on regional and local processes. A nested interface concept has been implemented in the hydrosystem modelling platform for a large alluvial plain model (300 km2), part of an 11000 km2 multi-layer aquifer system included in the Seine basin (65000 km2, France). The platform couples hydrological and hydrogeological processes through four spatially distributed modules (Mass balance, Unsaturated Zone, River and Groundwater). An automatic multi-scale calibration procedure is proposed. Using different data sets from the regional scale (117 gauging stations and 183 piezometers over the 65000 km2) to the intermediate scale (a dense past piezometric snapshot), it permits the calibration and homogenization of model parameters across scales. The stepwise procedure starts with the optimisation of the water mass balance parameters at the regional scale using a conceptual 7-parameter bucket model coupled with the inverse modelling tool PEST. The multi-objective function is derived from river discharges and their decomposition by hydrograph separation. The separation is performed at each gauging station using an automatic procedure based on the Chapman filter. Then, the model is run at the regional scale to provide recharge estimates and regional fluxes to the local groundwater model. Another inversion method is then used to determine the local hydrodynamic parameters. This procedure uses an initial kriged transmissivity field which is successively updated until the simulated hydraulic head distribution equals a reference one obtained by kriging. Then, the local parameters are upscaled to the regional model by a renormalisation procedure. This multi-scale automatic calibration procedure enhances the representation of both local and regional processes. Indeed, it permits a better description of local heterogeneities and of the associated processes, which are transposed into the regional model, improving the overall performance.
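    To illustrate what "automatic calibration of a bucket model" can look like in the simplest case (this is a generic SciPy least-squares sketch, not PEST or the nested procedure described above), the code below calibrates two parameters of a made-up daily bucket model against synthetic discharge; the model structure, forcing and parameter names are all assumptions for the example.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def bucket_model(params, rain, pet):
        """Toy bucket: storage fed by rain, drained by evapotranspiration and a
        linear outflow. Returns simulated discharge."""
        capacity, k = params
        storage, q = 0.0, []
        for r, e in zip(rain, pet):
            storage = min(capacity, max(0.0, storage + r - e))
            outflow = k * storage
            storage -= outflow
            q.append(outflow)
        return np.array(q)

    rng = np.random.default_rng(4)
    rain = rng.gamma(0.6, 6.0, 365)          # synthetic daily forcing
    pet = np.full(365, 2.0)
    q_obs = bucket_model([80.0, 0.05], rain, pet) + rng.normal(0, 0.05, 365)

    def residuals(params):
        return bucket_model(params, rain, pet) - q_obs

    fit = least_squares(residuals, x0=[50.0, 0.1], bounds=([1.0, 1e-3], [500.0, 1.0]))
    print("calibrated capacity, k:", np.round(fit.x, 3))
    ```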

  13. Approximate maximum likelihood estimation using data-cloning ABC

    OpenAIRE

    Picchini, Umberto; Anderson, Rachele

    2015-01-01

    A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical target of ABC methods is models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called "data cloning" for maximum likelihood estimation. The accuracy of ABC methods relies on the use of a small threshold value for comparing simulations from the model and observed data. The proposed methodology shows how to use large threshold ...
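    For readers unfamiliar with ABC, here is a plain rejection-ABC sketch (deliberately simpler than the ABC-MCMC/data-cloning scheme of the record): parameter draws are kept when a summary statistic of the simulated data lands close to the observed one. The exponential model, summary statistic and threshold rule are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    theta_true = 2.5
    observed = rng.exponential(scale=theta_true, size=100)   # pretend the likelihood is intractable
    s_obs = observed.mean()                                   # summary statistic

    def simulate(theta, size=100):
        return rng.exponential(scale=theta, size=size)

    # Plain rejection ABC: keep draws whose simulated summary is close to the observed one
    prior_draws = rng.uniform(0.1, 10.0, size=100_000)
    distances = np.abs(np.array([simulate(t).mean() for t in prior_draws]) - s_obs)
    epsilon = np.quantile(distances, 0.001)                   # small acceptance threshold
    accepted = prior_draws[distances <= epsilon]
    print("ABC posterior mean:", accepted.mean(), "| true theta:", theta_true)
    ```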

  14. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    OpenAIRE

    Fabiano Azevedo DORÇA; Luciano Vieira LIMA; Márcia Aparecida FERNANDES; Carlos Roberto LOPES

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and precisely adjust students' learning styles, based on the non-deterministic and non-stationary aspects of learning styles. Because of the probabilistic an...

  15. On the automatic compilation of e-learning models to planning

    OpenAIRE

    Garrido Tejero, Antonio; Morales, Lluvia; Fernandez, Susana; Onaindia De La Rivaherrera, Eva; Borrajo, Daniel; Castillo, Luis

    2013-01-01

    This paper presents a general approach to automatically compile e-learning models to planning, allowing us to easily generate plans, in the form of learning designs, by using existing domain-independent planners. The idea is to compile, first, a course defined in a standard e-learning language into a planning domain, and, second, a file containing students learning information into a planning problem. We provide a common compilation and extend it to three particular approaches that cover a fu...

  16. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    OpenAIRE

    Luo Hanwu; Li Mengke; Xu Xinyao; Cui Shigang; Han Yin; Yan Kai; Wang Jing; Le Jian

    2016-01-01

    This paper presents a novel method to obtain the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, the executable ATP simulation model is generated automatically according to the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, etc., through an interface p...

  17. Improvements of Continuous Model for Memory-based Automatic Music Transcription

    Czech Academy of Sciences Publication Activity Database

    Albrecht, Š.; Šmídl, Václav

    Aalborg: Eurasip, 2010, s. 487-491. ISSN 2076-1465. [European signal processing conference. Aalborg (DK), 23.07.2010-27.07.2010] R&D Projects: GA ČR GP102/08/P250 Institutional research plan: CEZ:AV0Z10750506 Keywords : music transcription * extended Kalman filter Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2010/AS/smidl-improvements of continuous model for memory-based automatic music transcription.pdf

  18. Automatic Segmentation Framework of Building Anatomical Mouse Model for Bioluminescence Tomography

    OpenAIRE

    Abdullah Alali

    2013-01-01

    Bioluminescence tomography is known as a highly ill-posed inverse problem. To improve the reconstruction performance by introducing anatomical structures as a priori knowledge, an automatic segmentation framework is proposed in this paper to extract the mouse whole-body organs and tissues, which makes it possible to build a heterogeneous mouse model for the reconstruction of bioluminescence tomography. Finally, an in vivo mouse experiment has been conducted to evaluate this framework by using an X...

  19. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
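    As a much simpler cousin of the Monte Carlo EM machinery described above (deterministic EM on a tractable model, not MCEM2), the sketch below runs the classic E- and M-steps for a two-component Gaussian mixture with known unit variances; the data and starting values are synthetic.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

    # EM for a two-component Gaussian mixture with known unit variances
    pi, mu1, mu2 = 0.5, -1.0, 1.0            # crude starting values
    for _ in range(200):
        # E-step: responsibility of component 1 for each point
        p1 = pi * norm.pdf(x, mu1, 1.0)
        p2 = (1 - pi) * norm.pdf(x, mu2, 1.0)
        r = p1 / (p1 + p2)
        # M-step: closed-form updates that maximize the expected complete-data log-likelihood
        pi = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()

    print(f"pi={pi:.3f}, mu1={mu1:.3f}, mu2={mu2:.3f}")   # should approach 0.3, -2, 3
    ```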

  20. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    Science.gov (United States)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of only one MR image. It is based on a volumetric active appearance model (AAM). First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrarily selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piecewise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variation, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  1. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic during the last years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows the model reconstruction directly from 3D shapes and takes the whole building into account.

  2. Automatic Creation of Structural Models from Point Cloud Data: the Case of Masonry Structures

    Science.gov (United States)

    Riveiro, B.; Conde-Carnero, B.; González-Jorge, H.; Arias, P.; Caamaño, J. C.

    2015-08-01

    One of the fields where 3D modelling has an important role is in the application of such 3D models to structural engineering purposes. The literature shows an intense activity on the conversion of 3D point cloud data to detailed structural models, which has special relevance in masonry structures where geometry plays a key role. In the work presented in this paper, color data (from Intensity attribute) is used to automatically segment masonry structures with the aim of isolating masonry blocks and defining interfaces in an automatic manner using a 2.5D approach. An algorithm for the automatic processing of laser scanning data based on an improved marker-controlled watershed segmentation was proposed and successful results were found. Geometric accuracy and resolution of point cloud are constrained by the scanning instruments, giving accuracy levels reaching a few millimetres in the case of static instruments and few centimetres in the case of mobile systems. In any case, the algorithm is not significantly sensitive to low quality images because acceptable segmentation results were found in cases where blocks could not be visually segmented.
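    To give a flavour of marker-controlled watershed segmentation of the kind mentioned above (a generic scikit-image sketch, not the authors' improved algorithm), the code below floods a gradient image from intensity-based markers and labels the resulting "block" regions; the intensity thresholds and the random test image are assumptions for the example.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_blocks(intensity):
        """Marker-controlled watershed on a 2.5D intensity image: dark joints become
        watershed lines, bright block interiors become markers."""
        elevation = sobel(intensity)                 # edge strength as the relief to flood
        markers = np.zeros_like(intensity, dtype=int)
        markers[intensity < 0.3] = 1                 # assumed mortar/interface intensity
        markers[intensity > 0.6] = 2                 # assumed block interior intensity
        seg = watershed(elevation, markers)
        blocks, n_blocks = ndi.label(seg == 2)       # one label per block region
        return blocks, n_blocks

    img = np.random.default_rng(9).random((128, 128))
    print("blocks found:", segment_blocks(img)[1])
    ```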

  3. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  4. Verossimilhança na seleção de modelos para predição espacial Likelihood in the selection of models for spatial prediction

    Directory of Open Access Journals (Sweden)

    Cristiano Nunes Nesi

    2013-04-01

    the area from the available measurements on 48 experimental plots located in Xanxerê/SC, with emphasis on the methodological framework. Choices of covariates in the model and of data transformation define four modeling options to be assessed. The Matérn correlation function was used, evaluated at the values 0.5, 1.5 and 2.5 of the smoothness parameter. Models were compared by the maximized logarithm of the likelihood function and also by cross validation. The model with transformed response variable, including the coordinates of the area as covariates and the value 0.5 for the smoothness parameter, was selected. The cross-validation measures did not add relevant information to the likelihood, and the analysis highlights that care must be taken with globally or locally atypical data, as well as the need for an objective choice among different candidate models, which ought to be the focus of geostatistical modelling to ensure results compatible with reality.

  5. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
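    As a stripped-down, hedged companion (a directly observed Ornstein-Uhlenbeck process, so the likelihood is tractable and no simulated EM or bridge sampling is needed), the sketch below computes the exact transition-density likelihood of a simulated OU path and maximizes it numerically; all parameter values are invented for the example.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    theta_true, mu_true, sigma_true, dt, n = 1.5, 0.5, 0.3, 0.1, 2000

    # Simulate a discretely observed Ornstein-Uhlenbeck path using its exact transition density
    x = np.empty(n)
    x[0] = mu_true
    for i in range(1, n):
        mean = mu_true + (x[i - 1] - mu_true) * np.exp(-theta_true * dt)
        var = sigma_true**2 * (1 - np.exp(-2 * theta_true * dt)) / (2 * theta_true)
        x[i] = rng.normal(mean, np.sqrt(var))

    def neg_log_lik(params):
        theta, mu, sigma = params
        if theta <= 0 or sigma <= 0:
            return np.inf
        mean = mu + (x[:-1] - mu) * np.exp(-theta * dt)
        var = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
        return -norm.logpdf(x[1:], mean, np.sqrt(var)).sum()

    fit = minimize(neg_log_lik, x0=[1.0, 0.0, 0.5], method="Nelder-Mead")
    print("theta, mu, sigma =", np.round(fit.x, 3))
    ```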

  6. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, so new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs

  7. GEM System: automatic prototyping of cell-wide metabolic pathway models from genomes

    Directory of Open Access Journals (Sweden)

    Nakayama Yoichi

    2006-03-01

    Full Text Available Abstract Background Successful realization of a "systems biology" approach to analyzing cells is a grand challenge for our understanding of life. However, current modeling approaches to cell simulation are labor-intensive, manual affairs, and therefore constitute a major bottleneck in the evolution of computational cell biology. Results We developed the Genome-based Modeling (GEM) System for the purpose of automatically prototyping simulation models of cell-wide metabolic pathways from genome sequences and other public biological information. Models generated by the GEM System include an entire Escherichia coli metabolism model comprising 968 reactions of 1195 metabolites, achieving 100% coverage when compared with the KEGG database, 92.38% with the EcoCyc database, and 95.06% with the iJR904 genome-scale model. Conclusion The GEM System prototypes qualitative models to reduce the labor-intensive tasks required for systems biology research. Models of over 90 bacterial genomes are available at our web site.

  8. Automatic sleep staging based on ECG signals using hidden Markov models.

    Science.gov (United States)

    Ying Chen; Xin Zhu; Wenxi Chen

    2015-08-01

    This study is designed to investigate the feasibility of automatic sleep staging using features derived only from the electrocardiography (ECG) signal. The study was carried out within the framework of hidden Markov models (HMMs). The mean and SD values of heart rates (HRs) computed from each 30-second epoch served as the features. The two feature sequences were first detrended by ensemble empirical mode decomposition (EEMD), formed into a two-dimensional feature vector, and then converted into code vectors by a vector quantization (VQ) method. The output VQ indexes were utilized to estimate parameters for the HMMs. The proposed model was tested and evaluated on a group of healthy individuals using leave-one-out cross-validation. The automatic sleep staging results were compared with PSG-estimated ones. Results showed accuracies of 82.2%, 76.0%, 76.1% and 85.5% for deep, light, REM and wake sleep, respectively. The findings prove that the HR-based HMM approach is feasible for automatic sleep staging and can pave the way for developing a more efficient, robust, and simple sleep staging system suitable for home application. PMID:26736316
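    A rough sketch of the HMM part of such a pipeline is given below, assuming the third-party hmmlearn package is installed; the per-epoch heart-rate features are synthetic stand-ins and the EEMD detrending and vector-quantization steps of the record are omitted, so this only illustrates fitting a four-state Gaussian HMM and decoding the state sequence.

    ```python
    import numpy as np
    from hmmlearn import hmm   # third-party package, assumed installed

    rng = np.random.default_rng(8)
    # Stand-in features: per-epoch mean and SD of heart rate for a night of 30-s epochs
    n_epochs = 960
    hr_mean = np.concatenate([rng.normal(55, 2, 480), rng.normal(65, 3, 480)])
    hr_sd = np.concatenate([rng.normal(2, 0.5, 480), rng.normal(5, 1.0, 480)])
    X = np.column_stack([hr_mean, hr_sd])

    # Four hidden states as a stand-in for deep / light / REM / wake
    model = hmm.GaussianHMM(n_components=4, covariance_type="full", n_iter=100, random_state=0)
    model.fit(X)
    states = model.predict(X)
    print(np.bincount(states, minlength=4))   # epochs assigned to each hidden state
    ```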

  9. New semi-automatic ROI setting system for brain PET images based on elastic model

    Energy Technology Data Exchange (ETDEWEB)

    Tanizaki, Naoaki; Okamura, Tetsuya (Sumitomo Heavy Industries Ltd., Kanagawa (Japan). Research and Development Center); Senda, Michio; Toyama, Hinako; Ishii, Kenji

    1994-10-01

    We have developed a semi-automatic ROI setting system for brain PET images. It is based on an elastic network model that fits the standard ROI atlas to the individual brain image. The standard ROI atlas is a set of segments that represent each anatomical region. For the transformation, the operator needs to set only three kinds of distinct anatomical features: a manually determined midsagittal line, a brain contour line determined semi-automatically with the SNAKES algorithm, and a few manually determined specific ROIs to be used for exact transformation. Improvements in operation time and inter-operator variance were demonstrated in an experiment comparing the system with conventional manual ROI setting. The operation time was reduced to 50% in almost all cases, and the inter-operator variance was reduced to one seventh in the best case. (author).

  10. Automatic parametrization of implicit solvent models for the blind prediction of solvation free energies

    CERN Document Server

    Wang, Bao; Wei, Guowei

    2016-01-01

    In this work, a systematic protocol is proposed to automatically parametrize implicit solvent models with polar and nonpolar components. The proposed protocol utilizes either the classical Poisson model or the Kohn-Sham density functional theory (KSDFT) based polarizable Poisson model for modeling polar solvation free energies. For the nonpolar component, either the standard model of surface area, molecular volume, and van der Waals interactions, or a model with atomic surface areas and molecular volume is employed. Based on the assumption that similar molecules have similar parametrizations, we develop scoring and ranking algorithms to classify solute molecules. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the Poisson model. A large database with 668 experimental data points is utilized to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five s...

  11. Likelihood analysis of large-scale flows

    CERN Document Server

    Jaffe, A; Jaffe, Andrew; Kaiser, Nick

    1994-01-01

    We apply a likelihood analysis to the data of Lauer & Postman 1994. With P(k) parametrized by (\sigma_8, \Gamma), the likelihood function peaks at \sigma_8 \simeq 0.9, \Gamma \simeq 0.05, indicating at face value very strong large-scale power, though at a level incompatible with COBE. There is, however, a ridge of likelihood such that more conventional power spectra do not seem strongly disfavored. The likelihood calculated using as data only the components of the bulk flow solution peaks at higher \sigma_8, as suggested by other analyses, but is rather broad. The likelihood incorporating both bulk flow and shear gives a different picture. The components of the shear are all low, and this pulls the peak to lower amplitudes as a compromise. The velocity data alone are therefore {\em consistent} with models with very strong large scale power which generates a large bulk flow, but the small shear (which also probes fairly large scales) requires that the power would have to be at {\em very} large scales, which is...

  12. Semi-automatic registration of 3D orthodontics models from photographs

    OpenAIRE

    Destrez, Raphaël; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-01-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT-scan or a laser scanner, the obtained 3D surface models can feed orthodontics numerical tools for computer-aided diagnosis and treatment planning. One of the pre-processing critical steps is the 3D registration of dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision based method to automatically compute the registration based ...

  13. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    International Nuclear Information System (INIS)

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes. (paper)

  14. Nonparametric (smoothed) likelihood and integral equations

    CERN Document Server

    Groeneboom, Piet

    2012-01-01

    We show that there is an intimate connection between the theory of nonparametric (smoothed) maximum likelihood estimators for certain inverse problems and integral equations. This is illustrated by estimators for interval censoring and deconvolution problems. We also discuss the asymptotic efficiency of the MLE for smooth functionals in these models.

  15. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    improves precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today and...

  16. Automatic geometric modeling, mesh generation and FE analysis for pipelines with idealized defects and arbitrary location

    Energy Technology Data Exchange (ETDEWEB)

    Motta, R.S.; Afonso, S.M.B.; Willmersdorf, R.B.; Lyra, P.R.M. [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Cabral, H.L.D. [TRANSPETRO, Rio de Janeiro, RJ (Brazil); Andrade, E.Q. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Although the Finite Element Method (FEM) has proved to be a powerful tool to predict the failure pressure of corroded pipes, the generation of good computational models of pipes with corrosion defects can take several days. This makes the computational simulation procedure difficult to apply in practice. The main purpose of this work is to develop a set of computational tools to automatically produce models of pipes with defects, ready to be analyzed with commercial FEM programs, starting from a few parameters that locate and provide the main dimensions of the defect or a series of defects. These defects can be internal or external and can assume general spatial locations along the pipe. Idealized rectangular and elliptic geometries can be generated. The tools were based on the MSC.PATRAN pre- and post-processing programs and were written in PCL (Patran Command Language). The program for the automatic generation of models (PIPEFLAW) has a simplified and customized graphical interface, so that an engineer with basic notions of computational simulation with the FEM can rapidly generate models that result in precise and reliable simulations. Some examples of models of pipes with defects generated by the PIPEFLAW system are shown, and the results of numerical analyses, done with the tools presented in this work, are compared with empirical results. (author)

  17. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to lead to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion when mapping 2D textures onto the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending the texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method can effectively mitigate texture fragmentation and demonstrate that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.
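
    As a small illustration of the blending step, the sketch below feathers two overlapping texture strips with a linear weight across the overlap; the authors' actual blending scheme and data are not reproduced, so the arrays and the overlap width are made up.

```python
# Minimal sketch of seam blending between two overlapping texture strips:
# a linear feather weight across the overlap removes hard color steps.
# Generic illustration only, not the specific blending used in the paper.
import numpy as np

def feather_blend(tex_a, tex_b, overlap):
    """tex_a and tex_b are HxWx3 arrays; their last/first `overlap` columns coincide."""
    h, w, _ = tex_a.shape
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]      # weight for tex_a
    blended = alpha * tex_a[:, w - overlap:, :] + (1 - alpha) * tex_b[:, :overlap, :]
    return np.concatenate([tex_a[:, :w - overlap, :], blended,
                           tex_b[:, overlap:, :]], axis=1)

a = np.full((4, 8, 3), 0.2)        # dark strip (made-up data)
b = np.full((4, 8, 3), 0.8)        # bright strip
print(feather_blend(a, b, overlap=4).shape)   # (4, 12, 3)
```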

  18. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    Science.gov (United States)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  19. AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGARY

    Directory of Open Access Journals (Sweden)

    T. Partovi

    2013-09-01

    Full Text Available Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted considerable interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method that achieves high-quality models and complete automation from such a DSM, a new method based on a model-driven strategy is proposed in this paper. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of the method is based on ridge line extraction and the analysis of height values in the direction of, and perpendicular to, the ridge line. After pre-processing the orthorectified data, feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. These ridge lines are then refined by matching them or closing gaps. To select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted with Canny edge detection and parameters are derived from the roof parts. The best model of the detected type is then fitted to the extracted roof faces. Each roof is modelled independently and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.

  20. Automatic Model Selection for 3d Reconstruction of Buildings from Satellite Imagary

    Science.gov (United States)

    Partovi, T.; Arefi, H.; Krauß, T.; Reinartz, P.

    2013-09-01

    Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted considerable interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method that achieves high-quality models and complete automation from such a DSM, a new method based on a model-driven strategy is proposed in this paper. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of the method is based on ridge line extraction and the analysis of height values in the direction of, and perpendicular to, the ridge line. After pre-processing the orthorectified data, feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. These ridge lines are then refined by matching them or closing gaps. To select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted with Canny edge detection and parameters are derived from the roof parts. The best model of the detected type is then fitted to the extracted roof faces. Each roof is modelled independently and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.
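
    The RANSAC step mentioned in both abstracts, fitting a line to a group of candidate ridge points, can be sketched as follows; the point set, the iteration count and the inlier tolerance are illustrative assumptions, not values from the paper.

```python
# Minimal RANSAC sketch for fitting a 2D line to candidate ridge points,
# in the spirit of the ridge-line step described above (illustrative only).
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the line through p and q.
        dist = np.abs(d[0] * (points[:, 1] - p[1]) - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the line to all inliers with least squares: y = a*x + b.
    x, y = points[best_inliers].T
    a, b = np.polyfit(x, y, 1)
    return a, b, best_inliers

pts = np.column_stack([np.linspace(0, 10, 50), 2.0 * np.linspace(0, 10, 50) + 1.0])
pts += np.random.default_rng(1).normal(0, 0.1, pts.shape)
pts = np.vstack([pts, [[3, 20], [7, -5]]])           # two gross outliers
print(ransac_line(pts)[:2])                          # roughly (2.0, 1.0)
```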

  1. Automatic Extraction of IndoorGML Core Model from OpenStreetMap

    Science.gov (United States)

    Mirvahabi, S. S.; Abbaspour, R. A.

    2015-12-01

    Navigation has become an essential component of human life and a necessary component in many fields. Because of the increasing size and complexity of buildings, a unified data model for navigation analysis and exchange of information is needed. IndoorGML provides such a data model and an XML schema of indoor spatial information that focuses on modelling indoor spaces. Collecting spatial data through professional and commercial providers is often costly and time-consuming, which is a major reason that volunteered geographic information (VGI) emerged. One of the most popular VGI projects is OpenStreetMap (OSM). In this paper, a new approach is proposed for the automatic generation of an IndoorGML core data file from an OSM data file. The output of this approach is a core data model file that can be used alongside the navigation data model for indoor navigation applications.

  2. An inverse radiative transfer model of the vegetation canopy based on automatic differentiation

    International Nuclear Information System (INIS)

    This paper presents an inverse model of radiation transfer processes occurring in the solar domain in vegetation plant canopies. It uses a gradient method to minimize the misfit between model simulation and observed radiant fluxes plus the deviation from prior information on the unknown model parameters. The second derivative of the misfit approximates uncertainty ranges for the estimated model parameters. In a second step, uncertainties are propagated from parameters to simulated radiant fluxes via the model's first derivative. All derivative information is provided by a highly efficient code generated via automatic differentiation of the radiative transfer code. The paper further derives and evaluates an approach for avoiding secondary minima of the misfit. The approach exploits the smooth dependence of the solution on the observations, and relies on a database of solutions for a discretized version of the observation space
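
    The structure described above, automatic differentiation of a forward model to drive a gradient-based misfit minimization and to approximate uncertainties, can be sketched with JAX as below. The two-parameter forward model, the observations and the prior values are all made-up stand-ins for the actual radiative transfer code.

```python
# Minimal sketch of gradient-based misfit minimization with automatic
# differentiation (JAX). The forward model is a toy stand-in for the
# radiative transfer code; all numbers are invented.
import numpy as np
from scipy.optimize import minimize
import jax.numpy as jnp
from jax import grad, hessian, jacfwd

obs = jnp.array([0.40, 0.27, 0.12])           # "observed" radiant fluxes (made up)
prior_mean = jnp.array([0.5, 1.0])            # prior guess for (lai, albedo)
prior_sigma = jnp.array([0.3, 0.5])
obs_sigma = 0.02

def forward(params):
    lai, albedo = params                      # hypothetical canopy parameters
    return albedo * jnp.exp(-lai * jnp.array([0.5, 1.0, 2.0]))

def misfit(params):
    data_term = jnp.sum(((forward(params) - obs) / obs_sigma) ** 2)
    prior_term = jnp.sum(((params - prior_mean) / prior_sigma) ** 2)
    return 0.5 * (data_term + prior_term)

# Minimize the misfit with gradients supplied by automatic differentiation.
res = minimize(lambda p: float(misfit(jnp.asarray(p))),
               x0=np.asarray(prior_mean),
               jac=lambda p: np.asarray(grad(misfit)(jnp.asarray(p))),
               method="BFGS")
params = jnp.asarray(res.x)

# Second derivative of the misfit -> parameter uncertainty;
# first derivative of the forward model -> uncertainty of simulated fluxes.
param_cov = jnp.linalg.inv(hessian(misfit)(params))
J = jacfwd(forward)(params)
flux_cov = J @ param_cov @ J.T
print(params, jnp.sqrt(jnp.diag(param_cov)), jnp.sqrt(jnp.diag(flux_cov)))
```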

  3. Prediction model for outcome after low-back surgery: individualized likelihood of complication, hospital readmission, return to work, and 12-month improvement in functional disability.

    Science.gov (United States)

    McGirt, Matthew J; Sivaganesan, Ahilan; Asher, Anthony L; Devin, Clinton J

    2015-12-01

    OBJECT Lumbar spine surgery has been demonstrated to be efficacious for many degenerative spine conditions. However, there is wide variability in outcome after spine surgery at the individual patient level. All stakeholders in spine care will benefit from identification of the unique patient or disease subgroups that are least likely to benefit from surgery, are prone to costly complications, and have increased health care utilization. There remains a large demand for individual patient-level predictive analytics to guide decision support to optimize outcomes at the patient and population levels. METHODS One thousand eight hundred three consecutive patients undergoing spine surgery for various degenerative lumbar diagnoses were prospectively enrolled and followed for 1 year. A comprehensive patient interview and health assessment was performed at baseline and at 3 and 12 months after surgery. All predictive covariates were selected a priori. Eighty percent of the sample was randomly selected for model development, and 20% for model validation. Linear regression was performed with Bayesian model averaging to model 12-month ODI (Oswestry Disability Index). Logistic regression with Bayesian model averaging was used to model likelihood of complications, 30-day readmission, need for inpatient rehabilitation, and return to work. Goodness-of-fit was assessed via R² for 12-month ODI and via the c-statistic, area under the receiver operating characteristic curve (AUC), for the categorical endpoints. Discrimination (predictive performance) was assessed using R² for the ODI model and the c-statistic for the categorical endpoint models. Calibration was assessed using a plot of predicted versus observed values for the ODI model and the Hosmer-Lemeshow test for the categorical endpoint models. RESULTS On average, all patient-reported outcomes (PROs) were improved after surgery (ODI baseline vs 12 month: 50.4 vs 29.5%, p < 0.001). Complications occurred in 121 patients (6
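
    A minimal sketch of the kind of categorical-endpoint modelling and c-statistic (AUC) reporting described above is given below, using plain logistic regression on synthetic covariates rather than the study's Bayesian model averaging; the covariates and coefficients are invented for illustration.

```python
# Minimal sketch: model a binary surgical endpoint and report the c-statistic
# (AUC) on a held-out validation split. Plain logistic regression stands in
# for Bayesian model averaging; the data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1803
X = np.column_stack([
    rng.normal(55, 12, n),        # age (hypothetical covariate)
    rng.normal(29, 5, n),         # BMI
    rng.integers(0, 2, n),        # smoker flag
])
logit = -6.0 + 0.04 * X[:, 0] + 0.06 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # e.g. 30-day readmission

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
print("c-statistic (AUC):", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```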

  4. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary continuously, even within the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  5. A 6D CAD Model for the Automatic Assessment of Building Sustainability

    Directory of Open Access Journals (Sweden)

    Ping Yung

    2014-08-01

    Full Text Available Current building assessment methods limit themselves to environmental impact, failing to consider the other two aspects of sustainability: the economic and the social. They tend to be complex and costly to run, and therefore are of limited value in comparing design options. This paper proposes and develops a model for the automatic assessment of a building's life cycle sustainability with the building information modelling (BIM) approach and its enabling technologies. A 6D CAD model is developed which could be used as a design aid instead of as a post-construction evaluation tool. 6D CAD includes 3D design as well as a fourth dimension (schedule), a fifth dimension (cost) and a sixth dimension (sustainability). The model can automatically derive quantities (5D), calculate economic (5D and 6D), environmental and social impacts (6D), and evaluate the sustainability performance of alternative design options. The sustainability assessment covers the life cycle stages of a building, namely material production, construction, operation, maintenance, demolition and disposal.

  6. Automatic, Global and Dynamic Student Modeling in a Ubiquitous Learning Environment

    Directory of Open Access Journals (Sweden)

    Sabine Graf

    2009-03-01

    Full Text Available Ubiquitous learning allows students to learn at any time and any place. Adaptivity plays an important role in ubiquitous learning, aiming at providing students with adaptive and personalized learning material, activities, and information at the right place and the right time. However, for providing rich adaptivity, the student model needs to be able to gather a variety of information about the students. In this paper, an automatic, global, and dynamic student modeling approach is introduced, which aims at identifying and frequently updating information about students’ progress, learning styles, interests and knowledge level, problem solving abilities, preferences for using the system, social connectivity, and current location. This information is gathered in an automatic way, using students’ behavior and actions in different learning situations provided by different components/services of the ubiquitous learning environment. By providing a comprehensive student model, students can be supported by rich adaptivity in every component/service of the learning environment. Furthermore, the information in the student model can help in giving teachers a better understanding about the students’ learning process.

  7. Focusing on media body ideal images triggers food intake among restrained eaters: a test of restraint theory and the elaboration likelihood model.

    Science.gov (United States)

    Boyce, Jessica A; Kuijer, Roeline G

    2014-04-01

    Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images; whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literacy interventions. PMID:24854816

  8. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the empirical likelihood inference for the jump-diffusion model. We construct the confidence intervals based on the empirical likelihood for the infinitesimal moments in the jump-diffusion models. They are better than the confidence intervals which are based on the asymptotic normality of point estimates.

  9. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines to avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM) is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through the morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra’s algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
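
    The least-cost path search at the heart of the seamline selection can be sketched with Dijkstra's algorithm on a grid, as below; a random elevation raster stands in for the OESM-derived cost surface, and the cost definition is a simplifying assumption.

```python
# Minimal sketch of least-cost seamline selection on a grid: Dijkstra's
# algorithm over an elevation-derived cost surface, so the path prefers
# low ground, in the spirit of the OESM-guided selection described above.
import heapq
import numpy as np

def dijkstra_seamline(cost, start, goal):
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(pq, (dist[nr, nc], (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

elevation = np.random.default_rng(0).random((40, 60)) * 10   # stand-in cost surface
seam = dijkstra_seamline(elevation, start=(0, 0), goal=(39, 59))
print(len(seam), "cells on the seamline")
```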

  10. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
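
    The core composite-likelihood idea, multiplying marginal likelihoods across regions (i.e., summing their logs), is sketched below with a toy Poisson model for segregating sites per region in place of the paper's coalescent-based likelihood; all quantities are simulated.

```python
# Minimal sketch of the composite-likelihood idea: treat genomic regions as
# independent and sum their marginal log-likelihoods. A toy Poisson model for
# the number of segregating sites per region stands in for the coalescent
# likelihood used in the paper.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
region_lengths = rng.integers(5_000, 20_000, size=300)          # bp per region
true_theta = 1e-3                                                # diversity per bp
seg_sites = rng.poisson(true_theta * region_lengths)             # simulated data

def composite_loglik(theta):
    # Product of marginal likelihoods across regions == sum of their logs.
    return np.sum(poisson.logpmf(seg_sites, mu=theta * region_lengths))

fit = minimize_scalar(lambda t: -composite_loglik(t), bounds=(1e-6, 1e-1),
                      method="bounded")
print("composite-likelihood estimate of theta:", fit.x)
```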

  11. Accurate determination of phase arrival times using autoregressive likelihood estimation

    Directory of Open Access Journals (Sweden)

    G. Kvaerna

    1994-06-01

    Full Text Available We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P phases and 0.15-0.20 s for S phases. These accuracies are as good as for analyst picks, and are considerably better than the accuracies of the current onset procedure used for processing of regional array data at NORSAR. In another application, we have developed a generic procedure to reestimate the onsets of all types of first arriving P phases. By again applying the autoregressive likelihood technique, we have obtained automatic onset times of a quality such that 70% of the automatic picks are within 0.1 s of the best manual pick. For the onset time procedure currently used at NORSAR, the corresponding number is 28%. Clearly, automatic reestimation of first arriving P onsets using the autoregressive likelihood technique has the potential of significantly reducing the retiming efforts of the analyst.
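
    In the same spirit as the autoregressive likelihood picker, the sketch below implements a simplified variance-based AIC onset picker on a synthetic trace; it is not the full autoregressive estimator used in the paper, and the signal parameters are invented.

```python
# Minimal sketch of an automatic onset picker: the trace is split at every
# candidate index and an AIC-like criterion based on the variances of the two
# segments is minimized. Simplified variance form, not the full autoregressive
# likelihood estimator described above.
import numpy as np

def aic_onset(x, margin=10):
    n = len(x)
    ks = np.arange(margin, n - margin)
    aic = np.array([k * np.log(np.var(x[:k]) + 1e-12) +
                    (n - k - 1) * np.log(np.var(x[k:]) + 1e-12) for k in ks])
    return ks[np.argmin(aic)]

rng = np.random.default_rng(0)
fs = 100.0                                   # samples per second (made up)
noise = rng.normal(0, 1, 500)
signal = np.concatenate([np.zeros(300),
                         5 * np.sin(2 * np.pi * 8 * np.arange(200) / fs)])
trace = noise + signal                       # true onset at sample 300
print("picked onset at t =", aic_onset(trace) / fs, "s")
```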

  12. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...... movements of fibers from their initial regular hexagonal arrangement. Damageable layers are introduced into the fibers to take into account the random distribution of the fiber strengths. A series of computational experiments on the glass fibers reinforced polymer epoxy matrix composite is performed to...

  13. Maximum likelihood topographic map formation.

    Science.gov (United States)

    Van Hulle, Marc M

    2005-03-01

    We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence. PMID:15802004

  14. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images more reliable and robust. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent imperfect orthogonality and noncoherent luminance between views. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.

  15. A Model for Semi-Automatic Composition of Educational Content from Open Repositories of Learning Objects

    Directory of Open Access Journals (Sweden)

    Paula Andrea Rodríguez Marín

    2014-04-01

    Full Text Available Learning object (LO) repositories are important in building educational content and should allow search, retrieval and composition processes to be successfully developed to reach educational goals. However, such processes are time-consuming and do not always provide the desired results. Thus, the aim of this paper is to propose a model for the semiautomatic composition of LOs, which are automatically recovered from open repositories. For the development of the model, various text similarity measures are discussed, while for calibration and validation some comparison experiments were performed using the results obtained by teachers. Experimental results show that when using a value of k (the number of LOs selected) of at least 3, the agreement between the selections of the model and those made by experts exceeds 75%. To conclude, it can be established that the proposed model allows teachers to save time and effort in LO selection by performing a pre-filtering process.
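
    One plausible form of the similarity-based selection step is sketched below: candidate learning objects are ranked against a query by tf-idf cosine similarity and the top k are kept. The measure, the corpus and k are illustrative choices, not the calibrated configuration from the paper.

```python
# Minimal sketch of the retrieval/selection step: rank candidate learning
# objects against the course topic by tf-idf cosine similarity and keep the
# top k. The similarity measure and the toy corpus are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "introduction to probability distributions and random variables"
candidates = [
    "lecture notes on discrete random variables and probability distributions",
    "tutorial on 3d modelling with blender",
    "exercises about the normal distribution and expected value",
    "history of the roman empire",
]

vec = TfidfVectorizer()
tfidf = vec.fit_transform([query] + candidates)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

k = 3
top_k = sorted(range(len(candidates)), key=lambda i: -scores[i])[:k]
for i in top_k:
    print(f"{scores[i]:.2f}  {candidates[i]}")
```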

  16. Invariants and Likelihood Ratio Statistics

    OpenAIRE

    McCullagh, P.; Cox, D. R.

    1986-01-01

    Because the likelihood ratio statistic is invariant under reparameterization, it is possible to make a large-sample expansion of the statistic itself and of its expectation in terms of invariants. In particular, the Bartlett adjustment factor can be expressed in terms of invariant combinations of cumulants of the first two log-likelihood derivatives. Such expansions are given, first for a scalar parameter and then for vector parameters. Geometrical interpretation is given where possible and s...

  17. Analytic Methods for Cosmological Likelihoods

    OpenAIRE

    Taylor, A. N.; Kitching, T. D.

    2010-01-01

    We present general, analytic methods for Cosmological likelihood analysis and solve the "many-parameters" problem in Cosmology. Maxima are found by Newton's Method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic...

  18. Automatic processing and modeling of GPR data for pavement thickness and properties

    Science.gov (United States)

    Olhoeft, Gary R.; Smith, Stanley S., III

    2000-04-01

    A GSSI SIR-8 with 1 GHz air-launched horn antennas has been modified to acquire data from a moving vehicle. Algorithms have been developed to acquire the data, and to automatically calibrate, position, process, and full waveform model it without operator intervention. Vehicle suspension system bounce is automatically compensated (for varying antenna height). Multiple scans are modeled by full waveform inversion that is remarkably robust and relatively insensitive to noise. Statistical parameters and histograms are generated for the thickness and dielectric permittivity of concrete or asphalt pavements. The statistical uncertainty with which the thickness is determined is given with each thickness measurement, along with the dielectric permittivity of the pavement material and of the subgrade material at each location. Permittivities are then converted into equivalent density and water content. Typical statistical uncertainties in thickness are better than 0.4 cm in 20 cm thick pavement. On a Pentium laptop computer, the data may be processed and modeled to have cross-sectional images and computed pavement thickness displayed in real time at highway speeds.

  19. Semi-automatic registration of 3D orthodontics models from photographs

    Science.gov (United States)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.

  20. Modeling and automatic feedback control of tremor: adaptive estimation of deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Muhammad Rehan

    Full Text Available This paper discusses modeling and automatic feedback control of (postural and rest) tremor for adaptive-control-methodology-based estimation of deep brain stimulation (DBS) parameters. The simplest linear oscillator-based tremor model, between stimulation amplitude and tremor, is investigated by utilizing input-output knowledge. Further, a nonlinear generalization of the oscillator-based tremor model, useful for derivation of a control strategy involving incorporation of parametric-bound knowledge, is provided. Using the Lyapunov method, a robust adaptive output feedback control law, based on measurement of the tremor signal from the fingers of a patient, is formulated to estimate the stimulation amplitude required to control the tremor. By means of the proposed control strategy, an algorithm is developed for estimation of DBS parameters such as amplitude, frequency and pulse width, which provides a framework for development of an automatic clinical device for control of motor symptoms. The DBS parameter estimation results for the proposed control scheme are verified through numerical simulations.

  1. Automatic Lameness Detection in a Milking Robot : Instrumentation, measurement software, algorithms for data analysis and a neural network model

    OpenAIRE

    Pastell, Matti

    2007-01-01

    The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feedi...

  2. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    Science.gov (United States)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling has increased rapidly due to improved geo-web services and the popularity of smart devices. Nowadays, 3D urban models provided by, for example, Google Earth use aerial photos for 3D urban modelling, but there are some limitations: immediate updates for changed buildings are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing the ratios of corresponding edge lengths, which showed an average error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  3. Likelihood-based inference with singular information matrix

    OpenAIRE

    Rotnitzky, Andrea; David R Cox; Bottai, Matteo; Robins, James

    2000-01-01

    We consider likelihood-based asymptotic inference for a p-dimensional parameter θ of an identifiable parametric model with singular information matrix of rank p-1 at θ=θ* and likelihood differentiable up to a specific order. We derive the asymptotic distribution of the likelihood ratio test statistics for the simple null hypothesis that θ=θ* and of the maximum likelihood estimator (MLE) of θ when θ=θ*. We show that there exists a reparametrization such that the MLE of the last p-1 components ...

  4. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    CERN Document Server

    Fujimoto, J

    2003-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model (SM) of the electroweak and strong interactions. For a given MSSM process, the Feynman graphs and amplitudes at tree level are automatically created. The Monte Carlo phase space integration by means of BASES gives the total and differential cross sections. When combined with SPRING, an event generator, the program package provides simulation of SUSY particle production.

  5. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Richard Washington

    2008-11-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantage over other methods (such as neural networks) is their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
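
    The decoding step behind such feature detection can be sketched with a first-order Viterbi pass over discretized sensor symbols, as below; the paper uses second-order HMMs, and the states, symbols and probabilities here are invented for illustration.

```python
# Minimal sketch of HMM decoding of a discretized sensor stream. The paper
# uses second-order HMMs; for brevity this shows first-order Viterbi decoding
# with made-up "corridor" vs "open door" states and range-sensor symbols.
import numpy as np

states = ["corridor", "open_door"]
symbols = {"near": 0, "far": 1}                 # discretized range readings
start = np.log([0.8, 0.2])
trans = np.log([[0.9, 0.1],                     # corridor -> corridor/door
                [0.4, 0.6]])                    # door     -> corridor/door
emit = np.log([[0.8, 0.2],                      # corridor emits near/far
               [0.2, 0.8]])                     # open door emits near/far

def viterbi(obs):
    v = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + trans + emit[None, :, o]
        back.append(np.argmax(scores, axis=0))
        v = np.max(scores, axis=0)
    path = [int(np.argmax(v))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [states[s] for s in reversed(path)]

readings = [symbols[s] for s in ["near", "near", "far", "far", "near"]]
print(viterbi(readings))
```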

  6. A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing

    Institute of Scientific and Technical Information of China (English)

    Yong Wu; Arun Kumar

    2012-01-01

    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors and the workload is automatically and evenly distributed among all processors by alternative message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of computation so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.

  7. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    Full Text Available This paper presents a novel method to determine the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of transmission line breakdown can be determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously changing the lightning current amplitude, which is realized by a loop computing algorithm coded in MATLAB. The method proposed in this paper can generate the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
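
    The loop that homes in on the initial breakdown current can be sketched as a simple bisection on the lightning current amplitude, as below; the line_breaks_down() predicate is a hypothetical stand-in for generating the ATP model, running it and parsing the LIS output.

```python
# Minimal sketch of the search loop for the initial (critical) lightning
# breakdown current. The hypothetical predicate line_breaks_down() stands in
# for generating the ATP model, running it and parsing the LIS file; the
# search itself is a plain bisection on the current amplitude.
def line_breaks_down(amplitude_ka: float) -> bool:
    # Placeholder: in the real workflow this would run the ATP simulation.
    return amplitude_ka >= 87.3          # assumed (unknown) critical current

def critical_current(lo_ka=1.0, hi_ka=300.0, tol_ka=0.1):
    assert not line_breaks_down(lo_ka) and line_breaks_down(hi_ka)
    while hi_ka - lo_ka > tol_ka:
        mid = 0.5 * (lo_ka + hi_ka)
        if line_breaks_down(mid):
            hi_ka = mid                  # breakdown: try a smaller amplitude
        else:
            lo_ka = mid                  # no breakdown: increase the amplitude
    return hi_ka

print(f"initial breakdown current is roughly {critical_current():.1f} kA")
```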

  8. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Yu Guo

    2014-01-01

    Full Text Available The combination of positron emission tomography (PET and CT images provides complementary functional and anatomical information of human tissues and it has been used for better tumor volume definition of lung cancer. This paper proposed a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on fuzzy Markov random field (MRF model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual method by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar and Dice’s similarity coefficient (DSC was 0.85 ± 0.013. It has been shown that effective and automatic segmentations can be achieved with this method for lung tumors which locate near other organs with similar intensities in PET and CT images, such as when the tumors extend into chest wall or mediastinum.

  9. Automatic representation of urban terrain models for simulations on the example of VBS2

    Science.gov (United States)

    Bulatov, Dimitri; Häufel, Gisela; Solbrig, Peter; Wernerus, Peter

    2014-10-01

    Virtual simulations have been on the rise together with the fast progress of rendering engines and graphics hardware. Especially in military applications, offensive actions in modern peace-keeping missions have to be quick, firm and precise, especially under the conditions of asymmetric warfare, non-cooperative urban terrain and rapidly developing situations. Going through the mission in simulation can prepare the minds of soldiers and leaders, increase self-confidence and tactical awareness, and ultimately save lives. This work illustrates the potential and limitations of integrating semantic urban terrain models into a simulation. Our system of choice is Virtual Battle Space 2, a simulation system created by Bohemia Interactive. The topographic object types that we are able to export into this simulation engine are either results of the sensor data evaluation (buildings, trees, grass, and ground), which is done fully automatically, or entities obtained from publicly available sources (streets and water areas), which can be converted into the system-proper format with a few mouse clicks. The focus of this work lies in integrating information about building façades into the simulation. We are inspired by state-of-the-art methods that allow for automatic extraction of doors and windows in laser point clouds captured from building walls and thus increase the level of detail of building models. As a consequence, it is important to simulate these animatable entities. In this way, some of the buildings in the simulation can be made accessible.

  10. Model-based vision system for automatic recognition of structures in dental radiographs

    Science.gov (United States)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks which are the CEJ and the level of alveolar crest at its junction with the periodontal ligament space. This work is a part of an ongoing project to automatically measure the distance between CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consists of a neural-network like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.

  11. Growing local likelihood network: Emergence of communities

    Science.gov (United States)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.

  12. Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism.

    Science.gov (United States)

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-05-28

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. We describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model and automatically generates a set of simpler models compatible with observational data. As a proof of principle, we analyzed the dynamic control of the transcription factor Msn2 in Saccharomyces cerevisiae, specifically the short-term mechanisms mediating the cells' recovery after release from starvation stress. Our method determined that 12 of 192 possible models were compatible with available Msn2 localization data. Iterations between model predictions and rationally designed phosphoproteomics and imaging experiments identified a single-circuit topology with a relative probability of 99% among the 192 models. Model analysis revealed that the coupling of dynamic phenomena in Msn2 phosphorylation and transport could lead to efficient stress response signaling by establishing a rate-of-change sensor. Similar principles could apply to mammalian stress response pathways. Systematic construction of dynamic models may yield detailed insight into nonobvious molecular mechanisms. PMID:23716718

  13. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Venkat; Das, Trishna

    2016-05-01

    Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies attractive, especially in the ancillary market for providing frequency regulation services. This paper presents a slow dynamics model for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits from storage technologies in providing regulation service. The paper also presents the slow dynamics model of the power system integrated with storage technologies in a complete state space form. The storage technologies have been integrated into the single-area IEEE 24-bus system, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.

  14. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    Science.gov (United States)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg both with a command line and Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed a superb reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.
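
    The building block behind the Fourier contour representation, encoding a closed 2D contour as complex Fourier descriptors and reconstructing it from a truncated set of coefficients, is sketched below on a toy outline; the statistical shape space and constrained deformation of CCSeg are not reproduced.

```python
# Minimal sketch of complex Fourier descriptors for a closed 2D contour:
# encode the contour (x + iy) with the FFT and reconstruct it from a
# truncated set of coefficients. Toy outline, not actual CC data.
import numpy as np

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
contour = (1.0 + 0.25 * np.cos(3 * t)) * np.exp(1j * t)   # toy closed outline

coeffs = np.fft.fft(contour) / len(contour)               # complex Fourier descriptors

def reconstruct(coeffs, n_keep):
    kept = np.zeros_like(coeffs)
    kept[:n_keep] = coeffs[:n_keep]                        # DC and low positive frequencies
    kept[-n_keep:] = coeffs[-n_keep:]                      # matching negative frequencies
    return np.fft.ifft(kept) * len(coeffs)

approx = reconstruct(coeffs, n_keep=8)
err = np.max(np.abs(approx - contour))
print(f"max reconstruction error with 16 coefficients: {err:.6f}")
```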

  15. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

    Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval in the web. Tags are simple pieces of data that usually consist of one or several words, and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and some automatic solutions should be used instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate
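
    The tf-idf scoring at the core of the tag selection can be sketched as below, picking the highest-scoring word n-grams per page; the Persian-specific language rules and the MapReduce distribution over a cluster are omitted, and the toy pages are in English for readability.

```python
# Minimal sketch of tf-idf based tag selection over word n-grams, the core
# scoring idea described above (language rules and MapReduce are omitted).
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "machine translation of persian news articles with neural networks",
    "persian cooking recipes rice saffron stew",
    "neural networks for image classification benchmarks",
]

vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vec.fit_transform(pages)
terms = vec.get_feature_names_out()

for row, page in zip(tfidf.toarray(), pages):
    top = row.argsort()[::-1][:3]                 # three highest-scoring n-grams
    print([terms[i] for i in top if row[i] > 0], "<-", page[:40], "...")
```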

  16. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  17. Development of Monte Carlo automatic modeling functions of MCAM for TRIPOLI-ITER application

    Science.gov (United States)

    Lu, L.; Lee, Y. K.; Zhang, J. J.; Li, Y.; Zeng, Q.; Wu, Y. C.

    2009-07-01

    TRIPOLI is a Monte Carlo particle transport code simulating the three-dimensional transport of neutrons and photons, and it can be used for many applications to nuclear devices with complex geometries; however, modeling a complex geometry is a time-consuming and error-prone task. The recently developed functions of the Monte Carlo Automatic Modeling (MCAM) system, an interface code that facilitates Monte Carlo modeling by employing CAD technology, implement the bidirectional conversion between the CAD model and the TRIPOLI computation model. In this study, the different geometric representations of the CAD system and the TRIPOLI code, and the methodology for bidirectional conversion between them, are introduced. A TRIPOLI input file of the International Thermonuclear Experimental Reactor (ITER) benchmark model, which was distributed to validate Monte Carlo modeling tools, was created and applied to simulate D-T fusion neutron source sampling and to calculate the first wall loading. The results were then compared with those of Monte Carlo N-Particle (MCNP), and the good agreement demonstrates the feasibility and validity of the approach.

  18. Automatic tuning of liver tissue model using simulated annealing and genetic algorithm heuristic approaches

    Science.gov (United States)

    Sulaiman, Salina; Bade, Abdullah; Lee, Rechard; Tanalol, Siti Hasnah

    2014-07-01

    Mass Spring Model (MSM) is a highly efficient model in terms of calculation and ease of implementation. Mass, spring stiffness coefficient and damping constant are the three major components of an MSM. This paper focuses on identifying the spring stiffness coefficient and damping constant using an automated tuning method based on optimization, in order to generate a human liver model capable of responding quickly. To achieve this objective, two heuristic approaches, Simulated Annealing (SA) and Genetic Algorithm (GA), are applied to the human liver model data set. The mechanical properties taken into consideration are anisotropy and viscoelasticity. The optimization results from SA and GA are then implemented in the MSM to model two human livers, one with the SA-derived and one with the GA-derived construction parameters, while the FEM construction parameters serve as the benchmark. The step responses of both models, obtained after the MSMs were solved with the fourth-order Runge-Kutta method (RK4), are compared to assess the elasticity response of the two models. The remodelling time of the manual calculation method was compared against that of the heuristic optimization methods SA and GA, showing that a model with automatic construction is more realistic in terms of real-time interaction response time. Liver models generated using the SA and GA optimization techniques are compared with the liver model from manual calculation, and the reconstruction time required for 1000 repetitions of SA and GA is shorter than with the manual method. A comparison between the construction times of the SA and GA models indicates that the SA model is faster than the GA model by 0.110635 seconds per 1000 repetitions. Real-time interaction of the mechanical properties depends on the rate and speed of the remodelling process. Thus, SA and GA have proven suitable for enhancing the realism of simulated real-time interaction in liver remodelling.
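
    A hedged sketch of the simulated annealing idea for tuning stiffness and damping: a single mass-spring-damper stands in for the full MSM, a reference step response plays the role of the FEM benchmark, and the integrator is a plain RK4 as mentioned in the record; all numerical values are illustrative assumptions.

        import math, random

        def step_response(k, c, m=1.0, dt=0.01, steps=300):
            # RK4 integration of m*x'' + c*x' + k*x = 1 (unit step force), returns x(t)
            def f(x, v):
                return v, (1.0 - c * v - k * x) / m
            x = v = 0.0
            out = []
            for _ in range(steps):
                k1x, k1v = f(x, v)
                k2x, k2v = f(x + dt / 2 * k1x, v + dt / 2 * k1v)
                k3x, k3v = f(x + dt / 2 * k2x, v + dt / 2 * k2v)
                k4x, k4v = f(x + dt * k3x, v + dt * k3v)
                x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
                v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
                out.append(x)
            return out

        reference = step_response(k=40.0, c=3.0)       # plays the role of the FEM benchmark

        def cost(params):
            return sum((a - b) ** 2 for a, b in zip(step_response(*params), reference))

        temp = 1.0
        current = (10.0, 1.0)                          # initial guess for (stiffness, damping)
        current_cost = cost(current)
        best, best_cost = current, current_cost
        for _ in range(2000):
            cand = (abs(current[0] + random.gauss(0, 2.0)), abs(current[1] + random.gauss(0, 0.3)))
            cand_cost = cost(cand)
            if cand_cost < current_cost or random.random() < math.exp((current_cost - cand_cost) / temp):
                current, current_cost = cand, cand_cost
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
            temp *= 0.999                              # geometric cooling schedule
        print("recovered (stiffness, damping):", best)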

  19. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    CERN Document Server

    Melnikova, N B; Sloot, P M A

    2012-01-01

    The paper describes the concept and implementation details of integrating a finite element module for dike stability analysis, Virtual Dike, into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for the simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module and compared with the simulated pore pressure dynamics. The module has been implemented for a real-world test case - an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic calibration of diffusivities for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in a one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of the diffusivities.
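
    A hedged sketch of the calibration idea (not the Virtual Dike code): a single diffusivity D is estimated from pore-pressure readings at a distance x inside the levee using the classic one-dimensional analytical solution for tidal propagation in a saturated aquifer, h(x,t) = A exp(-x sqrt(w/2D)) cos(wt - x sqrt(w/2D)); the tidal parameters and sensor offset are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize_scalar

        A, omega, x_sensor = 1.0, 2 * np.pi / 44700.0, 20.0   # tidal amplitude [m], M2 frequency [1/s], sensor offset [m]
        t = np.linspace(0.0, 2 * 86400.0, 400)                # two days of pore-pressure readings

        def head(D, x, t):
            k = np.sqrt(omega / (2.0 * D))
            return A * np.exp(-k * x) * np.cos(omega * t - k * x)

        true_D = 0.05                                         # "true" diffusivity [m^2/s] used to generate synthetic data
        observed = head(true_D, x_sensor, t) + np.random.normal(0.0, 0.005, t.size)

        misfit = lambda D: np.mean((head(D, x_sensor, t) - observed) ** 2)
        result = minimize_scalar(misfit, bounds=(1e-4, 1.0), method="bounded")
        print("calibrated diffusivity:", result.x)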

  20. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Science.gov (United States)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage with complex architecture. After the 3D model of the object has been generated, however, it is difficult to perform exact texture mapping for the real object. We therefore aim to create seamless texture maps for a virtual heritage object of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions kept as uniform as possible. After preprocessing, the images are registered on the 3D mesh in a semi-automatic way. The mesh is then divided into mutually overlapping mesh patches according to the valid texture area of each image, and an optimal correspondence between mesh patches and sections of the acquired images is built. A smoothing approach based on texture blending is then proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  1. Automatic generation of virtual worlds from architectural and mechanical CAD models

    International Nuclear Information System (INIS)

    Accelerator projects like the XFEL or the planned linear collider TESLA involve extensive architectural and mechanical design work, resulting in a variety of CAD models. The CAD models will show different parts of the project, such as the different accelerator components or parts of the building complexes, and they will be created and stored by different groups in different formats. A complete CAD model of the accelerator and its buildings is thus difficult to obtain and would also be extremely large and difficult to handle. This thesis describes the design and prototype development of a tool which automatically creates virtual worlds from different CAD models. The tool will enable the user to select a required area for visualization on a map, and then create a 3D model of the selected area which can be displayed in a web browser. The thesis first discusses the system requirements and provides some background on data visualization. Then, it introduces the system architecture, the algorithms and the technologies used, and finally demonstrates the capabilities of the system using two case studies. (orig.)

  2. A physics-based defects model and inspection algorithm for automatic visual inspection

    Science.gov (United States)

    Xie, Yu; Ye, Yutang; Zhang, Jing; Liu, Li; Liu, Lin

    2014-01-01

    The representation of physical characteristics is the most essential feature of the mathematical models used for the detection of defects in automatic inspection systems. However, the characteristics of defects and the formation of the defect image are not sufficiently considered in traditional algorithms. This paper presents a mathematical model for defect inspection, denoted the localized defects image model (LDIM), which differs in that it models the features of manual inspection, using a local defect merit function to quantify the degree to which a pixel is considered defective. This function comprises two components: color deviation and color fluctuation. Parameters related to the statistics of the background region of the image are also taken into consideration. Test results demonstrate that the model matches the definition of defects given by the international industrial standards IPC-A-610D and IPC-A-600G. Furthermore, the proposed approach enhances small defects to improve detection rates. Evaluation on a defect image database returned a 100% defect inspection rate with 0% false detections, proving that this method could be applied in manufacturing to quantify inspection standards and minimize false alarms resulting from human error.
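
    A hedged sketch in the spirit of the local defect merit function (the exact statistics and weighting used by the LDIM are not given in the record and are assumptions here): the color-deviation term measures how far a pixel lies from the background mean color in units of the background spread, and the color-fluctuation term measures local variability.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def defect_merit(image, background_mask, alpha=0.6):
            # image: H x W x 3 float array, background_mask: H x W boolean array
            bg = image[background_mask]
            mu, sigma = bg.mean(axis=0), bg.std(axis=0) + 1e-6
            deviation = np.linalg.norm((image - mu) / sigma, axis=2)        # color deviation
            local_mean = np.stack([uniform_filter(image[..., c], size=5) for c in range(3)], axis=2)
            fluctuation = np.linalg.norm(image - local_mean, axis=2)        # color fluctuation
            return alpha * deviation + (1.0 - alpha) * fluctuation

        img = np.random.rand(64, 64, 3)
        mask = np.ones((64, 64), dtype=bool)
        print(defect_merit(img, mask).shape)   # a pixel is flagged when its merit exceeds a threshold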

  3. Parameter estimation in distributed hydrological catchment modelling using automatic calibration with multiple objectives

    Science.gov (United States)

    Madsen, Henrik

    A consistent framework for parameter estimation in distributed hydrological catchment modelling using automatic calibration is formulated. The framework covers the different steps in the estimation process, from model parameterisation and selection of calibration parameters, through formulation of calibration criteria, to the choice of optimisation algorithm. The calibration problem is formulated in a general multi-objective context in which different objective functions that measure individual process descriptions can be optimised simultaneously. Within this framework it is possible to tailor the model calibration to the specific objectives of the model application being considered. A test example is presented that illustrates the use of the calibration framework for parameter estimation in the MIKE SHE integrated and distributed hydrological modelling system. A significant trade-off between the performance of the groundwater level simulations and the catchment runoff is observed in this case, defining a Pareto front with a very sharp structure. The Pareto optimum solution corresponding to a proposed balanced aggregated objective function is seen to provide a proper balance between the two objectives. Compared to a manual expert calibration, the balanced Pareto optimum solution provides generally better simulation of the runoff, whereas virtually identical performance is obtained for the groundwater level simulations.
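
    A minimal sketch of one common way to aggregate two calibration objectives into a single "balanced" measure (the exact form used with MIKE SHE is not reproduced here and the numbers are assumptions): each objective is shifted by a transformation constant so that both terms have comparable magnitude near the optimum.

        def balanced_objective(obj_groundwater, obj_runoff, a_gw=0.0, a_runoff=0.0):
            # obj_* are e.g. RMSE of groundwater levels [m] and of runoff [m3/s];
            # a_* are transformation constants chosen so both terms are of similar size
            return (obj_groundwater + a_gw) ** 2 + (obj_runoff + a_runoff) ** 2

        print(balanced_objective(0.4, 12.0, a_gw=11.6, a_runoff=0.0))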

  4. Automatic 3D segmentation of spinal cord MRI using propagated deformable models

    Science.gov (United States)

    De Leener, B.; Cohen-Adad, J.; Kadoury, S.

    2014-03-01

    Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by applying the elliptical Hough transform to multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied to the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects. Each subject had two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 +/- 0.05 mm (mean absolute distance error) in the cervical region and 0.27 +/- 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.

  5. Shrinkage Effect in Ancestral Maximum Likelihood

    CERN Document Server

    Mossel, Elchanan; Steel, Mike

    2008-01-01

    Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...

  6. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and...

  7. AUTOMATIC TOPOLOGY DERIVATION FROM IFC BUILDING MODEL FOR IN-DOOR INTELLIGENT NAVIGATION

    Directory of Open Access Journals (Sweden)

    S. J. Tang

    2015-05-01

    Full Text Available To achieve accurate navigation within building environments, it is critical to explore a feasible way of building the connectivity relationships among 3D geographical features, the so-called in-building topology network. Traditional topology construction approaches for indoor space are usually based on 2D maps or purely geometric models and suffer from insufficient information. In particular, intelligent navigation for different applications depends mainly on the precise geometry and semantics of the navigation network. The problems caused by existing topology construction approaches can be alleviated by employing the IFC building model, which contains detailed semantic and geometric information. In this paper, we present a method which combines a straight medial axis transformation algorithm (S-MAT) with the IFC building model to reconstruct an indoor geometric topology network. The derived topology is aimed at facilitating decision making for different in-building navigation tasks. In this work, we describe a multi-step derivation process, including semantic cleaning, walkable feature extraction, multi-storey 2D mapping and S-MAT implementation, to automatically generate topology information from existing indoor building model data given in IFC.

  8. AN APPROACH THAT AUTOMATICALLY DETERMINES PART CONTACT RELATIONS IN COMPUTER AIDED ASSEMBLY MODELING

    Directory of Open Access Journals (Sweden)

    Cem SİNANOĞLU

    2002-03-01

    Full Text Available This study describes an approach to modeling an assembly system, which is one of the main problems encountered during assembly planning. In this approach the wire-frame model of the assembly system is used, and each part is drawn in a different color. The assembly drawing and its various views are scanned along three different axes (-x, -y, -z). Scanning is performed automatically by the software developed. The color codes obtained by scanning, which represent the different assembly parts, are assessed by the software along the six directions of the Cartesian coordinate system. Contact matrices are then formed to represent the relations among the assembly parts; these matrices are sufficient to represent the assembly model. The approach was applied to various assembly systems: pincer, hinge and clutch systems. One of the basic advantages of this approach is that the wire-frame model of the assembly system can be created with various CAD programs, and it can be applied to assembly systems containing many parts.
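
    A hedged sketch of the contact-relation idea (the voxel layout is an assumption, not the paper's data structure): each cell of the scanned assembly carries the color code of the part it belongs to, and two parts are recorded as being in contact along an axis whenever their codes appear in adjacent cells when scanning along that axis.

        import numpy as np

        def contact_matrix(voxels, n_parts, axis):
            # voxels: integer array of part color codes, 0 = empty space
            contact = np.zeros((n_parts + 1, n_parts + 1), dtype=int)
            a = np.moveaxis(voxels, axis, 0)
            pairs = np.stack([a[:-1].ravel(), a[1:].ravel()], axis=1)   # neighbouring cells along the axis
            for p, q in pairs:
                if p and q and p != q:
                    contact[p, q] = contact[q, p] = 1
            return contact[1:, 1:]

        assembly = np.zeros((4, 4, 4), dtype=int)
        assembly[:2] = 1        # part 1
        assembly[2:] = 2        # part 2, touching part 1 along axis 0
        print(contact_matrix(assembly, 2, axis=0))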

  9. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system; consequently, multiple objective functions are needed for the calibration of a hydrologic model. Multiobjective algorithms based on the theory of nondominance are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding distance assigner is presented in this paper. Furthermore, because the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set, a novel diversity performance metric that is independent of the Pareto set size is put forward in this research. The efficacy of the new algorithm MODE-CMCS is compared with that of the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for the automatic calibration of the hydrologic model.
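
    As a baseline for the diversity mechanism mentioned above, the sketch below implements the standard NSGA-II crowding distance for one non-dominated front; the paper's more precise assigner and its size-independent spread metric are refinements that are not reproduced here.

        import numpy as np

        def crowding_distance(objectives):
            # objectives: (n_solutions, n_objectives) array for one non-dominated front
            n, m = objectives.shape
            distance = np.zeros(n)
            for j in range(m):
                order = np.argsort(objectives[:, j])
                span = objectives[order[-1], j] - objectives[order[0], j]
                span = span if span > 0 else 1.0
                distance[order[0]] = distance[order[-1]] = np.inf   # boundary solutions always kept
                distance[order[1:-1]] += (objectives[order[2:], j]
                                          - objectives[order[:-2], j]) / span
            return distance

        front = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0]])
        print(crowding_distance(front))    # larger distance = less crowded = preferred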

  10. EMPIRICAL LIKELIHOOD RATIO CONFIDENCE INTERVALS FOR VARIOUS DIFFERENCES OF TWO POPULATIONS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Recently the empirical likelihood has been shown to be very useful in nonparametric models. Qin combined the empirical likelihood approach with the parametric likelihood method to construct confidence intervals for the difference of two population means in a semiparametric model. In this paper, we use the empirical likelihood approach to construct confidence intervals for several differences between two populations in a nonparametric model. A version of Wilks' theorem is developed.

  11. Automatic Segmentation of Wrist Bones in CT Using a Statistical Wrist Shape + Pose Model.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Rasoulian, Abtin; Seitel, Alexander; Darras, Kathryn; Wilson, David; John, Paul St; Pichora, David; Mousavi, Parvin; Rohling, Robert; Abolmaesumi, Purang

    2016-08-01

    Segmentation of the wrist bones in CT images has been frequently used in different clinical applications including arthritis evaluation, bone age assessment and image-guided interventions. The major challenges include non-uniformity and spongy textures of the bone tissue as well as narrow inter-bone spaces. In this work, we propose an automatic wrist bone segmentation technique for CT images based on a statistical model that captures the shape and pose variations of the wrist joint across 60 example wrists at nine different wrist positions. To establish the correspondences across the training shapes at neutral positions, the wrist bone surfaces are jointly aligned using a group-wise registration framework based on a Gaussian Mixture Model. Principal component analysis is then used to determine the major modes of shape variations. The variations in poses not only across the population but also across different wrist positions are incorporated in two pose models. An intra-subject pose model is developed by utilizing the similarity transforms at all wrist positions across the population. Further, an inter-subject pose model is used to model the pose variations across different wrist positions. For segmentation of the wrist bones in CT images, the developed model is registered to the edge point cloud extracted from the CT volume through an expectation maximization based probabilistic approach. Residual registration errors are corrected by application of a non-rigid registration technique. We validate the proposed segmentation method by registering the wrist model to a total of 66 unseen CT volumes of average voxel size of 0.38 mm. We report a mean surface distance error of 0.33 mm and a mean Jaccard index of 0.86. PMID:26890640

  12. Likelihood analysis of parity violation in the compound nucleus

    International Nuclear Information System (INIS)

    We discuss the determination of the root-mean-squared matrix element of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element and of the errors assigned to it is stressed. We treat both the situation where the spins of the p-wave resonances are known and the situation where they are not, using experimental data and Monte Carlo techniques. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function.

  13. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Science.gov (United States)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge in the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is carried out using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is perturbed through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions
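
    A hedged sketch of the twin-experiment setup (HyMAP, MOCOM-UA and BC-DFO are not reproduced; a toy routing relation and a simple bounded scalar optimizer stand in for them): virtual observations are generated with a "true" spatially uniform multiplier plus noise, and the multiplier is then recovered by minimising the misfit between simulated and observed heights.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def toy_model(manning_factor, inflow):
            # stand-in for the routing model: water height grows with roughness and inflow
            return (manning_factor * inflow) ** 0.6

        inflow = np.linspace(50.0, 500.0, 36)            # one year of ~10-day samples
        true_factor = 1.3
        virtual_swot = toy_model(true_factor, inflow) + np.random.normal(0.0, 0.1, inflow.size)

        cost = lambda f: np.sum((toy_model(f, inflow) - virtual_swot) ** 2)
        result = minimize_scalar(cost, bounds=(0.5, 3.0), method="bounded")
        print("calibrated multiplier:", result.x)        # close to 1.3 when the noise is small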

  14. Modelling the adoption of automatic milking systems in Noord-Holland

    Directory of Open Access Journals (Sweden)

    Matteo Floridi

    2013-05-01

    Full Text Available Innovation and new technology adoption represent two central elements for the business and industry development process in agriculture. One of the most relevant innovations in dairy farms is the robotisation of the milking process through the adoption of Automatic Milking Systems (AMS. The purpose of this paper is to assess the impact of selected Common Agricultural Policy measures on the adoption of AMS in dairy farms. The model developed is a dynamic farm-household model that is able to simulate the adoption of AMS taking into account the allocation of productive factors between on-farm and off-farm activities. The model simulates the decision to replace a traditional milking system with AMS using a Real Options approach that allows farmers to choose the optimal timing of investments. Results show that the adoption of AMS, and the timing of such a decision, is strongly affected by policy uncertainty and market conditions. The effect of this uncertainty is to postpone the decision to adopt the new technology until farmers have gathered enough information to reduce the negative effects of the technological lock-in. AMS adoption results in an increase in farm size and herd size due to the reduction in the labour required for milking operations.

  15. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    Science.gov (United States)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, in which uncertainty and variations are incorporated within the dynamic models to improve performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, more insightful research towards exploiting sparsely sampled piecewise dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computations remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.

  16. EXPERIMENTS WITH UAS IMAGERY FOR AUTOMATIC MODELING OF POWER LINE 3D GEOMETRY

    Directory of Open Access Journals (Sweden)

    G. Jóźków

    2015-08-01

    Full Text Available The ideal mapping technology for transmission line inspection is airborne LiDAR operated from helicopter platforms, which allows full 3D geometry extraction in a highly automated manner. Large-scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still costly. UAS technology has the potential to reduce these costs, especially when using inexpensive platforms with consumer-grade cameras. This study investigates the potential of high-resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of the experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points are created on the wires as well. This allowed the 3D geometry of the transmission lines to be modeled in a manner similar to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when the wires were represented by a partial (sparse) point cloud.
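
    A hedged sketch of how sag could be estimated once dense matching yields 3D points on a wire (this is not the paper's algorithm): height versus along-span distance is fitted with a parabola, the usual small-sag approximation of the catenary, and the sag is read off at mid-span relative to the chord between the attachment points.

        import numpy as np

        def fit_sag(span_distance, height):
            # z = a*s^2 + b*s + c ; sag = drop at mid-span relative to the chord
            a, b, c = np.polyfit(span_distance, height, 2)
            s0, s1 = span_distance.min(), span_distance.max()
            chord_mid = 0.5 * (np.polyval([a, b, c], s0) + np.polyval([a, b, c], s1))
            return chord_mid - np.polyval([a, b, c], 0.5 * (s0 + s1))

        s = np.linspace(0.0, 200.0, 80)                                           # along-span distances [m]
        z = 30.0 + 0.0008 * (s - 100.0) ** 2 + np.random.normal(0, 0.05, s.size)  # noisy wire points
        print("estimated sag [m]:", fit_sag(s, z))                                # about 8 m here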

  17. Automatic Sex Determination of Skulls Based on a Statistical Shape Model

    Directory of Open Access Journals (Sweden)

    Li Luo

    2013-01-01

    Full Text Available Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments have relied on subjective visual analysis by anthropologists or on metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed that projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metric and morphological methods. It is easy to use, requiring neither professional qualification nor tedious manual measurement. From a group of Chinese skulls comprising 127 males and 81 females, we chose 92 males and 58 females to establish the discriminant model and validated the model with the remaining skulls. The correct rate is 95.7% for females and 91.4% for males. A leave-one-out test also shows that the method has high accuracy.
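
    A hedged sketch of the classification stage only (synthetic data, not the original skull landmarks): flattened landmark coordinates are projected into a low-dimensional shape space with PCA, and the two sexes are then separated with Fisher linear discriminant analysis.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(150, 300))            # 150 skulls x (100 landmarks * 3 coords), synthetic
        y = rng.integers(0, 2, size=150)           # 0 = female, 1 = male (synthetic labels)
        X[y == 1] += 0.15                          # inject a small sex-related shape difference

        shape_space = PCA(n_components=20).fit_transform(X)             # low-dimensional shape space
        clf = LinearDiscriminantAnalysis().fit(shape_space[:100], y[:100])
        print("held-out accuracy:", clf.score(shape_space[100:], y[100:]))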

  18. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    Science.gov (United States)

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.

    1978-01-01

    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.

  19. Maximum-likelihood absorption tomography

    International Nuclear Information System (INIS)

    Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
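
    The record states only that an iterative algorithm is used and that the statistics of the illuminating beam are incorporated, so the sketch below is a generic illustration rather than the authors' method: detector counts are modelled as y_i ~ Poisson(b_i exp(-(A mu)_i)) for blank-scan intensities b and system matrix A, and the attenuation image mu is updated by projected gradient ascent on the Poisson log-likelihood.

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.random((200, 50))                   # system matrix: 200 rays x 50 pixels
        mu_true = np.abs(rng.normal(0.05, 0.02, 50))
        b = np.full(200, 1e4)                       # illuminating-beam intensity per ray
        y = rng.poisson(b * np.exp(-A @ mu_true))   # measured counts

        mu, step = np.zeros(50), 5e-8
        for _ in range(5000):
            expected = b * np.exp(-A @ mu)
            # gradient of the Poisson log-likelihood w.r.t. mu is A.T @ (expected - y)
            mu = np.maximum(mu + step * (A.T @ (expected - y)), 0.0)

        print("relative error:", np.linalg.norm(mu - mu_true) / np.linalg.norm(mu_true))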

  20. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    Science.gov (United States)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can therefore be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al., 2015, the advantages of using SVM include the ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As authorized users, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical signal pattern of these events. Moreover, comparing the performance of the support
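
    A hedged sketch of the classification step (features and labels are synthetic stand-ins, not IMS data): an RBF-kernel SVM separates "earthquake" from "quarry blast" feature vectors, which in practice would be quantities such as band-power ratios extracted from the waveforms.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        features = rng.normal(size=(400, 6))        # e.g. band-power ratios, spectral slope, ...
        labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)   # 1 = "quarry blast" (synthetic rule)

        X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.25, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))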

  1. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated ultrasound pulse, which includes a maximum likelihood attenuation estimator, is derived. The results of this correspondence are of great importance for deconvolution and attenuation imaging in medical ultrasound...

  2. Automatic simplification of solid models for engineering analysis independent of modeling sequences

    International Nuclear Information System (INIS)

    Although solid models can represent complex and detailed geometry of parts, it is often necessary to simplify solid models by removing the detailed geometry in some applications such as finite element analysis and similarity assessment of CAD models. There are no standards for judging the goodness of a simplification method, but one essential criterion would be that it should generate a consistent and acceptable simplification for the same solid model, regardless of how the solid model has been created. Since a design-feature-based approach is tightly dependent on modeling sequences and designer's modeling preferences, it sometimes produces inconsistent and unacceptable simplifications. In this paper, a new method is proposed to simplify solid models of machined parts. Independently of user-specified design features, this method directly recognizes and generates subtractive features from the final model of the part, and then simplifies the solid model by removing the detailed geometry by using these subtractive features

  3. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    This paper presents an automatic extraction system (called TOPS-3D: Top-Down Parallel Pattern Recognition System for 3D Images) for soft tissues in 3D MRI head images, based on a model-driven analysis algorithm. Building on the TOPS system we developed previously, two concepts were considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at the higher level; the other is a parallel image processing structure used to extract plural candidate regions for a target entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system that includes 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation at the lower image processing level. The TOPS-3D system applied to 3D MRI head images consists of three levels: the first and second levels form the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data thanks to the use of model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)

  4. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    OpenAIRE

    Tøndel, Kristin; Niederer, Steven A.; Land, Sander; Smith, Nicolas P

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities...

  5. Automatic and quantitative measurement of collagen gel contraction using model-guided segmentation

    Science.gov (United States)

    Chen, Hsin-Chen; Yang, Tai-Hua; Thoreson, Andrew R.; Zhao, Chunfeng; Amadio, Peter C.; Sun, Yung-Nien; Su, Fong-Chin; An, Kai-Nan

    2013-08-01

    Quantitative measurement of collagen gel contraction plays a critical role in the field of tissue engineering because it provides spatial-temporal assessment (e.g., changes of gel area and diameter during the contraction process) reflecting the cell behavior and tissue material properties. So far the assessment of collagen gels relies on manual segmentation, which is time-consuming and suffers from serious intra- and inter-observer variability. In this study, we propose an automatic method combining various image processing techniques to resolve these problems. The proposed method first detects the maximal feasible contraction range of circular references (e.g., culture dish) and avoids the interference of irrelevant objects in the given image. Then, a three-step color conversion strategy is applied to normalize and enhance the contrast between the gel and background. We subsequently introduce a deformable circular model which utilizes regional intensity contrast and circular shape constraint to locate the gel boundary. An adaptive weighting scheme was employed to coordinate the model behavior, so that the proposed system can overcome variations of gel boundary appearances at different contraction stages. Two measurements of collagen gels (i.e., area and diameter) can readily be obtained based on the segmentation results. Experimental results, including 120 gel images for accuracy validation, showed high agreement between the proposed method and manual segmentation with an average dice similarity coefficient larger than 0.95. The results also demonstrated obvious improvement in gel contours obtained by the proposed method over two popular, generic segmentation methods.

  6. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    Science.gov (United States)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.

    2014-03-01

    Pectus excavatum is the most common deformity of the thorax, and pre-operative diagnosis usually involves Computed Tomography (CT) examination. Aiming to eliminate the high CT radiation exposure, this work presents a new methodology for replacing CT with a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT requires the determination of the ribs' external outline at the point of maximum sternal depression, where the prosthesis is placed, based on chest-wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. The Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant learning algorithms were used. The training procedure was performed using soft-tissue thicknesses determined with image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outlines in laser-scanner data from 20 patients. Tests revealed that rib positions can be estimated with an average error of about 6.82 +/- 5.7 mm for the left and right sides of the patient. Such an error range is well below that of current manual prosthesis modeling (11.7 +/- 4.01 mm), even without CT imaging, indicating a considerable step towards the replacement of CT by a 3D scanner for prosthesis personalization.

  7. CRYPTOGRAPHIC SECURE CLOUD STORAGE MODEL WITH ANONYMOUS AUTHENTICATION AND AUTOMATIC FILE RECOVERY

    Directory of Open Access Journals (Sweden)

    Sowmiya Murthy

    2014-10-01

    Full Text Available We propose a secure cloud storage model that addresses security and storage issues for cloud computing environments. Security is achieved by anonymous authentication, which ensures that cloud users remain anonymous while being duly authenticated. For this goal, we propose a digital-signature-based authentication scheme with a decentralized architecture for distributed key management with multiple Key Distribution Centers. A homomorphic encryption scheme using the Paillier public key cryptosystem is used for encrypting the data stored in the cloud. We incorporate a query-driven approach for validating the access policies defined by an individual user for his or her data, i.e. access is granted to a requester only if his credentials match the hidden access policy. Further, since data is vulnerable to loss or damage due to the vagaries of the network, we propose an automatic retrieval mechanism in which lost data is recovered by data replication and file replacement with a string matching algorithm. We describe a prototype implementation of our proposed model.
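
    The record names the Paillier cryptosystem; the toy sketch below illustrates its additive homomorphism with tiny primes (no padding, no realistic key sizes, never for production use): multiplying two ciphertexts modulo n^2 decrypts to the sum of the plaintexts.

        import math, random

        p, q = 293, 433                        # toy primes
        n, n2 = p * q, (p * q) ** 2
        g = n + 1
        lam = math.lcm(p - 1, q - 1)
        L = lambda u: (u - 1) // n
        mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse of L(g^lambda mod n^2)

        def encrypt(m):
            r = random.randrange(1, n)
            while math.gcd(r, n) != 1:
                r = random.randrange(1, n)
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

        def decrypt(c):
            return (L(pow(c, lam, n2)) * mu) % n

        c1, c2 = encrypt(41), encrypt(17)
        print(decrypt((c1 * c2) % n2))         # prints 58: the addition was carried out on ciphertexts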

  8. Development of a new model to evaluate the probability of automatic plant trips for pressurized water reactors

    International Nuclear Information System (INIS)

    In order to improve the reliability of plant operations for pressurized water reactors, a new fault tree model was developed to evaluate the probability of automatic plant trips. This model consists of fault trees for sixteen systems. It has the following features: (1) human errors and transmission line incidents are modeled from existing data, (2) the repair of failed components is taken into account when calculating component failure probabilities, and (3) uncertainty analysis is performed with an exact method. The present results confirm that the obtained upper and lower bounds of the automatic plant trip probability lie within the range of existing data in Japan, so the model is applicable to the prediction of plant performance and reliability. (author)

  9. Multi-objective automatic calibration of hydrodynamic models - development of the concept and an application in the Mekong Delta

    OpenAIRE

    Nguyen, Viet-Dung

    2011-01-01

    Automatic and multi-objective calibration of hydrodynamic models is still underdeveloped, in particular in comparison with other fields such as hydrological modeling. This is for several reasons: lack of appropriate data, the high computational time demanded, and the lack of a suitable framework. These aspects are aggravated in large-scale applications. There are recent developments, however, that improve both the data and the computing constraints. Remote sensing, especially radar-based techn...

  10. A transition-constrained discrete hidden Markov model for automatic sleep staging

    Directory of Open Access Journals (Sweden)

    Pan Shing-Tai

    2012-08-01

    Full Text Available Abstract Background Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to the Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process; therefore, the development of an automatic sleep scoring method is desirable. Method The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectral analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross validation was performed during this experiment. Results Overall agreement between the expert and the presented results is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurately classified stage was SWS (94.9%), and the least accurately classified stage was S1. Conclusion The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
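
    A hedged sketch of the transition-constraint idea only (the stage set, probabilities and observations are illustrative, not the trained DHMM of the study): disallowed stage jumps receive zero probability in the transition matrix, and Viterbi decoding then selects the best stage path among the allowed ones.

        import numpy as np

        states = ["Wake", "S1", "S2", "SWS", "REM"]
        A = np.array([[.7, .3, 0., 0., 0.],          # e.g. Wake cannot jump straight to SWS
                      [.1, .5, .4, 0., 0.],
                      [0., .1, .6, .2, .1],
                      [0., 0., .3, .7, 0.],
                      [.1, .1, .2, 0., .6]])
        B = np.full((5, 4), .25)                     # emission probs over 4 discrete symbols (flat here)
        pi = np.array([.9, .1, 0., 0., 0.])

        def viterbi(obs):
            logA, logB = np.log(A + 1e-12), np.log(B + 1e-12)
            delta = np.log(pi + 1e-12) + logB[:, obs[0]]
            back = []
            for o in obs[1:]:
                trans = delta[:, None] + logA        # score of moving from each state to each state
                back.append(trans.argmax(axis=0))
                delta = trans.max(axis=0) + logB[:, o]
            path = [int(delta.argmax())]
            for bp in reversed(back):                # backtrack the best allowed path
                path.append(int(bp[path[-1]]))
            return [states[s] for s in reversed(path)]

        print(viterbi([0, 1, 2, 3, 3, 1]))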

  11. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    Science.gov (United States)

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Curo

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR and became very popular during the last decade. Natural and man-made objects of cities, such as trees and buildings, are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced into the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' using aerial colour images of an urban area of the town of Engen in Germany.

  12. Automatic hippocampus localization in histological images using PSO-based deformable models

    OpenAIRE

    Roberto Ugolotti; Pablo Mesejo; Stefano Cagnoni; Mario Giacobini; Ferdinando Di Cunto

    2011-01-01

    The Allen Brain Atlas (ABA) is a cellular-resolution, genome-wide map of gene expression in the mouse brain which allows users to compare gene expression patterns in neuroanatomical structures. The correct localization of the structures is the first step to carry on this comparison in an automatic way. In this paper we present a completely automatic tool for the localization of the hippocampus that can be easily adapted also to other subcortical structures. This goal is achieved in two distin...

  13. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    Science.gov (United States)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model, which enables the estimation of a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated with the desired parameter. Initially, a pre-processing algorithm takes a single spectral signature as input and transforms it according to the SPPA function. A k-step combination of SPPAs uses k pre-processing algorithms serially: the result of each SPPA is used as input to the next SPPA, and so on until the k desired pre-processed signatures are obtained. These signatures are then used as input to three different regression methods: Normalized band Difference Regression (NDR), Multiple Linear Regression (MLR) and Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs), one for each regression method, are used to select the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models; the evaluation indicates not only the optimum SPPA combination but also the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory given its low complexity. The MLR method showed severe drawbacks due to noise-induced collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance. The selected pre-processing algorithms differed for each regression method, since each regression method handles the explanatory variables in a different way.
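
    A hedged sketch of the search idea (the SPPAs, data and fitness are toy stand-ins, and a single linear model replaces the NDR/MLR/PLSR trio): a simple genetic algorithm evolves k-step chains of spectral pre-processing functions and ranks them by the RMS error of the resulting regression.

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.random((120, 60))                          # spectra
        y = X[:, 10] ** 2 + 0.1 * rng.normal(size=120)     # parameter to predict (e.g. SOM)

        SPPAS = [lambda s: s,                                                                    # identity
                 lambda s: np.gradient(s, axis=1),                                               # first derivative
                 lambda s: (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True),   # SNV
                 lambda s: np.log1p(np.abs(s))]                                                  # log transform

        def rmse(chain):
            Z = X
            for idx in chain:                              # apply the k pre-processing steps serially
                Z = SPPAS[idx](Z)
            design = np.c_[Z, np.ones(len(Z))]
            coef, *_ = np.linalg.lstsq(design, y, rcond=None)
            return np.sqrt(np.mean((design @ coef - y) ** 2))

        k = 3
        pop = [tuple(rng.integers(0, len(SPPAS), k)) for _ in range(20)]
        for _ in range(30):                                # generations
            pop.sort(key=rmse)
            parents = pop[:10]
            children = [tuple(p[i] if rng.random() < .5 else q[i] for i in range(k))
                        for p, q in zip(parents, parents[::-1])]                     # uniform crossover
            children = [tuple(c if rng.random() > .1 else rng.integers(0, len(SPPAS))
                              for c in child) for child in children]                 # mutation
            pop = parents + children
        best = min(pop, key=rmse)
        print("best chain:", best, "RMSE:", rmse(best))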

  14. Risk assessment models in genetics clinic for array comparative genomic hybridization: Clinical information can be used to predict the likelihood of an abnormal result in patients

    Science.gov (United States)

    Marano, Rachel M.; Mercurio, Laura; Kanter, Rebecca; Doyle, Richard; Abuelo, Dianne; Morrow, Eric M.; Shur, Natasha

    2013-01-01

    Array comparative genomic hybridization (aCGH) testing can diagnose chromosomal microdeletions and duplications too small to be detected by conventional cytogenetic techniques. We need to consider which patients are more likely to receive a diagnosis from aCGH testing, versus patients who have a lower likelihood and may benefit from broader genome-wide scanning. We retrospectively reviewed the charts of a population of 200 patients, 117 boys and 83 girls, who underwent aCGH testing in the Genetics Clinic at Rhode Island Hospital between 1 January 2008 and 31 December 2010. Data collected included sex, age at initial clinical presentation, aCGH result, history of seizures, autism, dysmorphic features, global developmental delay/intellectual disability, hypotonia and failure to thrive. aCGH analysis revealed abnormal results in 34 patients (17%) and variants of unknown significance in 24 (12%). Patients with three or more clinical diagnoses had a 25.0% incidence of abnormal aCGH findings, while patients with two or fewer clinical diagnoses had a 12.5% incidence. Currently, we provide families with a 10-30% estimated chance of obtaining a diagnosis with aCGH testing. With increased clinical complexity, patients have an increased probability of an abnormal aCGH result, which allows us to provide individualized risk estimates for each patient.

  15. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and they may look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  16. Monte Carlo maximum likelihood estimation for discretely observed diffusion processes

    OpenAIRE

    Beskos, Alexandros; Papaspiliopoulos, Omiros; Roberts, Gareth

    2009-01-01

    This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models, and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte C...

  17. Simulated Maximum Likelihood using Tilted Importance Sampling

    OpenAIRE

    Christian N. Brinch

    2008-01-01

    Abstract: This paper develops the important distinction between tilted and simple importance sampling as methods for simulating likelihood functions for use in simulated maximum likelihood. It is shown that tilted importance sampling removes a lower bound to simulation error for given importance sample size that is inherent in simulated maximum likelihood using simple importance sampling, the main method for simulating likelihood functions in the statistics literature. In addit...

  18. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Science.gov (United States)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.

  19. Assessing Compatibility of Direct Detection Data: Halo-Independent Global Likelihood Analyses

    OpenAIRE

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-01-01

    We present two different halo-independent methods utilizing a global maximum likelihood that can assess the compatibility of dark matter direct detection data given a particular dark matter model. The global likelihood we use is comprised of at least one extended likelihood and an arbitrary number of Poisson or Gaussian likelihoods. In the first method we find the global best fit halo function and construct a two sided pointwise confidence band, which can then be compared with those derived f...

  20. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  1. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    Science.gov (United States)

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into an increasingly important method in the fields of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes limits their use for individual cases and increases computational costs. To cope with these requirements, different methods for numerical screw modelling have been investigated to improve their application diversity. As an example, fixation was performed for stabilization of a large segmental femoral bone defect with an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study: without screw modelling, with screws as solid elements, and with screws as structural elements. The latter offers the possibility of implementing automatically generated screws with variable geometry on arbitrary FE models. Structural screws were generated parametrically by a Python script for automatic generation in the FE software Abaqus/CAE, on both a tetrahedral and a hexahedral meshed femur. The accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both the deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% by using hexahedral instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with screws in the adjacent

  2. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    Directory of Open Access Journals (Sweden)

    Jan Wieding

    Full Text Available The use of finite element analysis (FEA) has grown into an increasingly important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes represents a limitation for individual cases and increases computational costs. To cope with these requirements, different methods for numerical screw modelling have been investigated to broaden its range of application. As an example, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. without screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility to implement automatically generated screws with variable geometry on arbitrary FE models. Structural screws were generated parametrically by a Python script for automatic generation in the FE software Abaqus/CAE, on both a tetrahedrally and a hexahedrally meshed femur. The accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both the deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% when hexahedral elements were used instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with
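
    As a hedged illustration of the structural-screw idea described above, the sketch below generates nodes and beam-element connectivity along a screw axis in plain Python. It is a generic stand-in, not the actual Abaqus/CAE script used in the study; the function name and the node/element numbering scheme are assumptions.

    ```python
    # Minimal sketch of parametric screw generation as structural (beam) elements:
    # nodes are placed along the screw axis between the plate hole and the bone,
    # and consecutive nodes are connected by beam elements. Generic stand-in only.
    import numpy as np

    def generate_structural_screw(head_xyz, tip_xyz, n_elements=10, start_node=1):
        head, tip = np.asarray(head_xyz, float), np.asarray(tip_xyz, float)
        # nodes evenly spaced along the screw axis
        nodes = {start_node + i: head + (tip - head) * i / n_elements
                 for i in range(n_elements + 1)}
        # beam connectivity: (element_id, node_a, node_b)
        elements = [(i + 1, start_node + i, start_node + i + 1)
                    for i in range(n_elements)]
        return nodes, elements

    # Example: a 40 mm screw along the z-axis split into 8 beam elements
    nodes, elems = generate_structural_screw((0, 0, 0), (0, 0, 40), n_elements=8)
    ```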

  3. Model of automatic fuel management for the Atucha II nuclear central with the PUMA IV code

    International Nuclear Information System (INIS)

    The Atucha II plant is a heavy-water, natural-uranium power station. For this reason, and because of the small reactivity excess of this type of reactor, continuous on-power refuelling is required (for Atucha II, approximately every 0.7 days). To keep such plants in operation and to achieve good fuel economy, different fuel management schemes, defined by the zones and paths along which the fuel is moved inside the core, have to be tested; most of these schemes must be tested over long periods in order to verify the behaviour of the power station and the discharge burnup of the fuel elements. For this work it is of great help to have a program that applies the criteria to be followed at each replacement, using the paths and zones of the management scheme under test, and thus obtains as results the compliance with regulations over time and the average discharge burnup of the fuel elements, this last figure being essential for the plant operator. Otherwise, a physicist experienced in fuel management must test each possible scheme, even those that can be quickly discarded because they do not fulfil the regulatory standards or because their average discharge burnup is too low. It is therefore of fundamental help that an automatic model tests the different schemes so that, in the end, the physicist analyzes only the most important cases. The model in question not only allows different types of paths and zones for fuel management to be programmed, but also foresees the possibility of disabling some of the criteria. (Author)

  4. Study of Automatic Forest Road Design Model Considering Shallow Landslides with LiDAR Data of Funyu Experimental Forest

    OpenAIRE

    Saito, Masashi; Goshima, Msahiro; Aruga, Kazuhiro; Matsue, Keigo; Shuin, Yasuhiro; Tasaka, Toshiaki

    2013-01-01

    In this study, a model to automatically design a forest road considering shallow landslides using LiDAR data was examined. First, in order to develop a shallow landslide risk map of the Funyu Experimental Forest, a slope stability analysis was carried out using the infinite slope stability analysis formula. The soil depth was surveyed at 167 points using simple penetration tests, and the frequency distributions of the soil depth were estimated as logarithmic normal distributions. A soil depth...

  5. Automatic earthquake detection and classification with continuous hidden Markov models: a possible tool for monitoring Las Canadas caldera in Tenerife

    International Nuclear Information System (INIS)

    A possible interaction of (volcano-)tectonic earthquakes with the continuous seismic noise recorded on the volcanic island of Tenerife was recently suggested, but existing catalogues seem to be far from self-consistent, calling for the development of automatic detection and classification algorithms. In this work we propose the adoption of a methodology based on Hidden Markov Models (HMMs), already widely used in other fields, such as speech classification.
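
    The record above only names the HMM methodology, so the following is a hedged sketch of the usual train-one-model-per-class, pick-the-highest-likelihood pattern using the hmmlearn package; the feature choice, number of states and class labels are assumptions, not details from the paper.

    ```python
    # Minimal sketch of HMM-based event classification, assuming feature
    # sequences (e.g., per-frame spectral amplitudes) are already extracted.
    import numpy as np
    from hmmlearn import hmm

    def train_class_models(training_sets, n_states=4, seed=0):
        """training_sets: dict mapping class name -> list of (T_i, D) feature arrays."""
        models = {}
        for label, seqs in training_sets.items():
            X = np.vstack(seqs)                      # concatenate sequences
            lengths = [len(s) for s in seqs]         # tell hmmlearn where each one ends
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=100, random_state=seed)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify(models, features):
        """Assign the class whose HMM gives the highest log-likelihood."""
        scores = {label: m.score(features) for label, m in models.items()}
        return max(scores, key=scores.get), scores
    ```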

  6. Accurate determination of phase arrival times using autoregressive likelihood estimation

    OpenAIRE

    G. Kvaerna

    1994-01-01

    We have investigated the potential automatic use of an onset picker based on autoregressive likelihood estimation. Both a single component version and a three component version of this method have been tested on data from events located in the Khibiny Massif of the Kola peninsula, recorded at the Apatity array, the Apatity three component station and the ARCESS array. Using this method, we have been able to estimate onset times to an accuracy (standard deviation) of about 0.05 s for P-phases ...
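
    As an illustration of likelihood-based onset picking, here is a minimal sketch of a two-segment AIC picker in the same spirit; it approximates the segment likelihoods by sample variances (a Maeda-type AIC) rather than fitting full autoregressive models as in the method described above.

    ```python
    # Minimal sketch of an AIC-style onset picker: the onset is the split point
    # that minimises the combined two-segment AIC of the trace.
    import numpy as np

    def aic_onset(x, margin=10):
        """Return the sample index that minimises the two-segment AIC."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        ks = np.arange(margin, n - margin)
        aic = np.array([
            k * np.log(np.var(x[:k]) + 1e-12)
            + (n - k - 1) * np.log(np.var(x[k:]) + 1e-12)
            for k in ks
        ])
        return ks[np.argmin(aic)]

    # Example: noise followed by a higher-amplitude arrival
    rng = np.random.default_rng(0)
    trace = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 5, 500)])
    print(aic_onset(trace))   # close to 500
    ```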

  7. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Melchior, M [Terapia Radiante S.A., La Plata, Buenos Aires (Argentina); Salinas Aranda, F [Vidt Centro Medico, Ciudad Autonoma De Buenos Aires (Argentina); 21st Century Oncology, Ft. Myers, FL (United States); Sciutto, S [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Dodat, D [Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina); Larragueta, N [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina)

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MatLab integrated development environment. TPS calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate dose distribution fitting: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at the points proposed in Appendix E of TECDOC-1583 confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use

  8. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    International Nuclear Information System (INIS)

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MatLab integrated development environment. TPS calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. Experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate dose distribution fitting: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS calculated doses with high accuracy in both open and wedged beams. Depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at the points proposed in Appendix E of TECDOC-1583 confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use
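
    To make the 3%/3 mm gamma criterion mentioned above concrete, here is a minimal 1D sketch of a global gamma comparison between a reference (measured) and an evaluated (TPS-calculated) depth-dose curve; it is a simplified stand-in for the full analysis performed by the software described in the record.

    ```python
    # Minimal sketch of a 1D global gamma index (3% dose, 3 mm distance-to-agreement).
    import numpy as np

    def gamma_1d(pos_ref, dose_ref, pos_eval, dose_eval, dd=0.03, dta=3.0):
        """Return the gamma index at each reference point (dose normalised to the max)."""
        dmax = dose_ref.max()
        gammas = np.empty(len(pos_ref))
        for i, (r, d) in enumerate(zip(pos_ref, dose_ref)):
            dist2 = ((pos_eval - r) / dta) ** 2
            dose2 = ((dose_eval - d) / (dd * dmax)) ** 2
            gammas[i] = np.sqrt(np.min(dist2 + dose2))
        return gammas

    # Example with synthetic depth-dose curves (depths in mm)
    z = np.linspace(0, 300, 301)
    ref = np.exp(-z / 150.0)
    shifted = np.exp(-(z - 1.0) / 150.0)                 # 1 mm shifted copy
    print(np.mean(gamma_1d(z, ref, z, shifted) < 1.0))   # gamma pass rate
    ```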

  9. Automatic Segmentation of Vertebrae from Radiographs: A Sample-Driven Active Shape Model Approach

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads; Lillholm, Martin

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...

  10. Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement

    Science.gov (United States)

    Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos

    2015-01-01

    How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…

  11. 76 FR 31454 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed

    Science.gov (United States)

    2011-06-01

    ... for Gulfstream GVI airplanes was published in the Federal Register on February 16, 2011 (76 FR 8917...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... high speed protection system. These special conditions contain the additional safety standards that...

  12. 76 FR 8917 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed

    Science.gov (United States)

    2011-02-16

    ...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... design features include a high speed protection system. These proposed special conditions contain the... Design Features The GVI is equipped with a high speed protection system that limits nose down...

  13. Performance Modelling of Automatic Identification System with Extended Field of View

    DEFF Research Database (Denmark)

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard;

    2010-01-01

    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV is...

  14. Fusing moving average model and stationary wavelet decomposition for automatic incident detection: case study of Tokyo Expressway

    Directory of Open Access Journals (Sweden)

    Qinghua Liu

    2014-12-01

    Full Text Available Traffic congestion is a growing problem in urban areas all over the world. The transport sector has been studying intelligent transportation systems intensively for automatic detection. The functionality of automatic incident detection on expressways is a primary objective of advanced traffic management systems. In order to save lives and prevent secondary incidents, accurate and prompt incident detection is necessary. This paper presents a methodology that integrates a moving average (MA) model with stationary wavelet decomposition for automatic incident detection, in which the layer coefficients are extracted from the difference between upstream and downstream occupancy. Unlike other wavelet-based methods presented before, it first smooths the raw data with the MA model. It then decomposes the signal with the stationary wavelet transform, which allows accurate reconstruction and does not shift the transform coefficients. Thus, it can detect incidents more accurately. The threshold that triggers an incident alarm is also adjusted according to normal traffic conditions with congestion. The methodology is validated with real data from Tokyo Expressway ultrasonic sensors. Experimental results show that it is accurate and effective, and that it can differentiate traffic accidents from other conditions such as recurring traffic congestion.
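
    The following is a hedged sketch of the MA-plus-stationary-wavelet idea: smooth the upstream/downstream occupancy difference with a moving average, apply an undecimated wavelet decomposition, and flag samples whose detail coefficients exceed a robust threshold. The wavelet, window length and threshold rule are assumptions, not the parameters of the paper.

    ```python
    # Minimal sketch: MA pre-smoothing followed by a stationary wavelet transform
    # (PyWavelets) and a robust MAD-based threshold on the detail coefficients.
    import numpy as np
    import pywt

    def detect_incidents(occ_up, occ_down, window=5, wavelet="db4", k=4.0):
        """Return indices of samples flagged as possible incidents."""
        diff = np.asarray(occ_up, float) - np.asarray(occ_down, float)
        ma = np.convolve(diff, np.ones(window) / window, mode="same")  # MA smoothing
        # the stationary wavelet transform needs an even-length signal
        x = np.append(ma, ma[-1]) if len(ma) % 2 else ma
        (cA, cD), = pywt.swt(x, wavelet, level=1)
        cD = cD[:len(ma)]
        mad = np.median(np.abs(cD - np.median(cD))) + 1e-12
        return np.flatnonzero(np.abs(cD - np.median(cD)) > k * mad)
    ```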

  15. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

    OpenAIRE

    Luan Yihui; Nunez-Iglesias Juan; Wang Wenhui; Sun Fengzhu

    2009-01-01

    Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results ...

  16. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    Science.gov (United States)

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  17. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    International Nuclear Information System (INIS)

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests

  18. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    International Nuclear Information System (INIS)

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation
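
    As a hedged sketch of the patient-specific PCA shape model described above, the code below fits PCA to vectorised training contours, keeps modes until the mean reconstruction error of the training shapes drops below 0.1 cm, and generates new shapes from mode weights; point correspondence between contours is assumed to be given.

    ```python
    # Minimal sketch of a PCA-based bladder shape model (sklearn).
    import numpy as np
    from sklearn.decomposition import PCA

    def build_shape_model(training_shapes, tol_cm=0.1):
        """training_shapes: (n_scans, n_points, 3) corresponding surface points in cm."""
        X = training_shapes.reshape(len(training_shapes), -1)
        for k in range(1, len(training_shapes)):
            pca = PCA(n_components=k).fit(X)
            recon = pca.inverse_transform(pca.transform(X))
            err = np.linalg.norm((recon - X).reshape(X.shape[0], -1, 3), axis=2).mean()
            if err < tol_cm:          # keep the smallest number of modes that suffices
                return pca
        return pca                    # fall back to the maximum number of modes

    def deform(pca, weights):
        """Generate a shape from mode weights (the quantities optimised per CBCT)."""
        return pca.inverse_transform(np.atleast_2d(weights)).reshape(-1, 3)
    ```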

  19. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    Science.gov (United States)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47,000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important, as these estimates are used today for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected from a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of the communal intensity by averaging the SQIs in each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer practical considering the increasing number of testimonies at BCSF; it can, however, take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave intermediate scores (between 50% and 60% of SQIs correctly determined, and 35% to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because the BCSF already has more than 47,000 forms in its database and because their questions and answers are well suited to statistical analysis. The ranking models could then be used as automatic methods constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to
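
    A minimal sketch of the three ranking/classification approaches named above, applied to encoded questionnaire answers with expert SQIs as labels; the feature encoding and model settings are assumptions, and the scikit-learn estimators simply stand in for the multinomial logit, DISQUAL and SVM methods.

    ```python
    # Minimal sketch: cross-validated comparison of three classifiers on
    # questionnaire features X with expert SQI values y as labels.
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def compare_sqi_models(X, y, cv=5):
        models = {
            "multinomial_logit": LogisticRegression(max_iter=1000),
            "discriminant": LinearDiscriminantAnalysis(),
            "svm": SVC(kernel="rbf", C=1.0),
        }
        # mean cross-validated accuracy against the expert SQIs
        return {name: cross_val_score(m, X, y, cv=cv).mean()
                for name, m in models.items()}
    ```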

  20. Automated segmentation method for the 3D ultrasound carotid image based on geometrically deformable model with automatic merge function

    Science.gov (United States)

    Li, Xiang; Wang, Zigang; Lu, Hongbing; Liang, Zhengrong

    2002-05-01

    Stenosis of the carotid is the most common cause of stroke. Accurate measurement of the volume of the carotid and visualization of its shape are helpful in improving diagnosis and minimizing the variability of assessment of carotid disease. Due to the complex anatomic structure of the carotid, it is normally mandatory to define the initial contours in every slice, which is very difficult and usually requires tedious manual operations. The purpose of this paper is to propose a segmentation method which automatically provides the contour of the carotid from the 3-D ultrasound image and requires minimal user interaction. In this paper, we developed the Geometrically Deformable Model (GDM) with an automatic merge function. In our algorithm, only two initial contours in the topmost slice and four parameters are needed in advance. A simulated 3-D ultrasound image was used to test our algorithm. The 3-D display of the carotid obtained by our algorithm showed an almost identical shape to the true 3-D carotid image. In addition, experimental results also demonstrated that the error of the carotid volume measurement based on three different initial contours is less than 1%, and that the method is very fast.

  1. Face Prediction Model for an Automatic Age-invariant Face Recognition System

    OpenAIRE

    Yadav, Poonam

    2015-01-01

    Automated face recognition and identification software is becoming part of our daily life; it finds its place not only in Facebook's auto photo tagging, Apple's iPhoto, Google's Picasa and Microsoft's Kinect, but also in the Homeland Security Department's dedicated biometric face detection systems. Most of these automatic face identification systems fail where the effects of aging come into the picture. Little work exists in the literature on the subject of face prediction that accounts for agin...

  2. Automatic meshing method for optimisation of the fusion zone dimensions in Finite Element models of welds

    OpenAIRE

    DECROOS Koenraad; OHMS Carsten; Petrov, Roumen; Seefeldt, Marc; Verhaeghe, Frederik; Kestens, Leo

    2013-01-01

    A new method has been designed to automatically adapt the geometry of the fusion zone of a weld according to the temperature calculations when the thermal welding heat source parameters are known. In the material definition in a Finite Element code for welding stress calculations, the fusion zone material has different properties than the base material since, among others, the temperature at which the material is stress free is the melting temperature instead of room temperature. In this work...

  3. Modeling of an automatic CAD-based feature recognition and retrieval system for group technology application

    OpenAIRE

    Moskalenko, Stanislav

    2014-01-01

    In recent times, many researchers have come up with new and different approaches and means for Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) integration. Computer-Aided Process Planning (CAPP) is considered to be a bridge that connects these two technologies. CAPP may involve an important technique such as automatic feature extraction - a procedure engaged in generating the process plans used to produce a designed part. Also in terms of CAD, the feature extraction ...

  4. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    Science.gov (United States)

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma but detection at its earliest stage and subsequent treatment can aid patients to prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection but this method requires manual post-imaging modifications that are time-consuming and subjective to image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer aided approach for automatic glaucoma detection based on Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric based properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions including automatic localisation of optic disc, automatic segmentation of disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4 % and accuracy of detection of suspicion of glaucoma in SLO images is 93.9 %. PMID:27086033

  5. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are...

  6. Profile likelihood maps of a 15-dimensional MSSM

    NARCIS (Netherlands)

    C. Strege; G. Bertone; G.J. Besjes; S. Caron; R. Ruiz de Austri; A. Strubig; R. Trotta

    2014-01-01

    We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter

  7. Automatic anatomical labeling of the complete cerebral vasculature in mouse models.

    Science.gov (United States)

    Ghanavati, Sahar; Lerch, Jason P; Sled, John G

    2014-07-15

    Study of cerebral vascular structure broadens our understanding of underlying variations, such as pathologies that can lead to cerebrovascular disorders. The development of high resolution 3D imaging modalities has provided us with the raw material to study the blood vessels in small animals such as mice. However, the high complexity and 3D nature of the cerebral vasculature make comparison and analysis of the vessels difficult, time-consuming and laborious. Here we present a framework for automated segmentation and recognition of the cerebral vessels in high resolution 3D images that addresses this need. The vasculature is segmented by following vessel center lines starting from automatically generated seeds and the vascular structure is represented as a graph. Each vessel segment is represented as an edge in the graph and has local features such as length, diameter, and direction, and relational features representing the connectivity of the vessel segments. Using these features, each edge in the graph is automatically labeled with its anatomical name using a stochastic relaxation algorithm. We have validated our method on micro-CT images of C57Bl/6J mice. A leave-one-out test performed on the labeled data set demonstrated the recognition rate for all vessels including major named vessels and their minor branches to be >75%. This automatic segmentation and recognition methods facilitate the comparison of blood vessels in large populations of subjects and allow us to study cerebrovascular variations. PMID:24680868

  8. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointe- gration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the...... equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the coin- tegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegation does...

  9. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.; Thomsen, Ole Østergaard; Ljung, T.; Riis, L.; Munkholm, P.

    2010-01-01

    Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a...

  10. Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours

    Science.gov (United States)

    Ahmadi, Salman; Zoej, M. J. Valadan; Ebadi, Hamid; Moghaddam, Hamid Abrishami; Mohammadzadeh, Ali

    2010-06-01

    The main objective of this research is to present a new method for building boundary detection and extraction based on the active contour model. Classical models of this type are associated with several shortcomings: they require extensive initialization, they are sensitive to noise, and adjustment issues often become problematic with complex images. In this research a new active contour model has been proposed that is optimized for automatic building extraction. This new active contour model, in comparison to the classical ones, can detect and extract building boundaries more accurately, and is capable of avoiding detection of the boundaries of features in the neighborhood of buildings, such as streets and trees. Finally, the detected building boundaries are generalized to obtain a regular shape for each building. Tests with our proposed model demonstrate excellent accuracy in terms of building boundary extraction. However, due to the radiometric similarity between building roofs and the image background, our system fails to recognize a few buildings.
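
    As a baseline illustration of snake-based boundary extraction (the classical model this record improves upon), here is a minimal sketch using scikit-image with a circular initialisation around a candidate building; the smoothing and snake parameters are illustrative assumptions.

    ```python
    # Minimal sketch of classical active-contour (snake) boundary extraction.
    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    def extract_boundary(image, center_rc, radius, n_points=200):
        """Evolve a circular snake around (row, col) = center_rc on a smoothed image."""
        s = np.linspace(0, 2 * np.pi, n_points)
        init = np.column_stack([center_rc[0] + radius * np.sin(s),
                                center_rc[1] + radius * np.cos(s)])
        smoothed = gaussian(image, sigma=2, preserve_range=True)
        return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
    ```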

  11. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent.

    Science.gov (United States)

    Lohse, Konrad; Chmelik, Martin; Martin, Simon H; Barton, Nicholas H

    2016-02-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proved difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple, recursive form, the size of such likelihood calculations explodes quickly with the number of individuals and applications of this framework have so far been mainly limited to small samples (pairs and triplets) for which the GF can be written by hand. Here we investigate several strategies for exploiting the inherent symmetries of the coalescent. In particular, we show that the GF of genealogies can be decomposed into a set of equivalence classes that allows likelihood calculations from nontrivial samples. Using this strategy, we automated blockwise likelihood calculations for a general set of demographic scenarios in Mathematica. These histories may involve population size changes, continuous migration, discrete divergence, and admixture between multiple populations. To give a concrete example, we calculate the likelihood for a model of isolation with migration (IM), assuming two diploid samples without phase and outgroup information. We demonstrate the new inference scheme with an analysis of two individual butterfly genomes from the sister species Heliconius melpomene rosina and H. cydno. PMID:26715666

  12. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    OpenAIRE

    Seung Oh Lee; Yongchul Shin; Kyudong Yeo; Younghun Jung; Venkatesh Merwade

    2013-01-01

    Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, provided useful tools for the analysis of the nature of floods. Accordingly, this study attempts to estimate a flood discharge using the generalized likelihoo...
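
    To make the GLUE step concrete, here is a hedged sketch of the generic procedure: Monte Carlo sampling of parameters, an informal likelihood measure (here Nash-Sutcliffe efficiency), rejection of non-behavioural runs, and uncertainty bounds over the behavioural ensemble. `run_model` and `sample_params` are hypothetical stand-ins for the 1D hydraulic model and its parameter priors.

    ```python
    # Minimal sketch of Generalized Likelihood Uncertainty Estimation (GLUE).
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency used as the informal likelihood measure."""
        obs = np.asarray(obs, float)
        return 1.0 - np.sum((np.asarray(sim) - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def glue(run_model, sample_params, obs, n_runs=2000, threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        sims, weights = [], []
        for _ in range(n_runs):
            theta = sample_params(rng)     # draw one parameter set from its prior
            sim = run_model(theta)         # hypothetical hydraulic-model wrapper
            score = nse(sim, obs)
            if score > threshold:          # keep behavioural runs only
                sims.append(sim)
                weights.append(score)
        sims = np.array(sims)
        weights = np.array(weights) / np.sum(weights)
        # 5-95% bounds over the behavioural ensemble (unweighted here for brevity;
        # GLUE proper weights each run by its normalised likelihood)
        return np.percentile(sims, 5, axis=0), np.percentile(sims, 95, axis=0), weights
    ```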

  13. Sediment characterization in intertidal zone of the Bourgneuf bay using the Automatic Modified Gaussian Model (AMGM)

    Science.gov (United States)

    Verpoorter, C.; Carrère, V.; Combe, J.-P.; Le Corre, L.

    2009-04-01

    Understanding the uppermost layer of cohesive sediment beds provides important clues for predicting future sediment behaviour. Sediment consolidation, grain size, water content and biological slimes (EPS: extracellular polymeric substances) were found to be significant factors influencing erosion resistance. The surface spectral signatures of mudflat sediments reflect such bio-geophysical parameters. The overall shape of the spectrum, also called the continuum, is a function of grain size and moisture content. Composition translates into specific absorption features. Finally, the chlorophyll-a concentration, derived from the strength of the absorption at 675 nm, is a good proxy for biofilm biomass. The Bourgneuf Bay site, south of the Loire river estuary, France, was chosen to represent a range of physical and biological influences on sediment erodability. Field spectral measurements and samples of sediments were collected during various field campaigns. An ASD Fieldspec 3 spectroradiometer was used to produce sediment reflectance hyperspectra in the wavelength range 350-2500 nm. We have developed an automatic procedure based on the Modified Gaussian Model that uses, as a first step, Spectroscopic Derivative Analysis (SDA) to extract the bio-geophysical properties of mudflat sediments from the spectra (Verpoorter et al., 2007). This AMGM algorithm is a powerful tool to deconvolve spectra into two components: Gaussian curves for the absorption bands, and a straight line in the wavenumber range for the continuum. We are investigating the possibility of including other approaches, such as the inverse Gaussian band centred on 2800 nm initially developed by Whiting et al. (2006) to estimate water content. Additionally, soil samples were analysed to determine moisture content, grain size (laser grain size analyses), organic matter content, carbonate content (calcimetry) and clay content. X-ray diffraction analysis was performed on selected non
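
    A hedged sketch of the Gaussian-plus-continuum deconvolution idea behind the (A)MGM: a straight-line continuum plus a few Gaussian absorption bands fitted to a reflectance spectrum with non-linear least squares. The band set-up and initial guesses are illustrative; this is not the AMGM algorithm itself.

    ```python
    # Minimal sketch: fit a linear continuum plus Gaussian absorption bands.
    import numpy as np
    from scipy.optimize import curve_fit

    def mgm_like(x, a, b, *bands):
        """bands = (depth1, centre1, width1, depth2, centre2, width2, ...)."""
        y = a + b * x                                    # straight-line continuum
        for d, c, w in zip(bands[0::3], bands[1::3], bands[2::3]):
            y += d * np.exp(-0.5 * ((x - c) / w) ** 2)   # negative d = absorption band
        return y

    def fit_spectrum(wavelength, reflectance, initial_bands):
        """initial_bands: list of (depth, centre, width) initial guesses."""
        p0 = [reflectance.mean(), 0.0] + list(np.ravel(initial_bands))
        popt, _ = curve_fit(mgm_like, wavelength, reflectance, p0=p0, maxfev=20000)
        return popt
    ```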

  14. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure the skill of a forecast. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with a focal mechanism forecast, if both are based on likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  15. Penalized maximum likelihood estimation and variable selection in geostatistics

    CERN Document Server

    Chu, Tingjin; Wang, Haonan; 10.1214/11-AOS919

    2012-01-01

    We consider the problem of selecting covariates in spatial linear models with Gaussian process errors. Penalized maximum likelihood estimation (PMLE) that enables simultaneous variable selection and parameter estimation is developed and, for ease of computation, PMLE is approximated by one-step sparse estimation (OSE). To further improve computational efficiency, particularly with large sample sizes, we propose penalized maximum covariance-tapered likelihood estimation (PMLE$_{\\mathrm{T}}$) and its one-step sparse estimation (OSE$_{\\mathrm{T}}$). General forms of penalty functions with an emphasis on smoothly clipped absolute deviation are used for penalized maximum likelihood. Theoretical properties of PMLE and OSE, as well as their approximations PMLE$_{\\mathrm{T}}$ and OSE$_{\\mathrm{T}}$ using covariance tapering, are derived, including consistency, sparsity, asymptotic normality and the oracle properties. For covariance tapering, a by-product of our theoretical results is consistency and asymptotic normal...

  16. Automatic Construction of 3D Basic-Semantic Models of Inhabited Interiors Using Laser Scanners and RFID Sensors

    Science.gov (United States)

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos

    2012-01-01

    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach, in whose field scarce publications exist. The general strategy consists of carrying out a selective and sequential segmentation from the cloud of points by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results. PMID:22778609

  17. Automatic application of ICRP biokinetic models in voxel phantoms for in vivo counting and internal dose assessment

    International Nuclear Information System (INIS)

    As part of the improvement of calibration techniques of in vivo counting, the Laboratory of Internal Dose Assessment of the Institute of Radiological Protection and Nuclear Safety has developed a computer tool, 'OEDIPE', to model internal contamination, to simulate in vivo counting and to calculate internal dose. The first version of this software could model sources located in a single organ. As the distribution of the contamination evolves from the time of intake according to the biokinetics of the radionuclide, a new facility has been added to the software first to allow complex heterogeneous source modelling and then to automatically integrate the distribution of the contamination in the different tissues estimated by biokinetic calculation at any time since the intake. These new developments give the opportunity to study the influence of the biokinetics on the in vivo counting, leading to a better assessment of the calibration factors and the corresponding uncertainties. (authors)

  18. Conceptual Model for Automatic Early Warning Information System of Infectious Diseases Based on Internet Reporting Surveillance System

    Institute of Scientific and Technical Information of China (English)

    JIA-QI MA; LI-PING WANG; XUAO-PENG QI; XIAO-MING SHI; GONG-HUAN YANG

    2007-01-01

    Objective To establish a conceptual model of automatic early warning of infectious diseases based on an internet reporting surveillance system, with a view to realizing an automated warning system on a daily basis and timely identification of potential outbreaks of infectious diseases. Methods The statistical conceptual model was established using historic surveillance data with the moving percentile method. Results Based on the infectious disease surveillance information platform, the conceptual model for early warning was established. The parameter, threshold, and revised sensitivity and specificity of the early warning value were changed to realize dynamic alerts of infectious diseases on a daily basis. Conclusion The instructive conceptual model of dynamic alerts can be used as a validating tool in institutions of infectious disease surveillance in different districts.
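
    As a hedged sketch of the moving (historical) percentile rule referred to above: for each day, the current case count is compared with a chosen percentile of the counts observed in the same calendar window of previous years; the window width and percentile level are assumptions.

    ```python
    # Minimal sketch of a moving-percentile early-warning threshold.
    import numpy as np

    def early_warning(history, current, day_of_year, window=5, pctl=80):
        """history: (n_years, 365) daily counts from previous years.
        Returns (alert_flag, threshold) for the current day's count."""
        lo, hi = day_of_year - window, day_of_year + window + 1
        reference = history[:, max(lo, 0):hi].ravel()   # same calendar window, past years
        threshold = np.percentile(reference, pctl)
        return current > threshold, threshold
    ```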

  19. A Rayleigh Doppler frequency estimator derived from maximum likelihood theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul

    1999-01-01

    capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield good...

  20. A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul

    1999-01-01

    capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield...

  1. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  2. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  3. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.;

    2014-01-01

    estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ < 50, our likelihood...

  4. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators.

    Science.gov (United States)

    Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard

    2007-12-01

    This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heart beat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T wave is validated on the QTDB database. PMID:17997186
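
    The following sketch shows one plausible Gaussian mesa function, i.e. a half-Gaussian rise, a flat plateau and a half-Gaussian fall; the exact parameterisation in the paper may differ, so treat this as an illustrative form only. A heartbeat model is then a sum of one such function per wave (P, Q, R, S, T).

    ```python
    # Minimal sketch of a Gaussian mesa function used to model one ECG wave.
    import numpy as np

    def gaussian_mesa(t, amplitude, onset, plateau, sigma_l, sigma_r):
        """onset/plateau define the flat-top interval; sigma_l/sigma_r the two flanks."""
        t = np.asarray(t, float)
        y = np.full_like(t, amplitude)
        left = t < onset
        right = t > onset + plateau
        y[left] = amplitude * np.exp(-0.5 * ((t[left] - onset) / sigma_l) ** 2)
        y[right] = amplitude * np.exp(-0.5 * ((t[right] - onset - plateau) / sigma_r) ** 2)
        return y
    ```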

  5. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    Science.gov (United States)

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2016-04-01

    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parameter requirements, while available data are scarce. Therefore, there is a need to search for new sources of information, such as satellite data. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied, but the use of this kind of data requires the introduction of spatio-temporal information. In this context, we have to deal with the fact that statistics and methodologies for incorporating spatio-temporal data during the calibration and validation processes are lacking. This research aims to contribute in that direction. The ecohydrological model was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques was proposed in order to calibrate the model taking into account both the temporal variation and the spatial pattern of the observed NDVI and the simulated LAI. The results demonstrate that: (1) satellite data are an extraordinarily useful source of information that can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated using only satellite data, is able to reproduce the vegetation dynamics (in time and in space) and also the observed discharge at the outlet; and (3) the proposed automatic calibration methodology works satisfactorily and incorporates spatio-temporal data, i.e., it takes into account both the temporal variation and the spatial pattern of the analyzed data.
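
    As a hedged sketch of an SVD-based spatio-temporal comparison between observed NDVI and simulated LAI: both fields are arranged as pixel-by-time matrices, decomposed, and a misfit is built from the leading spatial and temporal patterns. The actual objective function used in the study may differ; this only illustrates the idea.

    ```python
    # Minimal sketch: compare leading SVD modes of observed NDVI and simulated LAI.
    import numpy as np

    def svd_misfit(ndvi, lai, n_modes=3):
        """ndvi, lai: (n_pixels, n_times) matrices on the same grid and dates."""
        def leading_modes(M):
            U, s, Vt = np.linalg.svd(M - M.mean(), full_matrices=False)
            return U[:, :n_modes], Vt[:n_modes], s[:n_modes] / s.sum()
        Uo, Vo, wo = leading_modes(ndvi)
        Us, Vs, ws = leading_modes(lai)
        # penalise mismatch of spatial patterns (U), temporal patterns (V)
        # and the fraction of variance each mode explains
        spatial = np.mean([1 - abs(Uo[:, k] @ Us[:, k]) for k in range(n_modes)])
        temporal = np.mean([1 - abs(Vo[k] @ Vs[k]) for k in range(n_modes)])
        return spatial + temporal + np.sum(np.abs(wo - ws))
    ```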

  6. A hybrid model for automatic identification of risk factors for heart disease.

    Science.gov (United States)

    Yang, Hui; Garibaldi, Jonathan M

    2015-12-01

    Coronary artery disease (CAD) is the leading cause of death in both the UK and worldwide. The detection of related risk factors and the tracking of their progress over time are of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with the complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task. PMID:26375492

  7. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Directory of Open Access Journals (Sweden)

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  8. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. Our deformable template algorithm thus gives a robust
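
    To illustrate the chamfer-matching step mentioned above, here is a minimal sketch that scores candidate template translations by the mean of an edge-map distance transform sampled at the template points; template deformation and the wavelet edge detector are outside this sketch.

    ```python
    # Minimal sketch of chamfer matching: build a distance transform of the edge
    # map, then pick the translation whose mean sampled distance is smallest.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_fit(edge_map, template_pts, search=20):
        """edge_map: boolean image; template_pts: (N, 2) integer (row, col) points."""
        dist = distance_transform_edt(~edge_map)       # distance to the nearest edge pixel
        best, best_shift = np.inf, (0, 0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r = np.clip(template_pts[:, 0] + dr, 0, edge_map.shape[0] - 1)
                c = np.clip(template_pts[:, 1] + dc, 0, edge_map.shape[1] - 1)
                score = dist[r, c].mean()
                if score < best:
                    best, best_shift = score, (dr, dc)
        return best_shift, best
    ```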

  9. Rare Variants Detection with Kernel Machine Learning Based on Likelihood Ratio Test

    OpenAIRE

    Zeng, Ping; Zhao, Yang; Zhang, Liwei; Huang, Shuiping; Chen, Feng

    2014-01-01

    This paper mainly utilizes likelihood-based tests to detect rare variants associated with a continuous phenotype under the framework of kernel machine learning. Both the likelihood ratio test (LRT) and the restricted likelihood ratio test (ReLRT) are investigated. The relationship between the kernel machine learning and the mixed effects model is discussed. By using the eigenvalue representation of LRT and ReLRT, their exact finite sample distributions are obtained in a simulation manner. Num...

  10. A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees

    OpenAIRE

    Khare, Kshitij; Oh, Sang-Yun; Rajaratnam, Bala

    2013-01-01

    Sparse high dimensional graphical model selection is a topic of much interest in modern day statistics. A popular approach is to apply l1-penalties to either (1) parametric likelihoods, or, (2) regularized regression/pseudo-likelihoods, with the latter having the distinct advantage that they do not explicitly assume Gaussianity. As none of the popular methods proposed for solving pseudo-likelihood based objective functions have provable convergence guarantees, it is not clear if corresponding...

  11. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    Science.gov (United States)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The revisit time depends upon latitude and varies from two (low latitudes) to ten (high latitudes) revisits per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous works showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers larger than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference

  12. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio for a total of h parameters, two of which are constrained and correlated.

  13. Likelihood Analysis for Mega Pixel Maps

    Science.gov (United States)

    Kogut, Alan J.

    1999-01-01

    The derivation of cosmological parameters from astrophysical data sets routinely involves operation counts which scale as O(N^3), where N is the number of data points. Currently planned missions, including MAP and Planck, will generate sky maps with N_d = 10^6 or more pixels. Simple "brute force" analysis, applied to such mega-pixel data, would require years of computing even on the fastest computers. We describe an algorithm which allows estimation of the likelihood function in the direct pixel basis. The algorithm uses a conjugate gradient approach to evaluate chi^2 and a geometric approximation to evaluate the determinant. Monte Carlo simulations provide a correction to the determinant, yielding an unbiased estimate of the likelihood surface in an arbitrary region surrounding the likelihood peak. The algorithm requires O(N_d^(3/2)) operations and O(N_d) storage for each likelihood evaluation, and allows for significant parallel computation.
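
    For intuition, the central trick above, evaluating chi^2 = d^T C^{-1} d through a conjugate-gradient solve of C x = d rather than an explicit inverse, can be sketched in a few lines. The snippet below is only a small-scale illustration with a synthetic covariance, not the MAP/Planck pipeline; at mega-pixel scale the matrix-vector product C @ v would itself be applied implicitly.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      # Minimal sketch: evaluate chi^2 = d^T C^{-1} d via conjugate gradients,
      # so the covariance C is never inverted explicitly. Sizes are toy values.
      rng = np.random.default_rng(0)
      n = 500                                  # toy "map" size; real maps have ~1e6 pixels
      A = rng.normal(size=(n, n))
      C = A @ A.T / n + np.eye(n)              # synthetic symmetric positive-definite covariance
      d = rng.normal(size=n)                   # synthetic data vector

      chi2_direct = d @ np.linalg.solve(C, d)  # O(n^3) reference, feasible only for small n

      # CG needs only matrix-vector products C @ v, which could be supplied
      # implicitly (e.g. via FFTs or sparse structure) for large maps.
      C_op = LinearOperator((n, n), matvec=lambda v: C @ v)
      x, info = cg(C_op, d)
      chi2_cg = d @ x

      print(chi2_direct, chi2_cg, info)        # info == 0 indicates CG convergence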

  14. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model. PMID:26160753

  15. Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    张戈; 吴黎军

    2013-01-01

    In this paper, we study the weak consistency, under certain conditions, of the solution β̂_n of the quasi-maximum likelihood equation L_n(β) = Σ_{i=1}^n X_i H(X_i'β) Λ^{-1}(X_i'β)(y_i - h(X_i'β)) = 0 for generalized linear models. Under the assumption of a non-canonical link function and some other mild conditions, we prove the convergence rate β̂_n - β_0 ≠ o_p(λ_n^{-1/2}) and show that a necessary condition for the weak consistency of the quasi-maximum likelihood estimate is that S_n^{-1} → 0 as n → ∞.

  16. Automatic Verification of Biochemical Network Using Model Checking Method

    Institute of Scientific and Technical Information of China (English)

    Jinkyung Kim; Younghee Lee; Il Moon

    2008-01-01

    This study focuses on automatic searching and verifying methods for the reachability, transition logics and hierarchical structure in all possible paths of biological processes using model checking. The automatic search and verification of alternative paths within complex and large networks in a biological process can provide a considerable number of solutions, which is difficult to handle manually. Model checking is an automatic method for verifying whether a circuit or a condition, expressed as a concurrent transition system, satisfies a set of properties expressed in a temporal logic, such as computational tree logic (CTL). This article demonstrates that model checking is feasible for biochemical network verification and shows certain advantages over simulation for querying and searching for special behavioral properties in biochemical processes.
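
    As a toy illustration of the kind of query involved, the sketch below checks a CTL-style reachability property ("EF product_formed": some path eventually reaches a state where the product is formed) on a small hand-written transition system. The states, transitions and labels are invented for illustration; this is a generic fragment, not the verification tool used in the article.

      from collections import deque

      # Toy transition system; states and labels are illustrative only.
      transitions = {
          "S0": ["S1", "S2"],    # alternative reaction branches from S0
          "S1": ["S3"],
          "S2": ["S2"],          # dead-end branch looping on itself
          "S3": ["S0"],
      }
      labels = {"S3": {"product_formed"}}      # atomic propositions per state

      def check_EF(initial, prop):
          """True if some path from `initial` reaches a state labelled `prop` (CTL: EF prop)."""
          seen, queue = {initial}, deque([initial])
          while queue:
              state = queue.popleft()
              if prop in labels.get(state, set()):
                  return True
              for nxt in transitions.get(state, []):
                  if nxt not in seen:
                      seen.add(nxt)
                      queue.append(nxt)
          return False

      print(check_EF("S0", "product_formed"))  # True:  S0 -> S1 -> S3
      print(check_EF("S2", "product_formed"))  # False: S2 only loops on itself

    A real model checker evaluates arbitrary nested CTL formulas by fixpoint computations over the full state space; the breadth-first search above corresponds to the single operator EF.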

  17. Likelihood functions for the analysis of single-molecule binned photon sequences

    International Nuclear Information System (INIS)

    Graphical abstract: Folding of a protein with attached fluorescent dyes, the underlying conformational trajectory of interest, and the observed binned photon trajectory. Highlights: ► A sequence of photon counts can be analyzed using a likelihood function. ► The exact likelihood function for a two-state kinetic model is provided. ► Several approximations are considered for an arbitrary kinetic model. ► Improved likelihood functions are obtained to treat sequences of FRET efficiencies. - Abstract: We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained.
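
    The hidden-Markov limit mentioned at the end of the abstract can be sketched concretely: each bin emits a Poisson-distributed photon count with a state-dependent rate, and the state may switch between bins according to the propagator of the rate matrix. The two-state parameter values and the toy count sequence below are illustrative assumptions, not numbers from the paper.

      import numpy as np
      from scipy.stats import poisson
      from scipy.linalg import expm

      # Two-state kinetic model in the HMM limit: Poisson counts per bin with
      # state-dependent rates, state switching between bins. Illustrative values.
      k12, k21 = 2.0, 1.0                      # interconversion rates (1/s)
      rates = np.array([50.0, 5.0])            # photon count rates per state (photons/s)
      dt = 0.01                                # bin width (s)

      K = np.array([[-k12, k21],
                    [ k12, -k21]])             # rate matrix (columns sum to zero)
      T = expm(K * dt)                         # bin-to-bin transition probabilities
      p_eq = np.array([k21, k12]) / (k12 + k21)    # equilibrium state populations

      def log_likelihood(counts):
          """Scaled forward algorithm: log P(counts | two-state model)."""
          alpha = p_eq * poisson.pmf(counts[0], rates * dt)
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()
          for n in counts[1:]:
              alpha = (T @ alpha) * poisson.pmf(n, rates * dt)
              loglik += np.log(alpha.sum())
              alpha /= alpha.sum()
          return loglik

      counts = np.array([0, 1, 0, 2, 1, 0, 0, 1])   # a toy binned photon trajectory
      print(log_likelihood(counts))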

  18. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2015-09-29

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  19. Modeling the relationship between rapid automatized naming and literacy skills across languages varying in orthographic consistency.

    Science.gov (United States)

    Georgiou, George K; Aro, Mikko; Liao, Chen-Huei; Parrila, Rauno

    2016-03-01

    The purpose of this study was twofold: (a) to contrast the prominent theoretical explanations of the rapid automatized naming (RAN)-reading relationship across languages varying in orthographic consistency (Chinese, English, and Finnish) and (b) to examine whether the same accounts can explain the RAN-spelling relationship. In total, 304 Grade 4 children (102 Chinese-speaking Taiwanese children, 117 English-speaking Canadian children, and 85 Finnish-speaking children) were assessed on measures of RAN, speed of processing, phonological processing, orthographic processing, reading fluency, and spelling. The results of path analysis indicated that RAN had a strong direct effect on reading fluency that was of the same size across languages and that only in English was a small proportion of its predictive variance mediated by orthographic processing. In contrast, RAN did not exert a significant direct effect on spelling, and a substantial proportion of its predictive variance was mediated by phonological processing (in Chinese and Finnish) and orthographic processing (in English). Given that RAN predicted reading fluency equally well across languages and that phonological/orthographic processing had very little to do with this relationship, we argue that the reason why RAN is related to reading fluency should be sought in domain-general factors such as serial processing and articulation. PMID:26615467

  20. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    Science.gov (United States)

    Bechtle, Philip; Heinemeyer, Sven; Stål, Oscar; Stefaniak, Tim; Weiglein, Georg

    2015-09-01

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full dataset. In addition to publishing an exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value of the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the search and the rate measurements of the SM-like Higgs in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org.

  1. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis
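
    The book's worked examples use R, SAS and ADMB; purely as an illustration of the same basic workflow, here is a minimal numerical maximum-likelihood fit sketched in Python, estimating the rate of an exponential distribution from simulated data (all names and numbers are illustrative).

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Minimal MLE sketch: numerically maximize the exponential log-likelihood.
      rng = np.random.default_rng(1)
      data = rng.exponential(scale=1 / 2.5, size=200)   # simulated data, true rate 2.5

      def neg_log_lik(lam):
          # Exponential log-likelihood: n*log(lam) - lam*sum(x)
          return -(len(data) * np.log(lam) - lam * data.sum())

      res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")

      # For this model the MLE has the closed form 1/mean(x); the numerical
      # optimum should reproduce it, which is a handy sanity check.
      print(res.x, 1 / data.mean())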

  2. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...

  3. Building the Tangent and Adjoint codes of the Ocean General Circulation Model OPA with the Automatic Differentiation tool TAPENADE

    CERN Document Server

    Tber, Moulay Hicham; Vidard, Arthur; Dauvergne, Benjamin

    2007-01-01

    The ocean general circulation model OPA is developed by the LODYC team at Paris VI university. OPA has recently undergone a major rewriting, migrating to FORTRAN95, and its adjoint code needs to be rebuilt. For earlier versions, the adjoint of OPA was written by hand at a high development cost. We use the Automatic Differentiation tool TAPENADE to build mechanically the tangent and adjoint codes of OPA. We validate the differentiated codes by comparison with divided differences, and also with an identical twin experiment. We apply state-of-the-art methods to improve the performance of the adjoint code. In particular we implement Griewank and Walther's binomial checkpointing algorithm, which gives us an optimal trade-off between time and memory consumption. We apply a specific strategy to differentiate the iterative linear solver that comes from the implicit time stepping scheme.
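
    Two of the validation steps mentioned above, comparison of the tangent code with divided differences and the tangent/adjoint consistency ("dot-product") test, can be illustrated on a toy function. The sketch below hand-codes the tangent and adjoint of a tiny nonlinear map purely to show what the checks look like; in the OPA work these codes are generated mechanically by TAPENADE.

      import numpy as np

      # Toy map F and hand-written tangent/adjoint codes (illustration only).
      def F(x):
          return np.array([np.sin(x[0]) * x[1], x[0] ** 2 + x[1]])

      def jacobian(x):
          return np.array([[np.cos(x[0]) * x[1], np.sin(x[0])],
                           [2 * x[0],            1.0]])

      def tangent(x, dx):                  # tangent code: J(x) @ dx
          return jacobian(x) @ dx

      def adjoint(x, dy):                  # adjoint code: J(x)^T @ dy
          return jacobian(x).T @ dy

      x  = np.array([0.7, 1.3])
      dx = np.array([0.3, -0.2])
      dy = np.array([0.5, 0.1])

      # Check 1: tangent vs divided (central) differences.
      eps = 1e-6
      fd = (F(x + eps * dx) - F(x - eps * dx)) / (2 * eps)
      print(np.allclose(fd, tangent(x, dx), atol=1e-6))

      # Check 2: dot-product test, <J dx, dy> == <dx, J^T dy>.
      print(np.isclose(tangent(x, dx) @ dy, dx @ adjoint(x, dy)))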

  4. An Automatic Segmentation and Classification Framework Based on PCNN Model for Single Tooth in MicroCT Images

    Science.gov (United States)

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2016-01-01

    Accurate segmentation and classification of different anatomical structures of teeth from medical images plays an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing three dimensional (3D) information, and to classify the tooth by employing an unsupervised learning Pulse Coupled Neural Network (PCNN) model. In order to evaluate the proposed method, experiments were conducted on different datasets of mandibular molars; the experimental results show that our method can achieve better accuracy and robustness compared to four other state-of-the-art clustering methods. PMID:27322421

  5. Pattern-based Automatic Translation of Structured Power System Data to Functional Models for Decision Support Applications

    DEFF Research Database (Denmark)

    Heussen, Kai; Weckesser, Johannes Tilman Gabriel; Kullmann, Daniel

    2013-01-01

    Improved information and insight for decision support in operations and design are central promises of a smart grid. Well-structured information about the composition of power systems is increasingly becoming available in the domain, e.g. due to standard information models (e.g. CIM or IEC61850) or otherwise structured databases. More measurements and data do not automatically improve decisions, but there is an opportunity to capitalize on this information for decision support. With suitable reasoning strategies data can be contextualized and decision-relevant events can be promoted and identified. This paper presents an approach to link available structured power system data directly to a functional representation suitable for diagnostic reasoning. The translation method is applied to test cases also illustrating decision support.

  6. Automatic Road Lighting System (ARLS) Model Based on Image Processing of Captured Video of Vehicle Toy Motion

    CERN Document Server

    Suprijadi,; Viridi, Sparisoma

    2011-01-01

    Using a toy vehicle as a moving object, an automatic road lighting system (ARLS) model is constructed. A video camera with 25 fps is used to capture the motion of the toy vehicle as it moves in the test segment of the road. Captured images are then processed to calculate the speed of the toy vehicle. This speed information, together with the position of the toy vehicle, is then used to switch on and off the lighting system along the path that the toy vehicle passes. The length of the road test segment is 1 m, the video camera is positioned about 1.075 m above the test segment, and the toy vehicle dimensions are 13 cm x 9.3 cm. The maximum speed that the ARLS can handle is about 1.32 m/s, with an error of less than 23.48%. The highest performance, about 91%, is obtained at a speed of 0.93 m/s.
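
    The speed computation described above amounts to a pixel-to-metre conversion of the toy's centroid displacement between frames captured at 25 fps. The sketch below shows that arithmetic on made-up centroid positions; the pixel scale and the trajectory are assumptions for illustration, not the paper's calibration.

      # Speed estimate from per-frame centroids of the tracked toy vehicle.
      # The scale factor and the centroid values below are illustrative assumptions.
      FPS = 25.0                       # camera frame rate used in the paper
      METRES_PER_PIXEL = 1.0 / 640.0   # assumed: the 1 m test segment spans ~640 pixels

      def speeds_from_centroids(centroids_px):
          """Return per-frame speeds (m/s) from a list of (x, y) centroids in pixels."""
          speeds = []
          for (x0, y0), (x1, y1) in zip(centroids_px, centroids_px[1:]):
              dist_px = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
              speeds.append(dist_px * METRES_PER_PIXEL * FPS)
          return speeds

      # Toy trajectory: the centroid advances 24 pixels per frame along x,
      # i.e. 24 * (1/640) m * 25 fps ~= 0.94 m/s.
      trajectory = [(10 + 24 * i, 240) for i in range(10)]
      print(speeds_from_centroids(trajectory))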

  7. Mathematical modeling of automatic control system and synchronous machine in high reliability power supply systems in Kozloduy NPP - creating set up optimization based models

    International Nuclear Information System (INIS)

    The article presents the models of the Automatic Control System (ACS) and of the synchronous motor and generator control of the reversible generator-engine groups of the first-category power supply section in Kozloduy NPP units 1 to 4. The control parameters are magnetic field and voltage. The research aims are optimal ACS setups and control-property guarantees in accordance with the technical requirements. The synchronous machine model used is included in the Matlab 5.x library. For optimization, the Optimization Toolbox instruments (the NCD Outport block and plant actuator) and the created basic models of a variable discrete PID regulator and a PWM system are utilized. The results are applied to the setup of the real ACS. The precision of the created models makes it possible to develop a real summary model and to implement the achieved models in cases of fluctuations of the AC/DC reversible electromechanical supply.

  8. Automatic feature template generation for maximum entropy based intonational phrase break prediction

    Science.gov (United States)

    Zhou, You

    2013-03-01

    The prediction of intonational phrase (IP) breaks is important for both the naturalness and intelligibility of Text-to-Speech (TTS) systems. In this paper, we propose a maximum entropy (ME) model to predict IP breaks from unrestricted text, and evaluate various keyword selection approaches in different domains. Furthermore, we design a hierarchical clustering algorithm for automatic generation of feature templates, which minimizes the need for human supervision during ME model training. Results of comparative experiments show that, for the task of IP break prediction, the ME model clearly outperforms classification and regression trees (CART); that the log-likelihood ratio is the best scoring measure for keyword selection; and that, compared with manual templates, the templates automatically generated by our approach greatly improve the F-score of ME-based IP break prediction while significantly reducing the size of the ME model.
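
    The log-likelihood-ratio scoring used for keyword selection can be sketched with the usual G^2 statistic over a 2x2 contingency table of (keyword present/absent) x (IP break / no break). The counts below are invented for illustration and the scoring is a generic sketch, not the paper's exact implementation.

      import math

      # G^2 (log-likelihood ratio) score for a candidate keyword from a 2x2 table:
      #   a = breaks with keyword,     b = breaks without keyword
      #   c = no-breaks with keyword,  d = no-breaks without keyword
      def llr_score(a, b, c, d):
          def term(observed, expected):
              return observed * math.log(observed / expected) if observed > 0 else 0.0
          n = a + b + c + d
          row1, row2 = a + b, c + d
          col1, col2 = a + c, b + d
          expected = [row1 * col1 / n, row1 * col2 / n, row2 * col1 / n, row2 * col2 / n]
          return 2.0 * sum(term(o, e) for o, e in zip([a, b, c, d], expected))

      # A keyword whose presence strongly co-occurs with IP breaks scores high;
      # one whose break rate matches the background scores near zero.
      print(llr_score(a=120, b=880, c=30, d=2970))   # informative keyword
      print(llr_score(a=40,  b=960, c=120, d=2880))  # uninformative keyword (score ~0)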

  9. Outlier Detection in Nonlinear Regression with the Likelihood Displacement Statistical Method

    Directory of Open Access Journals (Sweden)

    Siti Tabi'atul Hasanah

    2012-11-01

    Full Text Available An outlier is an observation that is markedly different (extreme) from the other observations, i.e., data that do not follow the general pattern of the model. Sometimes outliers provide information that cannot be provided by other data, which is why outliers should not simply be eliminated; outliers can also be influential observations. There are many methods that can be used to detect outliers. Previous studies addressed outlier detection in linear regression; here, outlier detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the likelihood displacement (LD) statistical method, which detects outliers by removing the suspected outlying data. The parameters are estimated with the maximum likelihood method to obtain the maximum likelihood estimates. Using the LD method, the observations suspected to contain outliers are obtained. The accuracy of the LD method in detecting outliers is then shown by comparing the MSE of LD with the MSE of the ordinary regression. The test statistic used is Λ; an observation is declared an outlier when the initial hypothesis is rejected.

  10. On Russian Roulette Estimates for Bayesian Inference with Doubly-Intractable Likelihoods

    OpenAIRE

    Lyne, Anne-Marie; Girolami, Mark; Atchadé, Yves; Strathmann, Heiko; Simpson, Daniel

    2013-01-01

    A large number of statistical models are “doubly-intractable”: the likelihood normalising term, which is a function of the model parameters, is intractable, as well as the marginal likelihood (model evidence). This means that standard inference techniques to sample from the posterior, such as Markov chain Monte Carlo (MCMC), cannot be used. Examples include, but are not confined to, massive Gaussian Markov random fields, autologistic models and Exponential random graph models. A number of app...

  11. Automatic calibration of a global hydrological model using satellite data as a proxy for stream flow data

    Science.gov (United States)

    Revilla-Romero, B.; Beck, H.; Salamon, P.; Burek, P.; Thielen, J.; de Roo, A.

    2014-12-01

    Model calibration and validation are commonly restricted due to the limited availability of historical in situ observational data. Several studies have demonstrated that using complementary remotely sensed datasets such as soil moisture for model calibration has led to improvements. The aim of this study was to evaluate the use of the remotely sensed signal of the Global Flood Detection System (GFDS) as a proxy for stream flow data to calibrate a global hydrological model used in operational flood forecasting. This is done in different river basins located in Africa, South and North America for the time period 1998-2010 by comparing model calibration using the raw satellite signal as a proxy for river discharge with a model calibration using in situ stream flow observations. River flow is simulated using the LISFLOOD hydrological model for the flow routing in the river network and the groundwater mass balance. The model is set up with global coverage, a horizontal grid resolution of 0.1 degree and a daily time step for input/output data. Based on prior tests, a set of seven model parameters was used for calibration. The parameter space was defined by specifying lower and upper limits on each parameter. The objective functions considered were Pearson correlation (R), Nash-Sutcliffe Efficiency log (NSlog) and Kling-Gupta Efficiency (KGE'), where both single- and multi-objective functions were employed. After multiple iterations, for each catchment, the algorithm generated a Pareto-optimal front of solutions. A single parameter set was selected which had the lowest distance to R=1 for the single-objective case and to NSlog=1 and KGE'=1 for the multi-objective case. The results for the different test river basins are compared against the performance obtained with the same objective functions using in situ discharge observations. Automatic calibration strategies of the global hydrological model using satellite data as a proxy for stream flow data are outlined and discussed.
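
    The two efficiency measures named above have standard closed forms: NSlog is the Nash-Sutcliffe efficiency computed on log-transformed discharge, and KGE' is the modified Kling-Gupta efficiency built from the correlation, a bias ratio and a variability ratio. The sketch below implements these textbook formulas on toy series; it is not the LISFLOOD/MOCOM-UA calibration code, and the example discharges are invented.

      import numpy as np

      def ns_log(sim, obs, eps=1e-6):
          """Nash-Sutcliffe efficiency on log-transformed flows (NSlog)."""
          ls, lo = np.log(sim + eps), np.log(obs + eps)
          return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

      def kge_prime(sim, obs):
          """Modified Kling-Gupta efficiency KGE' (correlation, bias ratio, variability ratio)."""
          r = np.corrcoef(sim, obs)[0, 1]
          beta = sim.mean() / obs.mean()                               # bias ratio
          gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # variability ratio
          return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

      obs = np.array([120.0, 150.0, 300.0, 800.0, 400.0, 220.0, 160.0])  # toy observed discharge
      sim = np.array([110.0, 160.0, 280.0, 750.0, 430.0, 240.0, 150.0])  # toy simulated discharge
      print(ns_log(sim, obs), kge_prime(sim, obs))

    A perfect simulation gives NSlog = KGE' = 1, so the calibration selects the parameter set whose objective values lie closest to 1.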

  12. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAIN-ING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    Directory of Open Access Journals (Sweden)

    G. T. Kulakov

    2014-01-01

    Full Text Available The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operational ability on load relief has been developed; this model makes it possible to determine the maximum deviations of the water level without carrying out actual tests or making any corrections to the plant settings that start up the technological protection systems according to the water level in the drum. The paper reveals the reasons for static regulation errors of the standard automatic control system when handling internal and external disturbances caused by the superheated steam flow rate. The practical significance of modernizing the automatic control system for steam generator water supply is substantiated.

  13. Identification and automatic segmentation of multiphasic cell growth using a linear hybrid model.

    Science.gov (United States)

    Hartmann, András; Neves, Ana Rute; Lemos, João M; Vinga, Susana

    2016-09-01

    This article considers a new mathematical model for the description of multiphasic cell growth. A linear hybrid model is proposed and it is shown that the two-parameter logistic model with switching parameters can be represented by a Switched affine AutoRegressive model with eXogenous inputs (SARX). The growth phases are modeled as continuous processes, while the switches between the phases are considered to be discrete events triggering a change in growth parameters. This framework provides an easily interpretable model, because the intrinsic behavior is the same along all the phases but with a different parameterization. Another advantage of the hybrid model is that it offers a simpler alternative to recent more complex nonlinear models. The growth phases and parameters from datasets of different microorganisms exhibiting multiphasic growth behavior such as Lactococcus lactis, Streptococcus pneumoniae, and Saccharomyces cerevisiae, were inferred. The segments and parameters obtained from the growth data are close to the ones determined by the experts. The fact that the model could explain the data from three different microorganisms and experiments demonstrates the strength of this modeling approach for multiphasic growth, and presumably other processes consisting of multiple phases. PMID:27424949

  14. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  15. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    Science.gov (United States)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow patterns and fatigue of the vessel wall, and are a prevalent cause of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools in modeling the individual hemodynamics and biomechanics as well as their interaction in the human arteries. The computations allow non-invasively simulating patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models providing a physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  16. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    Science.gov (United States)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse to fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud, when co-existing initial registration parameters are unavailable.
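
    The fine-registration step follows the standard ICP recipe: find nearest-neighbour correspondences, estimate the best-fit rigid transform in closed form (SVD/Kabsch), apply it, and iterate. The generic sketch below illustrates that loop on a synthetic cloud; it is not the authors' implementation and omits the coarse initialization from DSM salient features and the NCC validation.

      import numpy as np
      from scipy.spatial import cKDTree

      def best_fit_rigid(src, dst):
          """Closed-form (SVD/Kabsch) rigid transform mapping src points onto dst points."""
          cs, cd = src.mean(axis=0), dst.mean(axis=0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # guard against reflections
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(source, target, iters=30):
          """Minimal point-to-point ICP; assumes a reasonable coarse alignment already exists."""
          tree = cKDTree(target)
          src = source.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              _, idx = tree.query(src)                  # nearest-neighbour correspondences
              R, t = best_fit_rigid(src, target[idx])
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total

      # Toy check: recover a small known rotation and translation of a random cloud.
      rng = np.random.default_rng(0)
      target = rng.uniform(size=(500, 3))
      a = np.deg2rad(3.0)
      R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])
      source = (target - 0.01) @ R_true.T               # slightly misaligned copy
      R_est, t_est = icp(source, target)
      print(np.abs(R_est @ R_true - np.eye(3)).max())   # near zero once ICP has converged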

  17. A simple model for automatically measuring radon exhalation rate from medium surface

    International Nuclear Information System (INIS)

    A simple model to measure radon exhalation rate from a medium surface is developed in this paper. This model is based on a combination of the "accumulation chamber" technique and a radon monitor. The radon monitor is used to measure the evolution of the radon concentration inside the accumulation chamber, and the radon exhalation rate is evaluated via nonlinear least-squares fitting of the measured data. If the flow rate of the pump is high enough, the radon concentration in the detector's internal cell quickly becomes equal to that in the accumulation chamber, and the simple model for measuring the radon exhalation rate can be derived analytically. Generally, the pump flow rate of a radon monitor is low and does not satisfy this condition. We find other sufficient conditions for this simplified model. Under these conditions, the radon exhalation rate can be calculated accurately with this model even when the pump flow rate is not high. This method can be applied to develop and improve instruments for measuring the radon exhalation rate. - Highlights: • We present a novel simple model for measuring radon exhalation rate based on the complex model we published before. • The algorithm based on the simple model is developed. • The radon exhalation rate can be obtained by nonlinear least squares fitting. • The applicable condition of this simple model is obtained
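
    The nonlinear least-squares step can be sketched with a generic single-exponential accumulation model, C(t) = C0 + a*(1 - exp(-lambda_eff*t)); the functional form, the relation of its initial slope to the exhalation rate, and all numbers below are illustrative assumptions, not the exact equations derived in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      # Fit a generic accumulation-chamber growth curve to simulated radon readings.
      def accumulation(t, c0, a, lam_eff):
          return c0 + a * (1.0 - np.exp(-lam_eff * t))

      rng = np.random.default_rng(0)
      t = np.linspace(0, 6 * 3600, 25)                        # 6 h of readings (s)
      true = accumulation(t, c0=20.0, a=400.0, lam_eff=2.5e-4)
      measured = true + rng.normal(scale=10.0, size=t.size)   # noisy concentrations (Bq/m^3)

      popt, pcov = curve_fit(accumulation, t, measured, p0=[10.0, 100.0, 1e-4])
      c0, a, lam_eff = popt
      # In a typical accumulation-chamber analysis the initial slope a*lam_eff
      # (Bq m^-3 s^-1), scaled by chamber volume over sample surface area,
      # gives the exhalation-rate estimate.
      print(popt, a * lam_eff)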

  18. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  19. Maximum mutual information vector quantization of log-likelihood ratios for memory efficient HARQ implementations

    OpenAIRE

    Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl; Christensen, Lars P.B.; Skovgaard Christensen, Søren

    2010-01-01

    Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this purpose, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratios (LLR) in order to combine information sent across different transmissions due to requests. To mitigate the effects of ever-increasing data rates that call for larger HARQ memory, vect...

  20. Automatic control of finite element models for temperature-controlled radiofrequency ablation

    Directory of Open Access Journals (Sweden)

    Haemmerich Dieter

    2005-07-01

    Full Text Available Abstract Background The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation a common control mode is temperature-controlled mode. Commercial FEM packages do not support automated temperature control. Most researchers manually control the applied power by trial and error to keep the tip temperature of the electrodes constant. Methods We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and controls the applied voltage to keep the temperature constant. We created a closed loop system consisting of a FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed loop system simulation. Results We present results of a temperature-controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled the applied voltage in the FEM model to obtain, and keep, the electrodes at the target temperature of 100°C. The closed loop system simulation output closely correlated with the FEM model, and allowed us to optimize the control parameters. Discussion The closed loop control of the FEM model allowed us to implement temperature-controlled RF ablation with minimal user input.
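
    The closed-loop arrangement described, a PI controller adjusting the applied voltage so the electrode tip holds 100°C, can be sketched against a toy first-order thermal model standing in for the FEM solver. The gains, time constant and plant below are illustrative assumptions, not values from the paper.

      # PI temperature control sketch: a toy first-order tip-temperature model
      # stands in for the 3-D FEM plant; all constants are illustrative.
      TARGET_C, T_BODY = 100.0, 37.0
      DT, TAU = 0.1, 5.0            # controller step (s), plant time constant (s)
      KP, KI = 4.0, 1.0             # assumed PI gains acting on a heating-power command
      R_GAIN = 2.0                  # toy mapping: heating power = R_GAIN * voltage^2

      def plant_step(temp, voltage):
          """First-order tip-temperature response to Joule heating (FEM stand-in)."""
          heating = R_GAIN * voltage ** 2
          return temp + DT * ((T_BODY - temp) + heating) / TAU

      temp, integral = T_BODY, 0.0
      for _ in range(int(60 / DT)):                            # simulate 60 s of closed-loop control
          error = TARGET_C - temp
          integral += error * DT
          power_cmd = max(0.0, KP * error + KI * integral)     # PI law, no negative power
          voltage = (power_cmd / R_GAIN) ** 0.5                # convert command to applied voltage
          temp = plant_step(temp, voltage)

      print(round(temp, 1))   # with these toy gains the tip settles close to the 100 C target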

  1. UAV Aerial Survey: Accuracy Estimation for Automatically Generated Dense Digital Surface Model and Orthophoto Plan

    Science.gov (United States)

    Altyntsev, M. A.; Arbuzov, S. A.; Popov, R. A.; Tsoi, G. V.; Gromov, M. O.

    2016-06-01

    A dense digital surface model is one of the products generated by using UAV aerial survey data. Today more and more specialized software packages are supplied with modules for generating such kinds of models. The procedure for dense digital model generation can be completely or partly automated. Due to the lack of a reliable criterion for accuracy estimation, it is rather complicated to judge the validity of such models. One such criterion can be mobile laser scanning data, used as a source for the detailed accuracy estimation of dense digital surface model generation. These data may also be used to estimate the accuracy of digital orthophoto plans created by using UAV aerial survey data. The results of accuracy estimation for both kinds of products are presented in the paper.

  2. A composite likelihood approach for spatially correlated survival data.

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
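
    For intuition, a pairwise composite log-likelihood under the FGM copula can be sketched for fully observed exponential margins: each pair contributes the product of its marginal densities times the FGM copula density 1 + theta*(1-2u)*(1-2v). Constant theta, exponential margins, the all-pairs rule and the absence of censoring are simplifying assumptions of this sketch; the paper models the dependence parameter via geographic and demographic pairwise distances.

      import numpy as np

      def fgm_pair_logdensity(x, y, rate, theta):
          """Log joint density of a pair under exponential margins and the FGM copula."""
          u, v = 1 - np.exp(-rate * x), 1 - np.exp(-rate * y)        # marginal CDFs
          fx, fy = rate * np.exp(-rate * x), rate * np.exp(-rate * y)
          copula_density = 1.0 + theta * (1 - 2 * u) * (1 - 2 * v)   # FGM copula density
          return np.log(fx) + np.log(fy) + np.log(copula_density)

      def composite_loglik(times, rate, theta):
          """Pairwise composite log-likelihood: sum of log pair densities over all pairs."""
          total, n = 0.0, len(times)
          for i in range(n):
              for j in range(i + 1, n):
                  total += fgm_pair_logdensity(times[i], times[j], rate, theta)
          return total

      rng = np.random.default_rng(0)
      times = rng.exponential(scale=2.0, size=30)       # toy survival times
      for theta in (-0.5, 0.0, 0.5):                    # profile over the dependence parameter
          print(theta, round(composite_loglik(times, rate=0.5, theta=theta), 2))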

  3. Adaptive automat for compensating antenna gain losses in a model of equipment for tropospheric-scatter radio communications

    OpenAIRE

    Rudakov, V. I.

    2008-01-01

    This study deals with the problem of simultaneous adaptive compensation of antenna gain losses and of fast and slow signal fading in a model of tropospheric-scatter radio communication equipment, using an adaptive automaton that ensures automatic equality of these two values and the specified information reliability.

  4. Computer-aided design of curved surfaces with automatic model generation

    Science.gov (United States)

    Staley, S. M.; Jerard, R. B.; White, P. R.

    1980-01-01

    The design and visualization of three-dimensional objects with curved surfaces have always been difficult. The paper given below describes a computer system which facilitates both the design and visualization of such surfaces. The system enhances the design of these surfaces by virtue of various interactive techniques coupled with the application of B-Spline theory. Visualization is facilitated by including a specially built model-making machine which produces three-dimensional foam models. Thus, the system permits the designer to produce an inexpensive model of the object which is suitable for evaluation and presentation.

  5. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    OpenAIRE

    Fujimoto, J

    2002-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model(SM) of the electroweak and strong interactions. For a given MSSM process the Feynman graphs and amplit...

  6. Maximum Likelihood Estimation in Panels with Incidental Trends

    OpenAIRE

    Moon, Hyungsik; Phillips, Peter C. B.

    1999-01-01

    It is shown that the maximum likelihood estimator of a local to unity parameter can be consistently estimated with panel data when the cross section observations are independent. Consistency applies when there are no deterministic trends or when there is a homogeneous deterministic trend in the panel model. When there are heterogeneous deterministic trends the panel MLE of the local to unity parameter is inconsistent. This outcome provides a new instance of inconsistent ML estimation in dynam...

  7. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent

    OpenAIRE

    Lohse, Konrad; Chmelik, Martin; Simon H Martin; Nicholas H Barton

    2015-01-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proven difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple,...

  8. ANALYSIS OF CONSUMER ATTITUDES TOWARD ORGANIC PRODUCE PURCHASE LIKELIHOOD

    OpenAIRE

    Byrne, Patrick J.; Toensmeyer, Ulrich C.; German, Carl L.; Muller, H. Reed

    1991-01-01

    This study demographically determines: which consumers are currently buying organic produce; consumer comparisons of organic and conventional produce; and consumer purchase likelihood of higher-priced organic produce. Data were collected from a Delaware consumer survey, dealing with fresh produce and food safety. Multinomial and ordered logit models were developed to generate marginal effects of age, gender, education, and income. Increasing age, males, and advancing education demonstrated po...

  9. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    OpenAIRE

    M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono

    2011-01-01

    This research employs free model that uses only sentential features without paragraph context to extract topic sentences of a paragraph. For finding optimal combination of features, corpus-based classification is used for constructing a sentence classifier as the model. The sentence classifier is trained by using Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features to extract topic sentence, and the best perfor...

  10. Modelling the influence of automaticity of behaviour on physical activity motivation, intention and actual behaviour

    OpenAIRE

    Rietdijk, Yara

    2014-01-01

    In research and in practise social-cognitive models, such as the theory of planned behaviour (TPB), are used to predict physical activity behaviour. These models mainly focus on reflective cognitive processes. As a reflective process, intention is thought to be the most proximal predictor to behaviour. Nevertheless, research suggests that the relation between intention and actual behaviour, the so called intention-behaviour gap, is moderate. Many health-related actions in d...

  11. Automatic modeling of building interiors using low-cost sensor systems

    OpenAIRE

    Khosravani, Ali Mohammad

    2016-01-01

    Indoor reconstruction or 3D modeling of indoor scenes aims at representing the 3D shape of building interiors in terms of surfaces and volumes, using photographs, 3D point clouds or hypotheses. Due to advances in the range measurement sensors technology and vision algorithms, and at the same time an increased demand for indoor models by many applications, this topic of research has gained growing attention during the last years. The automation of the reconstruction process is still a challeng...

  12. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    OpenAIRE

    Kang, Junhua; Deng, Fei; LI, XINWEI; WAN, FANG

    2016-01-01

    In recent years, the photorealistic 3D city models are increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most of the texture reconstruction approaches are probably leading to texture fragmentation and memory inefficiency....

  13. Automatic estimation of midline shift in patients with cerebral glioma based on enhanced voigt model and local symmetry.

    Science.gov (United States)

    Chen, Mingyang; Elazab, Ahmed; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Li, Xiaodong; Hu, Qingmao

    2015-12-01

    Cerebral glioma is one of the most aggressive space-occupying diseases, which will exhibit midline shift (MLS) due to mass effect. MLS has been used as an important feature for evaluating the pathological severity and patients' survival possibility. Automatic quantification of MLS is challenging due to deformation, complex shape and complex grayscale distribution. An automatic method is proposed and validated to estimate MLS in patients with gliomas diagnosed using magnetic resonance imaging (MRI). The deformed midline is approximated by combining a mechanical model and local symmetry. An enhanced Voigt model which takes into account the size and spatial information of the lesion is devised to predict the deformed midline. A composite local symmetry combining local intensity symmetry and local intensity gradient symmetry is proposed to refine the predicted midline within a local window whose size is determined according to the pinhole camera model. To enhance the MLS accuracy, the axial slice with maximum MLS from each volumetric dataset has been interpolated from a spatial resolution of 1 mm to 0.33 mm. The proposed method has been validated on 30 publicly available clinical head MRI scans presenting with MLS. It delineates the deformed midline with maximum MLS and yields a mean difference of 0.61 ± 0.27 mm, and an average maximum difference of 1.89 ± 1.18 mm from the ground truth. Experiments show that the proposed method yields better accuracy when the geometric center of the pathology is taken as the geometric center of the tumor and the pathological region is taken as the whole lesion. It has also been shown that the proposed composite local symmetry achieves significantly higher accuracy than the traditional local intensity symmetry and the local intensity gradient symmetry. To the best of our knowledge, for delineation of deformed midline, this is the first report on both quantification of gliomas and from MRI, which hopefully will provide valuable information for diagnosis

  14. Structural modeling of G-protein coupled receptors: An overview on automatic web-servers.

    Science.gov (United States)

    Busato, Mirko; Giorgetti, Alejandro

    2016-08-01

    Despite the significant efforts and discoveries during the last few years in G protein-coupled receptor (GPCR) expression and crystallization, the receptors with known structures to date are limited to only a small fraction of human GPCRs. The lack of experimental three-dimensional structures of the receptors represents a strong limitation that hampers a deep understanding of their function. Computational techniques are thus a valid alternative strategy to model three-dimensional structures. Indeed, recent advances in the field, together with extraordinary developments in crystallography, in particular due to its ability to capture GPCRs in different activation states, have led to encouraging results in the generation of accurate models. This prompted the community of modelers to render their methods publicly available through dedicated databases and web-servers. Here, we present an extensive overview of these services, focusing on their advantages, drawbacks and their role in successful applications. Future challenges in the field of GPCR modeling, such as the prediction of long loop regions and the modeling of receptor activation states, are presented as well. PMID:27102413

  15. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
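
    The equivalence discussed above, between the conventional multivariate-Gaussian likelihood obtained by matrix inversion and the estimate obtained by averaging over sampled systematic errors, can be checked numerically in a toy setting with one fully correlated systematic component and independent random errors. The sizes, uncertainties and "model" values below are illustrative.

      import numpy as np
      from scipy.stats import multivariate_normal, norm

      rng = np.random.default_rng(0)
      n_points = 5
      sigma_rand, sigma_sys = 0.3, 0.5
      model = np.linspace(1.0, 2.0, n_points)           # toy model prediction
      experiment = model + rng.normal(0, sigma_rand, n_points) + rng.normal(0, sigma_sys)

      # (a) Conventional likelihood: multivariate Gaussian with the full covariance,
      #     random variances on the diagonal plus a fully correlated systematic block.
      cov = sigma_rand**2 * np.eye(n_points) + sigma_sys**2 * np.ones((n_points, n_points))
      lik_matrix = multivariate_normal.pdf(experiment, mean=model, cov=cov)

      # (b) Sampling of systematic errors: average the uncorrelated (diagonal)
      #     likelihood over many draws of the common systematic shift.
      n_samples = 200_000
      shifts = rng.normal(0, sigma_sys, n_samples)
      per_sample = norm.pdf(experiment[None, :],
                            loc=model[None, :] + shifts[:, None],
                            scale=sigma_rand).prod(axis=1)
      lik_sampling = per_sample.mean()

      print(lik_matrix, lik_sampling)   # the two estimates agree up to Monte Carlo error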

  16. Modelling of the automatic stabilization system of the aircraft course by a fuzzy logic method

    Science.gov (United States)

    Mamonova, T.; Syryamkin, V.; Vasilyeva, T.

    2016-04-01

    The present paper concerns the development of a fuzzy model of an aircraft course stabilization system. In this work, the aircraft course stabilization system is modelled with the application of fuzzy logic, using data for an ordinary passenger plane. As a result of the study, the stabilization system models were realized in the Matlab Simulink environment on the basis of a PID regulator and fuzzy logic. The authors have shown that the use of this artificial intelligence method allows reducing the regulation time to 1, which is 50 times faster than when standard control theory techniques are used. This fact demonstrates the positive effect of using fuzzy regulation.

  17. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    Directory of Open Access Journals (Sweden)

    M.L. Khodra

    2011-04-01

    Full Text Available This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. For finding the optimal combination of features, corpus-based classification is used for constructing a sentence classifier as the model. The sentence classifier is trained by using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best performer (80.68%) is the SVM classifier with all features.

  18. Automatic Tracing and Segmentation of Rat Mammary Fat Pads in MRI Image Sequences Based on Cartoon-Texture Model

    Institute of Scientific and Technical Information of China (English)

    TU Shengxian; ZHANG Su; CHEN Yazhu; Freedman Matthew T; WANG Bin; XUAN Jason; WANG Yue

    2009-01-01

    The growth patterns of mammary fat pads and of the glandular tissues inside the fat pads may be related to risk factors of breast cancer. Quantitative measurements of this relationship become available after segmentation of the mammary pads and glandular tissues. Rat fat pads may lose continuity along image sequences or adjoin areas of similar intensity such as the epidermis and subcutaneous regions. A new approach for automatic tracing and segmentation of fat pads in magnetic resonance imaging (MRI) image sequences is presented, which does not require that the number of pads be constant or that the spatial location of pads be adjacent among image slices. First, each image is decomposed into a cartoon image and a texture image based on the cartoon-texture model. These are used as the smooth image for segmentation and the feature image for targeting pad seeds, respectively. Then, two-phase direct energy segmentation based on the Chan-Vese active contour model is applied to partition the cartoon image into a set of regions, from which the pad boundary is traced iteratively from the pad seed. A tracing algorithm based on scanning order is proposed to accurately trace the pad boundary, which effectively removes the epidermis attached to the pad without any post-processing and also solves the problem of over-segmentation of small holes inside the pad. The experimental results demonstrate the utility of this approach in the accurate delineation of varying numbers of mammary pads from several sets of MRI images.

  19. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    Science.gov (United States)

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential for artificial retina systems to function as similarly as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in the search for the parameters that best approximate a synthetic retinal model output to real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses. PMID:27354187
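
    The sketch below shows only the Pareto-dominance bookkeeping that an NSGA-II-style search relies on (which candidate parameter sets are non-dominated under several error metrics); it is not the authors' implementation, and the scores are invented.

```python
# Return the indices of non-dominated candidates (lower cost is better in every metric).
import numpy as np

def pareto_front(costs: np.ndarray) -> np.ndarray:
    """costs: (n_candidates, n_metrics). Indices of rows not dominated by any other row."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if j is no worse in every metric and strictly better in at least one.
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Four hypothetical retina-model parameter sets scored on two error metrics.
scores = np.array([[0.2, 0.9], [0.3, 0.4], [0.5, 0.3], [0.6, 0.8]])
print(pareto_front(scores))   # -> [0 1 2]; candidate 3 is dominated by candidate 1
```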

  20. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Science.gov (United States)

    Sun, Shaohui

    2013-01-01

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. Urban reconstruction combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though there is a huge volume of work that has been done, many problems still remain…

  1. An automatic heart wall contour extraction method on MR images using the Active Contour Model

    International Nuclear Information System (INIS)

    In this paper, we propose a new method of extracting heart wall contours using the Active Contour Model (snakes). We use an adaptive contrast-enhancing method, which makes it possible to extract both the inner and outer contours of the left ventricle of the heart. Experimental results showed the efficiency of this method. (author)

  2. On the Integration of Automatic Deployment into the ABS Modeling Language

    NARCIS (Netherlands)

    Gouw, C.P.T. de; Lienhardt, M.; Mauro, J.; Nobakht, B.; Zavattaro, G.; Dustdar, S.; Leymann, F.; Villari, M.

    2015-01-01

    In modern software systems, deployment is an integral and critical part of application development (see, e.g., the DevOps approach to software development). Nevertheless, deployment is usually overlooked at the modeling level, thus losing the possibility to perform deployment conscious decisions dur

  3. AUTOMATIC HUMAN FACE RECOGNITION USING MULTIVARIATE GAUSSIAN MODEL AND FISHER LINEAR DISCRIMINATIVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Surya Kasturi

    2014-08-01

    Face recognition plays an important role in surveillance and biometrics and is a popular application of computer vision. In this paper, color-based skin segmentation is proposed to detect faces, which are then matched with faces from the dataset. The proposed color-based segmentation method is tested in different color spaces to identify a suitable color space for the identification of faces. Based on the sample skin distribution, a Multivariate Gaussian Model is fitted to identify skin regions, from which face regions are detected using connected components. The detected face is matched with a template and verified. The proposed Multivariate Gaussian Model - Fisher Linear Discriminative Analysis (MGM-FLDA) method is compared with the machine-learning Viola-Jones algorithm and gives better results in terms of time.

  4. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    DEFF Research Database (Denmark)

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune;

    2014-01-01

    Background: The golden standard for sleep classification uses manual scoring of polysomnography despite points of criticism such as oversimplification, low inter-rater reliability and the standard being designed on young and healthy subjects. New method: To meet the criticism and reveal the latent ... Latent Dirichlet Allocation. Model application was tested on control subjects and patients with periodic leg movements (PLM) representing a non-neurodegenerative group, and patients with idiopathic REM sleep behavior disorder (iRBD) and Parkinson's Disease (PD) representing a neurodegenerative group. The ... that sleep contains six diverse latent sleep states and that state transitions are continuous processes. Conclusions: The model is generally applicable and may contribute to the research in neurodegenerative diseases and sleep disorders. (C) 2014 Elsevier B.V. All rights reserved.
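
    A minimal sketch of the topic-model step alone, assuming the epochs have already been encoded as count vectors ("EEG words"); the encoding, the counts and the choice of six components are placeholders, not the study's pipeline.

```python
# Latent Dirichlet Allocation fitted to count-encoded sleep epochs (placeholder data).
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(1)
n_epochs, vocab_size = 200, 50
counts = rng.poisson(2.0, size=(n_epochs, vocab_size))    # "word" counts per 30 s epoch

lda = LatentDirichletAllocation(n_components=6, random_state=0)  # six latent sleep states
theta = lda.fit_transform(counts)      # per-epoch mixture over latent states
print(theta[:3].round(2))              # smooth state probabilities -> continuous transitions
```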

  5. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    OpenAIRE

    Wang Chenguang; Li Hongying; Wang Zhong; Wang Yaqun; Wang Ningtao; Wang Zuoheng; Wu Rongling

    2012-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood...

  6. AUTOMATIC HUMAN FACE RECOGNITION USING MULTIVARIATE GAUSSIAN MODEL AND FISHER LINEAR DISCRIMINATIVE ANALYSIS

    OpenAIRE

    Surya Kasturi; Ashoka Vanjare; S.N. Omkar

    2014-01-01

    Face recognition plays an important role in surveillance, biometrics and is a popular application of computer vision. In this paper, color based skin segmentation is proposed to detect faces and is matched with faces from the dataset. The proposed color based segmentation method is tested in different color spaces to identify suitable color space for identification of faces. Based on the sample skin distribution a Multivariate Gaussian Model is fitted to identify skin regions from which face ...

  7. Toward the Automatic Generation of a Semantic VRML Model from Unorganized 3D Point Clouds

    OpenAIRE

    Ben Hmida, Helmi; Cruz, Christophe; Nicolle, Christophe; Boochs, Frank

    2011-01-01

    This paper presents our experience regarding the creation of a 3D semantic facility model out of unorganized 3D point clouds. A knowledge-based detection approach for objects using the OWL ontology language is presented. This knowledge is used to define SWRL detection rules. In addition, the combination of 3D processing built-ins and topological built-ins in SWRL rules aims at combining the geometrical analysis of 3D point clouds with the specialist's knowledge. This combi...

  8. Automatic Adaptation of Resources to Workload requirements in Nested Fork-join Programming Model

    OpenAIRE

    Varisteas, Georgios; Brorsson, Mats

    2012-01-01

    We provide a work-stealing scheduling method for nested fork/join parallelism that is mathematically proven to self-adapt multiprogrammed applications' resource allocation to the current workloads' individual needs while taking available resources into account. The scheduling method scales the allocated resources up when needed and down when possible. The theoretical model has been implemented in the Barrelfish distributed multikernel operating system and demonstrated to function ...

  9. Genetic algorithms used for PWRs refuel management automatic optimization: a new modelling

    International Nuclear Information System (INIS)

    A Genetic Algorithm-based system, linking the computer codes GENESIS 5.0 and ANC through the interface ALGER, has been developed aiming at PWR fuel management optimization. An innovative codification, the Lists Model, has been incorporated into the genetic system; it avoids the use of variants of the standard crossover operator and generates only valid loading patterns in the core. The GENESIS/ALGER/ANC system has been successfully tested in an optimization study for the second cycle of Angra-1. (author)
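
    As a generic illustration of why permutation-style encodings generate only valid loading patterns (this is not the Lists Model itself), the sketch below applies an order-preserving crossover (a simplified OX variant) to two loading patterns; every child is guaranteed to be a permutation of the assembly IDs, so no repair step is needed.

```python
# Order-preserving crossover on fuel-assembly loading patterns: children never duplicate
# or drop an assembly, so every child is a valid pattern.
import random

def order_crossover(parent_a, parent_b, rng=random):
    n = len(parent_a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent_a[i:j + 1]                 # copy a slice from parent A
    fill = [g for g in parent_b if g not in child]     # remaining assemblies in B's order
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

random.seed(3)
pattern_a = list(range(8))            # assembly IDs in core positions 0..7
pattern_b = [7, 3, 0, 5, 1, 6, 2, 4]
child = order_crossover(pattern_a, pattern_b)
print(child, sorted(child) == pattern_a)   # always a permutation -> always a valid pattern
```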

  10. A Weighted Likelihood Ratio of Two Related Negative Hypergeometric Distributions

    Institute of Scientific and Technical Information of China (English)

    Titi Obilade

    2004-01-01

    In this paper we consider some related negative hypergeometric distributions arising from the problem of sampling without replacement from an urn containing balls of different colours and in different proportions, but stopping only after some specific number of balls of different colours has been obtained. With the aid of some simple recurrence relations and identities we obtain, in the case of two colours, the moments for the maximum negative hypergeometric distribution, the minimum negative hypergeometric distribution, the likelihood ratio negative hypergeometric distribution and consequently the likelihood proportional negative hypergeometric distribution. To the extent that the sampling scheme is applicable to modelling data, as illustrated with a biological example, and in fact to many situations of estimating Bernoulli parameters for binary traits within a finite population, these are important first-step results.

  11. Maximum-likelihood fits to histograms for improved parameter estimation

    CERN Document Server

    Fowler, Joseph W

    2013-01-01

    Straightforward methods for adapting the familiar chi^2 statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn K-alpha fluorescence spectrum, a poor choice of chi^2 can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for chi^2 minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
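
    A minimal sketch of a Poisson maximum-likelihood histogram fit, here done with a generic scipy minimiser rather than the modified Levenberg-Marquardt described in the record; the Gaussian-plus-background model and all numbers are invented for illustration.

```python
# Poisson maximum-likelihood fit of a histogram: minimise the Poisson negative
# log-likelihood instead of chi^2, which is biased at low counts.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
centers = np.linspace(-5, 5, 60)

def model(p, x):                        # Gaussian peak on a flat background
    amp, mu, sigma, bkg = p
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

true_p = (100.0, 0.3, 1.2, 5.0)
counts = rng.poisson(model(true_p, centers))

def neg_loglike(p):
    expected = model(p, centers)
    if np.any(expected <= 0):
        return np.inf
    # Poisson log-likelihood up to a parameter-independent constant (log k! dropped).
    return np.sum(expected - counts * np.log(expected))

fit = minimize(neg_loglike, x0=(80.0, 0.0, 1.0, 3.0), method="Nelder-Mead")
print(fit.x)    # estimated (amplitude, mean, sigma, background)
```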

  12. Automatic Black-Box Model Order Reduction using Radial Basis Functions

    Energy Technology Data Exchange (ETDEWEB)

    Stephanson, M B; Lee, J F; White, D A

    2011-07-15

    Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted only to spectral responses. We also discuss automated methods for generating a reduced order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction (MOR) may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight about what order is actually needed to reach the desired accuracy. Finally, it is not clear how to efficiently extend most

  13. A semi-automatic method of generating subject-specific pediatric head finite element models for impact dynamic responses to head injury.

    Science.gov (United States)

    Li, Zhigang; Han, Xiaoqiang; Ge, Hao; Ma, Chunsheng

    2016-07-01

    To account for the effects of realistic variation in head morphological features on the impact dynamic responses relevant to head injury, it is necessary to develop multiple subject-specific pediatric head finite element (FE) models based on computed tomography (CT) or magnetic resonance imaging (MRI) scans. However, traditional manual model development is very time-consuming. In this study, a new automatic method was developed to extract anatomical points from pediatric head CT scans to represent pediatric head morphological features (head size/shape, skull thickness, and suture/fontanel width). Subsequently, a geometry-adaptive mesh morphing method based on radial basis functions was developed that can automatically morph a baseline pediatric head FE model into target FE models with geometries corresponding to the extracted head morphological features. In the end, five subject-specific head FE models of approximately 6-month-old (6MO) subjects were automatically generated using the developed method. These validated models were employed to investigate differences in the head dynamic responses among subjects with different head morphologies. The results show that variations in head morphological features have a relatively large effect on the pediatric head dynamic response. The results of this study indicate that pediatric head morphological variation should be taken into account when reconstructing pediatric head injuries due to traffic/fall accidents or child abuse using computational models, as well as when predicting head injury risk for children with obvious differences in head size and morphology. PMID:27058003
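
    The mesh-morphing ingredient can be illustrated with scipy's RBFInterpolator: fit a radial-basis-function interpolant to the displacements of matched anatomical landmark points and apply it to every node of a baseline mesh. The landmark sets, kernel choice and mesh below are placeholders, not the authors' data or exact formulation.

```python
# RBF morphing sketch: interpolate landmark displacements (baseline -> target head) and
# apply the interpolant to every node of the baseline FE mesh.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
landmarks_base = rng.uniform(-1, 1, size=(30, 3))      # anatomical points, baseline head
landmarks_target = landmarks_base * 1.08 + 0.02        # same points on the target head
displacement = landmarks_target - landmarks_base

morph = RBFInterpolator(landmarks_base, displacement, kernel="thin_plate_spline")

mesh_nodes = rng.uniform(-1, 1, size=(5000, 3))        # baseline FE mesh nodes
morphed_nodes = mesh_nodes + morph(mesh_nodes)         # subject-specific mesh
print(morphed_nodes.shape)
```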

  14. Program CCOM. Coupled-channels optical model calculation with automatic parameter search

    International Nuclear Information System (INIS)

    A new program for coupled-channels optical model calculations has been developed for the evaluation of actinide nuclei. The code is composed of modules with high independence and flexibility, and is written in C++ using object-oriented techniques. The program is capable of fitting the parameters even for several nuclei at the same time. The formulae required in the calculation, details of the numerical treatments and the input parameters are described. Examples of the input file and the output are also shown. (author)

  15. Automatic TCAD model calibration for multi-cellular Trench-IGBTs

    Science.gov (United States)

    Maresca, Luca; Breglio, Giovanni; Irace, Andrea

    2014-01-01

    TCAD simulators are a consolidated tool in the field of semiconductor research because of their predictive capability. However, an accurate calibration of the models is needed in order to get quantitatively accurate results. In this work, a calibration procedure for the TCAD elementary cell, specific to Trench IGBTs with a blocking voltage of 600 V, is presented. It is based on minimizing the error between the experimental and the simulated terminal curves of the device at two temperatures. The procedure is applied to a PT-IGBT, and good predictive capability is shown in the simulation of both the short-circuit and turn-off tests.

  16. FIACH: A biophysical model for automatic retrospective noise control in fMRI.

    Science.gov (United States)

    Tierney, Tim M; Weiss-Croft, Louise J; Centeno, Maria; Shamshiri, Elhum A; Perani, Suejen; Baldeweg, Torsten; Clark, Christopher A; Carmichael, David W

    2016-01-01

    Different noise sources in fMRI acquisition can lead to spurious false positives and reduced sensitivity. We have developed a biophysically-based model (named FIACH: Functional Image Artefact Correction Heuristic) which extends current retrospective noise control methods in fMRI. FIACH can be applied to both General Linear Model (GLM) and resting state functional connectivity MRI (rs-fcMRI) studies. FIACH is a two-step procedure involving the identification and correction of non-physiological large amplitude temporal signal changes and spatial regions of high temporal instability. We have demonstrated its efficacy in a sample of 42 healthy children while performing language tasks that include overt speech with known activations. We demonstrate large improvements in sensitivity when FIACH is compared with current methods of retrospective correction. FIACH reduces the confounding effects of noise and increases the study's power by explaining significant variance that is not contained within the commonly used motion parameters. The method is particularly useful in detecting activations in inferior temporal regions which have proven problematic for fMRI. We have shown greater reproducibility and robustness of fMRI responses using FIACH in the context of task induced motion. In a clinical setting this will translate to increasing the reliability and sensitivity of fMRI used for the identification of language lateralisation and eloquent cortex. FIACH can benefit studies of cognitive development in young children, patient populations and older adults. PMID:26416652

  17. Optimization of Automatic Digital Phase Correction Model of Ka-band Radar

    Institute of Scientific and Technical Information of China (English)

    高山; 刘桂生; 李天宝; 何嘉靖; 贵宇

    2014-01-01

    The inaccuracy of automatic zero tracking in practical engineering may cause the Ka-band radar's automatic digital phase correction check to fail. Aiming at this problem, this paper first analyzes the automatic digital phase correction model theoretically and computes the resolution error; second, combining the data processing algorithm with the performance characteristics of the equipment, it puts forward optimization suggestions which solve the problem and further enhance the applicability of the automatic digital phase correction model.

  18. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for the surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of the wall distance, the element size on the wall, and the element size at the center of the airway lumen. The element sizes on the wall are computed based on the local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
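
    A toy version of a background-mesh sizing rule of the kind described (element size growing from a fine value at the airway wall to a coarser value at the lumen centre); the linear blend and the numbers are assumptions for illustration, not the authors' actual sizing function.

```python
# Element size as a function of wall distance: fine at the wall, coarser toward the centre.
import numpy as np

def element_size(wall_distance, boundary_layer, size_wall, size_center):
    """Grow linearly from the wall size to the centre size across the boundary layer."""
    t = np.clip(wall_distance / boundary_layer, 0.0, 1.0)
    return size_wall + t * (size_center - size_wall)

d = np.array([0.0, 0.1, 0.5, 1.0, 2.0])   # distance from the airway wall, in mm
print(element_size(d, boundary_layer=1.0, size_wall=0.05, size_center=0.4))
```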

  19. A Sensitive and Automatic White Matter Fiber Tracts Model for Longitudinal Analysis of Diffusion Tensor Images in Multiple Sclerosis.

    Science.gov (United States)

    Stamile, Claudio; Kocevar, Gabriel; Cotton, François; Durand-Dubief, Françoise; Hannoun, Salem; Frindel, Carole; Guttmann, Charles R G; Rousseau, David; Sappey-Marinier, Dominique

    2016-01-01

    Diffusion tensor imaging (DTI) is a sensitive tool for the assessment of microstructural alterations in brain white matter (WM). We propose a new processing technique to detect local and global longitudinal changes of diffusivity metrics in homologous regions along WM fiber bundles. To this end, a reliable and automatic processing pipeline was developed in three steps: 1) co-registration and diffusion metrics computation, 2) tractography, bundle extraction and processing, and 3) longitudinal fiber-bundle analysis. The last step was based on an original Gaussian mixture model providing a fine analysis of fiber-bundle cross-sections, and allowing a sensitive detection of longitudinal changes along fibers. This method was tested on simulated and clinical data. High levels of F-Measure were obtained on simulated data. Experiments on the cortico-spinal tract and inferior fronto-occipital fasciculi of five patients with Multiple Sclerosis (MS) included in a weekly follow-up protocol highlighted the greater sensitivity of this fiber-scale approach to detect small longitudinal alterations. PMID:27224308
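
    Only the final statistical ingredient is sketched below: fitting a Gaussian mixture to the diffusivity values collected in one fiber-bundle cross-section at two time points, so that a shift of a mixture component can be inspected. The data, the metric (FA) and the two-component choice are placeholders, not the authors' model.

```python
# Gaussian mixture fitted to per-cross-section diffusivity values at two time points.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Fractional anisotropy of streamline points crossing one section (two sub-populations).
fa_baseline = np.concatenate([rng.normal(0.55, 0.03, 80), rng.normal(0.40, 0.04, 20)])
fa_followup = np.concatenate([rng.normal(0.52, 0.03, 80), rng.normal(0.40, 0.04, 20)])

gmm_base = GaussianMixture(n_components=2, random_state=0).fit(fa_baseline.reshape(-1, 1))
gmm_follow = GaussianMixture(n_components=2, random_state=0).fit(fa_followup.reshape(-1, 1))
print(np.sort(gmm_base.means_.ravel()), np.sort(gmm_follow.means_.ravel()))
```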

  20. A Sensitive and Automatic White Matter Fiber Tracts Model for Longitudinal Analysis of Diffusion Tensor Images in Multiple Sclerosis

    Science.gov (United States)

    Stamile, Claudio; Kocevar, Gabriel; Cotton, François; Durand-Dubief, Françoise; Hannoun, Salem; Frindel, Carole; Guttmann, Charles R. G.; Rousseau, David; Sappey-Marinier, Dominique

    2016-01-01

    Diffusion tensor imaging (DTI) is a sensitive tool for the assessment of microstructural alterations in brain white matter (WM). We propose a new processing technique to detect local and global longitudinal changes of diffusivity metrics in homologous regions along WM fiber bundles. To this end, a reliable and automatic processing pipeline was developed in three steps: 1) co-registration and diffusion metrics computation, 2) tractography, bundle extraction and processing, and 3) longitudinal fiber-bundle analysis. The last step was based on an original Gaussian mixture model providing a fine analysis of fiber-bundle cross-sections, and allowing a sensitive detection of longitudinal changes along fibers. This method was tested on simulated and clinical data. High levels of F-Measure were obtained on simulated data. Experiments on the cortico-spinal tract and inferior fronto-occipital fasciculi of five patients with Multiple Sclerosis (MS) included in a weekly follow-up protocol highlighted the greater sensitivity of this fiber-scale approach to detect small longitudinal alterations. PMID:27224308

  1. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    International Nuclear Information System (INIS)

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org. (orig.)

  2. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip [Bonn Univ. (Germany). Physikalisches Inst.; Heinemeyer, Sven [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Staal, Oscar [Stockholm Univ. (Sweden). Dept. of Physics; Stefaniak, Tim [California Univ., Santa Cruz, CA (United States). Santa Cruz Inst. of Particle Physics (SCIPP); Weiglein, Georg [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2015-10-15

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as Supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org.

  3. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    International Nuclear Information System (INIS)

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as Supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org.

  4. Modeling complexity in pathologist workload measurement: the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS).

    Science.gov (United States)

    Cheung, Carol C; Torlakovic, Emina E; Chow, Hung; Snover, Dale C; Asa, Sylvia L

    2015-03-01

    Pathologists provide diagnoses relevant to the disease state of the patient and identify specific tissue characteristics relevant to response to therapy and prognosis. As personalized medicine evolves, there is a trend for increased demand of tissue-derived parameters. Pathologists perform increasingly complex analyses on the same 'cases'. Traditional methods of workload assessment and reimbursement, based on number of cases sometimes with a modifier (eg, the relative value unit (RVU) system used in the United States), often grossly underestimate the amount of work needed for complex cases and may overvalue simple, small biopsy cases. We describe a new approach to pathologist workload measurement that aligns with this new practice paradigm. Our multisite institution with geographically diverse partner institutions has developed the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS) model that captures pathologists' clinical activities from parameters documented in departmental laboratory information systems (LISs). The model's algorithm includes: 'capture', 'export', 'identify', 'count', 'score', 'attribute', 'filter', and 'assess filtered results'. Captured data include specimen acquisition, handling, analysis, and reporting activities. Activities were counted and complexity units (CUs) generated using a complexity factor for each activity. CUs were compared between institutions, practice groups, and practice types and evaluated over a 5-year period (2008-2012). The annual load of a clinical service pathologist, irrespective of subspecialty, was ∼40,000 CUs using relative benchmarking. The model detected changing practice patterns and was appropriate for monitoring clinical workload for anatomical pathology, neuropathology, and hematopathology in academic and community settings, and encompassing subspecialty and generalist practices. AABACUS is objective, can be integrated with an LIS and automated, is reproducible, backwards compatible

  5. Modelling and automatic reactive power control of isolated wind-diesel hybrid power systems using ANN

    International Nuclear Information System (INIS)

    This paper presents an artificial neural network (ANN) based approach to tune the parameters of the static var compensator (SVC) reactive power controller over a wide range of typical load model parameters. The gains of the PI (proportional-integral) based SVC are optimised for typical values of the load voltage characteristic (nq) by conventional techniques. Using the generated data, a multi-layer feed-forward ANN with error back-propagation training is employed to tune the parameters of the SVC. An ANN-tuned SVC controller has been applied to control the reactive power of a variable slip/speed isolated wind-diesel hybrid power system. It is observed that the maximum deviations of all parameters are larger for larger values of nq. It has been shown that initially the synchronous generator supplies the reactive power required by the induction generator and/or load, and the latter reactive power is supplied purely by the SVC

  6. Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems

    International Nuclear Information System (INIS)

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class super computer. (authors)

  7. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly
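
    A minimal sketch of an overlap volume histogram of the kind such QA models are built on: for each distance r, the fraction of organ-at-risk voxels lying within r of the target surface. The voxel coordinates below are random placeholders; a real implementation would use signed distances and actual clinical structures.

```python
# Overlap volume histogram: cumulative fraction of OAR voxels within distance r of the target.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
target_surface = rng.uniform(0, 50, size=(2000, 3))   # mm coordinates of target surface voxels
oar_voxels = rng.uniform(0, 80, size=(5000, 3))       # e.g. rectum voxels

dist = cKDTree(target_surface).query(oar_voxels)[0]   # distance of each OAR voxel to the target
radii = np.linspace(0, dist.max(), 50)
ovh = [(dist <= r).mean() for r in radii]             # cumulative overlap fraction

for r, f in zip(radii[::10], ovh[::10]):
    print(f"r = {r:5.1f} mm: {100 * f:5.1f}% of OAR within r of the target")
```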

  8. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate

  9. Automatic segmentation of lymph vessel wall using optimal surface graph cut and hidden Markov Models.

    Science.gov (United States)

    Jones, Jonathan-Lee; Essa, Ehab; Xie, Xianghua

    2015-08-01

    We present a novel method to segment the lymph vessel wall in confocal microscopy images using Optimal Surface Segmentation (OSS) and hidden Markov Models (HMM). OSS is used to perform a pre-segmentation of the images, which acts as the initial state for the HMM. We utilize a steerable filter to determine edge-based features for both of these segmentations, and use these features to build Gaussian probability distributions for both the vessel walls and the background. From these we infer the emission probabilities for the HMM, while the transition probabilities are learned using a Baum-Welch algorithm. We transform the segmentation problem into one of cost minimization, with each node in the graph corresponding to one state, and the weight for each node being defined using its emission probability. We define the inter-relations between neighboring nodes using the transition probability. Having constructed the problem, it is solved using the Viterbi algorithm, allowing the vessel to be reconstructed. The optimal solution can be found in polynomial time. We present qualitative and quantitative analysis to show the performance of the proposed method. PMID:26736778
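
    The decoding step can be illustrated with a textbook Viterbi implementation over log-probabilities (this is generic code, not the authors'); the two states stand for vessel wall and background, and all probabilities are invented.

```python
# Viterbi decoding: recover the most probable state sequence from emission and
# transition log-probabilities.
import numpy as np

def viterbi(log_emit, log_trans, log_start):
    """log_emit: (T, S), log_trans: (S, S), log_start: (S,). Returns the best state path."""
    T, S = log_emit.shape
    score = log_start + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans       # (previous state, next state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states (0 = vessel wall, 1 = background) over five observations, toy numbers.
log_emit = np.log(np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.2, 0.8], [0.7, 0.3]]))
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_start = np.log(np.array([0.5, 0.5]))
print(viterbi(log_emit, log_trans, log_start))   # most probable wall/background labelling
```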

  10. Regularized Maximum Likelihood for Intrinsic Dimension Estimation

    CERN Document Server

    Gupta, Mithun Das

    2012-01-01

    We propose a new method for estimating the intrinsic dimension of a dataset by applying the principle of regularized maximum likelihood to the distances between close neighbors. We propose a regularization scheme which is motivated by divergence minimization principles. We derive the estimator by a Poisson process approximation, argue about its convergence properties and apply it to a number of simulated and real datasets. We also show it has the best overall performance compared with two other intrinsic dimension estimators.
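
    For orientation, the sketch below computes the classical (unregularised) maximum-likelihood intrinsic-dimension estimate from nearest-neighbour distances, the quantity the record's regularised estimator builds on; the data set and neighbourhood size are arbitrary choices.

```python
# Unregularised Levina-Bickel-style MLE of intrinsic dimension from k-NN distances.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_intrinsic_dim(X, k=10):
    """Per-point maximum-likelihood estimate of local intrinsic dimension."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    dist = dist[:, 1:]                                   # drop the zero self-distance
    # Inverse of the mean log-ratio of the k-th neighbour distance to the closer ones.
    logs = np.log(dist[:, -1][:, None] / dist[:, :-1])
    return 1.0 / logs.mean(axis=1)

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 3))
X = latent @ rng.normal(size=(3, 10))                    # a 3-D manifold embedded in 10-D
print(mle_intrinsic_dim(X).mean())                       # should be close to 3
```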

  11. What Determines the Likelihood of Structural Reforms?

    OpenAIRE

    Agnello, Luca; Castro, Vitor; Jalles, João Tovar; Sousa, Ricardo M.

    2014-01-01

    We use data for a panel of 60 countries over the period 1980-2005 to investigate the main drivers of the likelihood of structural reforms. We find that: (i) external debt crises are the main trigger of financial and banking reforms; (ii) inflation and banking crises are the key drivers of external capital account reforms; (iii) banking crises also hasten financial reforms; and (iv) economic recessions play an important role in promoting the necessary consensus for financial, capital, banking ...

  12. Corporate governance effect on financial distress likelihood: Evidence from Spain

    Directory of Open Access Journals (Sweden)

    Montserrat Manzaneque

    2016-01-01

    The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half of them classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, as in other previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and of the proportion of independent directors on the likelihood of business failure is similar to that exerted in more extreme situations. These results go one step further, showing a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a way of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on the likelihood of financial distress in the Spanish context. It is argued that large shareholders are passive as regards enhanced monitoring of management and, alternatively, do not have enough incentives to hold back financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements have been carried out with respect to corporate governance, and where there is no empirical evidence in this respect.

  13. Database likelihood ratios and familial DNA searching

    CERN Document Server

    Slooten, Klaas

    2012-01-01

    Familial Searching is the process of searching in a DNA database for relatives of a given individual. It is well known that in order to evaluate the genetic evidence in favour of a certain given form of relatedness between two individuals, one needs to calculate the appropriate likelihood ratio, which is in this context called a Kinship Index. Suppose that the database contains, for a given type of relative, at most one related individual. Given prior probabilities of being the relative for all persons in the database, we derive the likelihood ratio for each database member in favour of being that relative. This likelihood ratio takes all the Kinship Indices between target and members of the database into account. We also compute the corresponding posterior probabilities. We then discuss two ways of selecting a subset from the database that contains the relative with a known probability, or at least a useful lower bound thereof. We discuss the relation between these approaches and illustrate them with Familia...
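
    A simplified numerical illustration of combining priors with kinship indices: the formula below is the straightforward Bayesian combination (posterior proportional to prior times likelihood ratio), not necessarily the exact expression derived in the paper, and all numbers are invented.

```python
# Rank database members as the possible relative from priors and kinship indices.
import numpy as np

kinship_index = np.array([3200.0, 45.0, 1.2, 0.8])    # LR for "is the relative", per member
prior = np.array([0.001, 0.001, 0.001, 0.001])         # prior probability per member
prior_outside = 1.0 - prior.sum()                      # relative not in the database (LR = 1)

joint = np.append(prior * kinship_index, prior_outside)
posterior = joint / joint.sum()

for i, p in enumerate(posterior[:-1]):
    print(f"member {i}: posterior {p:.3f}")
print(f"relative not in database: {posterior[-1]:.3f}")
```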

  14. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct to refer to the psychologic......, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses....

  15. An Analytic Linear Accelerator Source Model for Monte Carlo dose calculations. II. Model Utilization in a GPU-based Monte Carlo Package and Automatic Source Commissioning

    CERN Document Server

    Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-01-01

    We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space ring (PSR). This ring structure makes it effective at accounting for beam rotational symmetry, but not suitable for dose calculations due to rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in the dose calculation, the different types of sub-sources are sampled separately. Source sampling and particle transport are iterated, so that the particles being sampled and transported simultaneously are of the same type and close in energy, which alleviates GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of the PSRs, determined by solving a quadratic minimization ...

  16. Evaluating the use of verbal probability expressions to communicate likelihood information in IPCC reports

    Science.gov (United States)

    Harris, Adam

    2014-05-01

    The Intergovernmental Panel on Climate Change (IPCC) prescribes that the communication of risk and uncertainty information pertaining to scientific reports, model predictions etc. be communicated with a set of 7 likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.

  17. Exploiting Syntactic Structure for Natural Language Modeling

    OpenAIRE

    Chelba, Ciprian

    2000-01-01

    The thesis presents an attempt at using the syntactic structure in natural language for improved language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood reestimation procedure belonging to the class of expectation-maximization algorithms is employed for training the model. Experiments on the Wall Street Journal, Switchboard ...

  18. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch [Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne (Switzerland); Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne (Switzerland); De Zanet, Sandro I.; Rüegsegger, Michael B. [Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Department of Ophthalmology, Inselspital, Bern University Hospital, Bern (Switzerland); Pica, Alessia [Department of Radiation Oncology, Inselspital, Bern University Hospital, Bern (Switzerland); Sznitman, Raphael [Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Department of Ophthalmology, Inselspital, Bern University Hospital, Bern (Switzerland); Thiran, Jean-Philippe [Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne (Switzerland); Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne (Switzerland); Maeder, Philippe [Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne (Switzerland); Munier, Francis L. [Unit of Pediatric Ocular Oncology, Jules Gonin Eye Hospital, Lausanne (Switzerland); Kowal, Jens H. [Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Department of Ophthalmology, Inselspital, Bern University Hospital, Bern (Switzerland); and others

    2015-07-15

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.

  19. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    International Nuclear Information System (INIS)

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for treatment of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy for MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor

  20. Likelihood free inference for Markov processes: a comparison.

    Science.gov (United States)

    Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S

    2015-04-01

    Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space. PMID:25720092
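
    A minimal ABC rejection sketch in the spirit of the comparison described here: simulate a stochastic Lotka-Volterra model with the Gillespie algorithm and accept prior draws whose simulated summary lies close to the pseudo-observed one. The summary statistic, tolerance, priors and rate values are placeholder choices, not those of the paper.

```python
# ABC rejection for one rate of a stochastic Lotka-Volterra model (Gillespie simulation).
import numpy as np

rng = np.random.default_rng(0)

def gillespie_lv(c1, c2, c3, x0=20, y0=20, t_max=5.0, max_steps=20_000):
    """Prey birth (c1), predation (c2), predator death (c3); returns final (prey, predators)."""
    x, y, t = x0, y0, 0.0
    for _ in range(max_steps):
        rates = np.array([c1 * x, c2 * x * y, c3 * y])
        total = rates.sum()
        if total == 0.0 or t >= t_max:
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(3, p=rates / total)
        if r == 0:
            x += 1          # prey birth
        elif r == 1:
            x -= 1          # predation: one prey eaten,
            y += 1          # one predator born
        else:
            y -= 1          # predator death
    return x, y

obs = gillespie_lv(1.0, 0.02, 0.5)               # pseudo-observed data, true c1 = 1.0

accepted = []
for _ in range(200):
    c1 = rng.uniform(0.5, 1.5)                   # draw the prey birth rate from its prior
    sim = gillespie_lv(c1, 0.02, 0.5)
    if abs(sim[0] - obs[0]) + abs(sim[1] - obs[1]) < 20:   # crude distance and tolerance
        accepted.append(c1)

print(len(accepted), "accepted; posterior mean c1 =",
      round(float(np.mean(accepted)), 3) if accepted else "n/a")
```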