WorldWideScience

Sample records for automatic likelihood model

  1. In all likelihood: statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric mode

  2. Model Fit after Pairwise Maximum Likelihood.

    Science.gov (United States)

    Barendse, M T; Ligtvoet, R; Timmerman, M E; Oort, F J

    2016-01-01

    Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations. PMID:27148136
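    The pairwise idea above is concrete enough to sketch. The following Python fragment (an illustration, not the authors' code) evaluates a single pairwise term: the two-way table of two ordinal items is modelled through an underlying bivariate normal with assumed known thresholds, and the polychoric correlation is estimated by maximizing the bivariate log-likelihood. The full PML objective is the sum of such terms over all item pairs; the table and thresholds below are invented.

```python
# Sketch of the pairwise maximum likelihood (PML) idea: instead of the full
# multivariate likelihood, sum bivariate log-likelihoods of two-way tables.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize_scalar

def cell_prob(rho, lo, hi):
    """P(lo1 < Z1 <= hi1, lo2 < Z2 <= hi2) for a standard bivariate normal."""
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda a, b: multivariate_normal.cdf([a, b], mean=[0, 0], cov=cov)
    lo = np.clip(lo, -8, 8); hi = np.clip(hi, -8, 8)   # avoid +/- infinity
    return F(hi[0], hi[1]) - F(lo[0], hi[1]) - F(hi[0], lo[1]) + F(lo[0], lo[1])

def pair_loglik(rho, table, thr1, thr2):
    """Log-likelihood of one two-way contingency table of ordinal responses."""
    cuts1 = np.concatenate(([-np.inf], thr1, [np.inf]))
    cuts2 = np.concatenate(([-np.inf], thr2, [np.inf]))
    ll = 0.0
    for i in range(table.shape[0]):
        for j in range(table.shape[1]):
            p = cell_prob(rho, np.array([cuts1[i], cuts2[j]]),
                               np.array([cuts1[i + 1], cuts2[j + 1]]))
            ll += table[i, j] * np.log(max(p, 1e-12))
    return ll

# Example: one item pair, 3 categories each, thresholds assumed known.
table = np.array([[30., 10., 5.], [12., 40., 15.], [4., 18., 36.]])
thr = np.array([-0.5, 0.8])
fit = minimize_scalar(lambda r: -pair_loglik(r, table, thr, thr),
                      bounds=(-0.95, 0.95), method="bounded")
print("pairwise ML estimate of the polychoric correlation:", round(fit.x, 3))
# The full PML objective sums such terms over all item pairs.
```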

  3. Likelihood analysis of the I(2) model

    DEFF Research Database (Denmark)

    Johansen, Søren

    1997-01-01

    The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum...

  4. Bayesian model comparison with intractable likelihoods

    CERN Document Server

    Everitt, Richard G; Rowing, Ellen; Evdemon-Hogan, Melina

    2015-01-01

    Markov random field models are widely used in computer science, statistical physics, spatial statistics and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to their intractable likelihood functions. Several methods have been developed that permit exact, or close to exact, simulation from the posterior distribution. However, estimating the evidence and Bayes' factors (BFs) for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating BFs that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of biased weight estimates; an initial investigation into the theoretical and empirical properties of this class of methods is presented.

  5. Evaluating Network Models: A Likelihood Analysis

    CERN Document Server

    Wang, Wen-Qiang; Zhou, Tao

    2011-01-01

    Many models are put forward to mimic the evolution of real networked systems. A well-accepted way to judge the validity is to compare the modeling results with real networks subject to several structural features. Even for a specific real network, we cannot fairly evaluate the goodness of different models since there are too many structural features while there is no criterion to select and assign weights on them. Motivated by the studies on link prediction algorithms, we propose a unified method to evaluate the network models via the comparison of the likelihoods of the currently observed network driven by different models, with an assumption that the higher the likelihood is, the better the model is. We test our method on the real Internet at the Autonomous System (AS) level, and the results suggest that the Generalized Linear Preferential (GLP) model outperforms the Tel Aviv Network Generator (Tang), while both models are better than the Barabási-Albert (BA) and Erdős-Rényi (ER) models. Our metho...

  6. Inference in HIV dynamics models via hierarchical likelihood

    CERN Document Server

    Commenges, D; Putter, H; Thiebaut, R

    2010-01-01

    HIV dynamical models are often based on non-linear systems of ordinary differential equations (ODE), which do not have an analytical solution. Introducing random effects in such models leads to very challenging non-linear mixed-effects models. To avoid the numerical computation of multiple integrals involved in the likelihood, we propose a hierarchical likelihood (h-likelihood) approach, treated in the spirit of a penalized likelihood. We give the asymptotic distribution of the maximum h-likelihood estimators (MHLE) for fixed effects, a result that may be relevant in a more general setting. The MHLE are slightly biased but the bias can be made negligible by using a parametric bootstrap procedure. We propose an efficient algorithm for maximizing the h-likelihood. A simulation study, based on a classical HIV dynamical model, confirms the good properties of the MHLE. We apply it to the analysis of a clinical trial.

  7. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can then be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of IMM, especially when a maneuver occurs.
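    A minimal numerical sketch of the convolution step described above, using Gaussian stand-ins for the two known densities (the innovation and model-distance distributions, their means and variances, and the grid are illustrative choices, not taken from the paper):

```python
# The actual innovation is the sum of the theoretical (matched-model) innovation
# and the distance between matched and mismatched models, so its density is the
# convolution of the two known densities.
import numpy as np
from scipy.stats import norm

dx = 0.01
x = np.arange(-20, 20, dx)
p_innov = norm.pdf(x, loc=0.0, scale=1.5)   # theoretical innovation density
p_dist  = norm.pdf(x, loc=2.0, scale=1.0)   # density of the model distance

# Density of the sum via numerical convolution (grid of the sum starts at 2*x[0]).
p_sum = np.convolve(p_innov, p_dist) * dx
x_sum = 2 * x[0] + dx * np.arange(p_sum.size)

def likelihood(actual_innovation):
    """Evaluate the convolved density at the observed innovation."""
    return np.interp(actual_innovation, x_sum, p_sum)

# Sanity check: for Gaussians the convolution is again Gaussian.
z = 1.7
print(likelihood(z), norm.pdf(z, loc=2.0, scale=np.hypot(1.5, 1.0)))
```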

  8. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
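    As a minimal sketch of fitting a two-component normal mixture by maximum likelihood, the EM iteration below alternates posterior membership probabilities (E-step) with weighted parameter updates (M-step). The simulated series stands in for the stock market and rubber price data; it is not the authors' dataset.

```python
# Maximum likelihood fit of a two-component normal mixture via EM.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 0.5, 400), rng.normal(1.5, 0.8, 600)])

# Initial guesses
w, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
for _ in range(200):
    # E-step: posterior probability that each point belongs to component 1
    d1, d2 = w * norm.pdf(x, mu1, s1), (1 - w) * norm.pdf(x, mu2, s2)
    r = d1 / (d1 + d2)
    # M-step: update mixing weight, means and standard deviations
    w = r.mean()
    mu1, mu2 = np.average(x, weights=r), np.average(x, weights=1 - r)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=r))
    s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - r))

loglik = np.sum(np.log(w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)))
print(f"w={w:.2f}, mu=({mu1:.2f}, {mu2:.2f}), sigma=({s1:.2f}, {s2:.2f}), loglik={loglik:.1f}")
```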

  9. EMPIRICAL LIKELIHOOD FOR LINEAR MODELS UNDER m-DEPENDENT ERRORS

    Institute of Scientific and Technical Information of China (English)

    QinYongsong; JiangBo; LiYufang

    2005-01-01

    In this paper, the empirical likelihood confidence regions for the regression coefficient in a linear model are constructed under m-dependent errors. It is shown that the blockwise empirical likelihood is a good way to deal with dependent samples.

  10. Gaussian Process Pseudo-Likelihood Models for Sequence Labeling

    OpenAIRE

    Srijith, P. K.; Balamurugan, P.; Shevade, Shirish

    2014-01-01

    Several machine learning problems arising in natural language processing can be modeled as a sequence labeling problem. We provide Gaussian process models based on pseudo-likelihood approximation to perform sequence labeling. Gaussian processes (GPs) provide a Bayesian approach to learning in a kernel based framework. The pseudo-likelihood model enables one to capture long range dependencies among the output components of the sequence without becoming computationally intractable. We use an ef...

  11. MAXIMUM LIKELIHOOD ESTIMATION IN GENERALIZED GAMMA TYPE MODEL

    Directory of Open Access Journals (Sweden)

    Vinod Kumar

    2010-01-01

    Full Text Available In the present paper, the maximum likelihood estimates of the two parameters of a generalized gamma type model have been obtained directly by solving the likelihood equations as well as by reparametrizing the model first and then solving the likelihood equations (as done by Prentice, 1974) for fixed values of the third parameter. It is found that reparametrization neither reduces the bulk nor the complexity of calculations, as claimed by Prentice (1974). The procedure has been illustrated with the help of an example. The distribution of the MLE of q along with its properties has also been obtained.
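    A hedged sketch of the direct approach, i.e. numerically maximizing the likelihood with the third parameter held fixed. SciPy's gengamma parametrisation (a, c, scale) is used for convenience and need not match the paper's; the data are simulated.

```python
# Direct maximum likelihood for a generalized gamma model, third parameter fixed.
import numpy as np
from scipy.stats import gengamma
from scipy.optimize import minimize

rng = np.random.default_rng(1)
c_fixed = 1.5                                    # third parameter held fixed
data = gengamma.rvs(a=2.0, c=c_fixed, scale=3.0, size=500, random_state=rng)

def negloglik(params):
    a, scale = params
    if a <= 0 or scale <= 0:
        return np.inf
    return -np.sum(gengamma.logpdf(data, a=a, c=c_fixed, scale=scale))

fit = minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE of (a, scale) with c fixed:", np.round(fit.x, 3))
```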

  12. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
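    The tapering idea can be sketched generically: each pair of sites contributes its bivariate log-density, weighted by an indicator of whether the pair distance is within the taper range. The bivariate Gaussian density below is only a stand-in for the bivariate max-stable densities used in the paper, and the sites, data and taper range are invented.

```python
# Tapered pairwise composite log-likelihood: distant pairs get weight zero.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.spatial.distance import cdist

def tapered_cl(theta, y, coords, taper_range):
    """Sum of weighted bivariate log-densities over site pairs."""
    sill, decay = theta
    dists = cdist(coords, coords)
    ll = 0.0
    n = coords.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            w = 1.0 if dists[i, j] <= taper_range else 0.0   # taper weight
            if w == 0.0:
                continue
            rho = np.exp(-dists[i, j] / decay)               # pair correlation
            cov = sill * np.array([[1.0, rho], [rho, 1.0]])
            ll += w * multivariate_normal.logpdf(y[:, [i, j]],
                                                 mean=np.zeros(2), cov=cov).sum()
    return ll

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(15, 2))       # 15 sites
y = rng.normal(size=(30, 15))                   # 30 replicates per site
print(tapered_cl(theta=(1.0, 3.0), y=y, coords=coords, taper_range=4.0))
```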

  13. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d-b; where d ≥ b > 1/2 are parameters to be estimated. We model the data X1,...,XT given the initial...

  14. Likelihood Inference for a Nonstationary Fractional Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    This paper discusses model based inference in an autoregressive model for fractional processes based on the Gaussian likelihood. The model allows for the process to be fractional of order d or d - b, where d ≥ b > 1/2 are parameters to be estimated. We model the data X_{1}, ..., X_{T} given the initial...

  15. Maximum likelihood estimation for the bombing model

    NARCIS (Netherlands)

    Lieshout, M.N.M. van; Zwet, E.W. van

    2000-01-01

    Perhaps the best known example of a random set is the Boolean model. It is the union of `grains' such as discs, squares or triangles which are placed at the points of a Poisson point process. The Poisson points are called the `germs'. We are interested in estimating the intensity, say lambda, of the

  16. Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β...

  17. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β...

  18. Empirical likelihood-based evaluations of Value at Risk models

    Institute of Scientific and Technical Information of China (English)

    WEI ZhengHong; WEN SongQiao; ZHU LiXing

    2009-01-01

    Value at Risk (VaR) is a basic and very useful tool in measuring market risks. Numerous VaR models have been proposed in the literature. Therefore, it is of great interest to evaluate the efficiency of these models, and to select the most appropriate one. In this paper, we propose to use the empirical likelihood approach to evaluate these models. Simulation results and real life examples show that the empirical likelihood method is more powerful and more robust than some of the asymptotic methods available in the literature.

  19. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    DEFF Research Database (Denmark)

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel;

    2011-01-01

    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem. Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number of...
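    A toy illustration of the automatic-differentiation idea that ADMB builds on (this is not ADMB and not its API): forward-mode dual numbers propagate exact derivatives through the log-likelihood, and the resulting score drives a simple gradient ascent. The Poisson example and step size are illustrative.

```python
# Forward-mode automatic differentiation with dual numbers, applied to a
# Poisson log-likelihood whose exact score is then used for gradient ascent.
import math

class Dual:
    """Value together with its derivative w.r.t. a single parameter."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def log(self):
        return Dual(math.log(self.val), self.der / self.val)

def poisson_loglik(lam, counts):
    """log L(lambda) = sum_i [ y_i * log(lambda) - lambda ], dropping log(y_i!)."""
    total = Dual(0.0)
    for y in counts:
        total = total + y * lam.log() + (-1.0) * lam
    return total

counts = [3, 1, 4, 1, 5, 9, 2, 6]
lam = 2.0
for _ in range(200):                        # gradient ascent using the AD score
    score = poisson_loglik(Dual(lam, 1.0), counts).der
    lam += 0.1 * score
print("MLE of lambda:", round(lam, 3), "sample mean:", sum(counts) / len(counts))
```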

  20. HALM: A Hybrid Asperity Likelihood Model for Italy

    Science.gov (United States)

    Gulia, L.; Wiemer, S.

    2009-04-01

    The Asperity Likelihood Model (ALM), first developed and currently tested for California, hypothesizes that small-scale spatial variations in the b-value of the Gutenberg and Richter relationship play a central role in forecasting future seismicity (Wiemer and Schorlemmer, SRL, 2007). The physical basis of the model is the concept that the local b-value is inversely dependent on applied shear stress. Thus low b-values indicate patches where large events are more likely to be generated, whereas the high b-values (b > 1.1) found, for example, in creeping sections of faults suggest a lower seismic hazard. To test this model in a reproducible and prospective way suitable for the requirements of the CSEP initiative (www.cseptesting.org), the b-value variability is mapped on a grid. First, using the entire dataset above the overall magnitude of completeness, the regional b-value is estimated. This value is then compared to the one locally estimated at each grid node for a number of radii, and we use the local value if its likelihood score, corrected for the degrees of freedom using the Akaike Information Criterion, suggests doing so. We are currently calibrating the ALM model for implementation in the Italian testing region, the first region within the CSEP EU testing Center (eu.cseptesting.org) for which fully prospective tests of earthquake likelihood models will commence in Europe. We are also developing a modified approach, a 'hybrid' between a grid-based and a zoning one: the HALM (Hybrid Asperity Likelihood Model). According to HALM, the Italian territory is divided into three distinct regions depending on the main tectonic elements, combined with knowledge derived from GPS networks, seismic profile interpretation, borehole breakouts and the focal mechanisms of the events. The local b-value variability was thus mapped using three independent overall b-values. We evaluate the performance of the two models in retrospective tests using the standard CSEP likelihood test.

  1. A model independent safeguard for unbinned Profile Likelihood

    CERN Document Server

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny

    2016-01-01

    We present a general method to include residual un-modeled background shape uncertainties in profile likelihood based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over constrained confidence interval, and allows a natural transition to unbinned space. Unbinned likelihood enhances the sensitivity and allows optimal usage of information for the data and the models. We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

  2. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...

  3. Automatic programming of simulation models

    Science.gov (United States)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation systems. Specific emphasis is on the design and development of simulation tools to assist the modeler define or construct a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  4. On penalized maximum likelihood estimation of approximate factor models

    OpenAIRE

    Wang, Shaoxin; Yang, Hu; Yao, Chaoli

    2016-01-01

    In this paper, we mainly focus on the estimation of the high-dimensional approximate factor model. We rewrite the estimation of the error covariance matrix in a new form which shares similar properties with the penalized maximum likelihood covariance estimator given by Bien and Tibshirani (2011). Based on Lagrangian duality, we propose an APG algorithm to give a positive definite estimate of the error covariance matrix. The new algorithm for the estimation of the approximate factor model has a desirable...

  5. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2012-01-01

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0

  6. Maximum Likelihood and the Bootstrap for Nonlinear Dynamic Models

    OpenAIRE

    Goncalves, Silvia; White, Halbert

    2002-01-01

    The bootstrap is an increasingly popular method for performing statistical inference. This paper provides the theoretical foundation for using the bootstrap as a valid tool of inference for quasi-maximum likelihood estimators (QMLE). We provide a unified framework for analyzing bootstrapped extremum estimators of nonlinear dynamic models for heterogeneous dependent stochastic processes. We apply our results to two block bootstrap methods, the moving blocks bootstrap of Künsch (1989) and Liu a...

  7. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
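    A simplified sketch of the likelihood idea for a periodic AR model: a periodic AR(1) with period 12 and a Gaussian likelihood conditional on the first observation. The exact PARMA(1,1) likelihood and the algorithm of the paper are not reproduced here; data and parameter values are simulated.

```python
# Conditional Gaussian likelihood for a periodic AR(1) model with period 12.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
period, n = 12, 600
phi_true = 0.3 + 0.4 * np.cos(2 * np.pi * np.arange(period) / period)
sigma_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(period) / period)

y = np.zeros(n)
for t in range(1, n):
    s = t % period
    y[t] = phi_true[s] * y[t - 1] + sigma_true[s] * rng.normal()

def negloglik(params):
    phi, log_sigma = params[:period], params[period:]
    sigma = np.exp(log_sigma)
    t = np.arange(1, n)
    s = t % period
    resid = y[1:] - phi[s] * y[:-1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma[s] ** 2) + (resid / sigma[s]) ** 2)

x0 = np.zeros(2 * period)
fit = minimize(negloglik, x0, method="L-BFGS-B")
print("estimated seasonal AR coefficients:", np.round(fit.x[:period], 2))
```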

  8. Applications of the Likelihood Theory in Finance: Modelling and Pricing

    CERN Document Server

    Janssen, Arnold

    2012-01-01

    This paper discusses the connection between mathematical finance and statistical modelling which turns out to be more than a formal mathematical correspondence. We would like to figure out how common results and notions in statistics and their meaning can be translated to the world of mathematical finance and vice versa. A lot of similarities can be expressed in terms of LeCam's theory for statistical experiments which is the theory of the behaviour of likelihood processes. For positive prices the arbitrage free financial assets fit into filtered experiments. It is shown that they are given by filtered likelihood ratio processes. From the statistical point of view, martingale measures, completeness and pricing formulas are revisited. The pricing formulas for various options are connected with the power functions of tests. For instance the Black-Scholes price of a European option has an interpretation as Bayes risk of a Neyman Pearson test. Under contiguity the convergence of financial experiments and option prices ...

  9. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
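    A sketch of the marginal maximum likelihood computation, using a Rasch (one-parameter logistic) model instead of the generalized partial credit model to keep the code short: the latent ability is integrated out with Gauss-Hermite quadrature and the item difficulties are estimated by maximizing the marginal likelihood. All data are simulated.

```python
# Marginal maximum likelihood for a Rasch model via Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_persons, n_items = 500, 5
theta = rng.normal(size=n_persons)
b_true = np.linspace(-1, 1, n_items)
Y = (rng.uniform(size=(n_persons, n_items)) <
     1 / (1 + np.exp(-(theta[:, None] - b_true[None, :])))).astype(float)

nodes, weights = hermgauss(21)
abil = np.sqrt(2) * nodes                 # quadrature points for a N(0, 1) ability
w = weights / np.sqrt(np.pi)

def neg_marginal_loglik(b):
    p = 1 / (1 + np.exp(-(abil[:, None] - b[None, :])))      # (quad, items)
    # Likelihood of each response pattern at each quadrature point
    like = np.prod(np.where(Y[:, None, :] == 1, p[None], 1 - p[None]), axis=2)
    return -np.sum(np.log(like @ w))

fit = minimize(neg_marginal_loglik, np.zeros(n_items), method="BFGS")
print("estimated item difficulties:", np.round(fit.x, 2))
```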

  10. Adaptive quasi-likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    CHEN Xia; CHEN Xiru

    2005-01-01

    This paper gives a thorough theoretical treatment of the adaptive quasi-likelihood estimate of the parameters in generalized linear models. The unknown covariance matrix of the response variable is estimated from the sample. It is shown that the adaptive estimator defined in this paper is asymptotically most efficient in the sense that it is asymptotically normal and the covariance matrix of its limit distribution coincides with that of the quasi-likelihood estimator for the case where the covariance matrix of the response variable is completely known.

  11. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model

  12. Empirical likelihood ratio tests for multivariate regression models

    Institute of Scientific and Technical Information of China (English)

    WU Jianhong; ZHU Lixing

    2007-01-01

    This paper proposes some diagnostic tools for checking the adequacy of multivariate regression models, including classical regression and time series autoregression. In statistical inference, the empirical likelihood ratio method is well known to be a powerful tool for constructing tests and confidence regions. For model checking, however, the naive empirical likelihood (EL) based tests do not enjoy the Wilks phenomenon. Hence, we make use of bias correction to construct the EL-based score tests and derive a nonparametric version of Wilks' theorem. Moreover, by combining the advantages of both the EL and the score test method, the EL-based score tests share many desirable features: they are self-scale invariant and can detect alternatives that converge to the null at rate n^{-1/2}, the possibly fastest rate for lack-of-fit testing; they involve weight functions, which provide us with the flexibility to choose scores for improving power performance, especially under directional alternatives. Furthermore, when the alternatives are not directional, we construct asymptotically distribution-free maximin tests for a large class of possible alternatives. A simulation study is carried out and an application to a real dataset is analyzed.

  13. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to result in a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
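    A sketch of the two-stage scheme: per-genre Gaussian mixture models are fitted to each feature type at the first stage, their log-likelihoods are summed (fused) at the second stage, and the genre with the highest fused score is chosen. The random "timbre" and "rhythm" arrays are placeholders for real audio features, and the GMM settings are illustrative.

```python
# Likelihood fusion across feature models for a two-genre toy example.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
genres = ["rock", "jazz"]
train = {("rock", "timbre"): rng.normal(0, 1, (200, 8)),
         ("rock", "rhythm"): rng.normal(1, 1, (200, 4)),
         ("jazz", "timbre"): rng.normal(2, 1, (200, 8)),
         ("jazz", "rhythm"): rng.normal(-1, 1, (200, 4))}

# Stage 1: one GMM per (genre, feature type)
models = {key: GaussianMixture(n_components=3, random_state=0).fit(X)
          for key, X in train.items()}

def classify(features):
    """features: dict mapping feature type -> (n_frames, dim) array."""
    fused = {}
    for g in genres:
        # soft scores: mean per-frame log-likelihood under each feature model
        fused[g] = sum(models[(g, ftype)].score_samples(X).mean()
                       for ftype, X in features.items())
    return max(fused, key=fused.get)       # Stage 2: hard decision

test = {"timbre": rng.normal(2, 1, (50, 8)), "rhythm": rng.normal(-1, 1, (50, 4))}
print(classify(test))   # expected to print "jazz" for this synthetic clip
```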

  14. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and can be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which consider one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The analysis showed that the best likelihood model was the one which includes information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
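    A heavily simplified, one-dimensional sketch of the likelihood ratio LR = p(E|H1)/p(E|H2): the numerator uses a normal within-object model around the control measurement, the denominator a kernel density estimate of a background population. The numbers are invented and the paper's multivariate two-level model (kernel density between objects, normal within objects) is not reproduced here.

```python
# One-dimensional likelihood ratio sketch for glass evidence evaluation.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(6)
background = rng.normal(1.52, 0.004, size=300)   # e.g. refractive indices from a survey
within_sd = 0.0005                               # assumed within-object variability

control = 1.5180                                 # measurement from the known source
recovered = 1.5184                               # measurement from the questioned fragment

p_H1 = norm.pdf(recovered, loc=control, scale=within_sd)   # same-source density
p_H2 = gaussian_kde(background)(recovered)[0]              # different-source density
print("LR =", p_H1 / p_H2)
```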

  15. Approximate Maximum Likelihood Commercial Bank Loan Management Model

    Directory of Open Access Journals (Sweden)

    Godwin N.O. Asemota

    2009-01-01

    Full Text Available Problem statement: Loan management is a very complex and yet vitally important aspect of any commercial bank's operations. The balance sheet position shows the main sources of funds as deposits and shareholders' contributions. Approach: In order to operate profitably, remain solvent and consequently grow, a commercial bank needs to properly manage its excess cash to yield returns in the form of loans. Results: The above are achieved if the bank can honor depositors' withdrawals at all times and also grant loans to credible borrowers. This is so because loans are the main portfolios of a commercial bank that yield the highest rate of returns. Commercial banks and the environment in which they operate are dynamic. So, any attempt to model their behavior without including some elements of uncertainty would be less than desirable. The inclusion of an uncertainty factor is now possible with the advent of stochastic optimal control theories. Thus, an approximate maximum likelihood algorithm with a variable forgetting factor was used to model the loan management behavior of a commercial bank in this study. Conclusion: The results showed that the uncertainty factor employed in the stochastic modeling enables us to adaptively control loan demand as well as fluctuating cash balances in the bank. However, this loan model can also visually aid commercial bank managers' planning decisions by allowing them to competently determine excess cash and invest this excess cash as loans to earn more assets without jeopardizing public confidence.

  16. Efficient scatter modelling for incorporation in maximum likelihood reconstruction

    International Nuclear Information System (INIS)

    Definition of a simplified model of scatter which can be incorporated in maximum likelihood reconstruction for single-photon emission tomography (SPET) continues to be appealing; however, implementation must be efficient for it to be clinically applicable. In this paper an efficient algorithm for scatter estimation is described in which the spatial scatter distribution is implemented as a spatially invariant convolution for points of constant depth in tissue. The scatter estimate is weighted by a space-dependent build-up factor based on the measured attenuation in tissue. Monte Carlo simulation of a realistic thorax phantom was used to validate this approach. Further efficiency was introduced by estimating scatter once after a small number of iterations using the ordered subsets expectation maximisation (OSEM) reconstruction algorithm. The scatter estimate was incorporated as a constant term in subsequent iterations rather than modifying the scatter estimate each iteration. Monte Carlo simulation was used to demonstrate that the scatter estimate does not change significantly provided at least two iterations of OSEM reconstruction, with subset size 8, are used. Complete scatter-corrected reconstruction of 64 projections of 40 x 128 pixels was achieved in 38 min using a Sun Sparc20 computer. (orig.)

  17. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
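    The "single extra dimension" can be sketched directly: along each trajectory of the ODE the log-density satisfies d(log rho)/dt = -div f, so one augmented ODE returns both the state and the transported density value. Logistic growth is used as a stand-in model; the rate, capacity and initial density are illustrative.

```python
# Transporting a probability density along ODE characteristics (Liouville equation).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

r, K = 1.0, 10.0
f = lambda x: r * x * (1 - x / K)          # vector field of the ODE
divf = lambda x: r * (1 - 2 * x / K)       # its divergence (df/dx)

def augmented(t, z):
    x, logrho = z
    return [f(x), -divf(x)]                # state and log-density equations

x0 = 2.0
logrho0 = norm.logpdf(x0, loc=2.0, scale=0.5)   # initial uncertainty in x(0)
sol = solve_ivp(augmented, (0.0, 3.0), [x0, logrho0], dense_output=True)
xT, logrhoT = sol.y[:, -1]
print(f"state at t=3: {xT:.3f}, transported density value: {np.exp(logrhoT):.4f}")
```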

  18. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated, the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, by the empirical likelihood ratio functions, we obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function and prove their asymptotic normality. Numerical studies are conducted to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  19. Empirical likelihood-based inference in a partially linear model for longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A partially linear model with longitudinal data is considered; empirical likelihood inference for the regression coefficients and the baseline function is investigated, the empirical log-likelihood ratios are proven to be asymptotically chi-squared, and the corresponding confidence regions for the parameters of interest are then constructed. Also, by the empirical likelihood ratio functions, we obtain the maximum empirical likelihood estimates of the regression coefficients and the baseline function and prove their asymptotic normality. Numerical studies are conducted to compare the performance of the empirical likelihood and the normal approximation-based method, and a real example is analysed.

  20. How to Maximize the Likelihood Function for a DSGE Model

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown...

  1. Fast inference in generalized linear models via expected log-likelihoods.

    Science.gov (United States)

    Ramirez, Alexandro D; Paninski, Liam

    2014-04-01

    Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting "expected log-likelihood" can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
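    A sketch of the expected log-likelihood idea for a Poisson GLM with exponential link: the data-dependent sum of exp(x_i'theta) is replaced by N times its expectation over the covariate distribution, which is available in closed form for Gaussian covariates. Dimensions, covariate distribution and true weights are illustrative.

```python
# Exact vs. expected log-likelihood for a Poisson GLM with Gaussian covariates.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
N, d = 5000, 10
mu, Sigma = np.zeros(d), 0.1 * np.eye(d)          # known covariate distribution
X = rng.multivariate_normal(mu, Sigma, size=N)
theta_true = rng.normal(0, 0.5, d)
y = rng.poisson(np.exp(X @ theta_true))

def neg_exact_ll(th):
    return -(y @ (X @ th) - np.sum(np.exp(X @ th)))

def neg_expected_ll(th):
    # E[exp(x'th)] = exp(mu'th + th'Sigma th / 2) for Gaussian covariates
    return -(y @ (X @ th) - N * np.exp(mu @ th + 0.5 * th @ Sigma @ th))

mle = minimize(neg_exact_ll, np.zeros(d), method="BFGS").x
mele = minimize(neg_expected_ll, np.zeros(d), method="BFGS").x
print("max |MLE - expected-LL estimate|:", np.abs(mle - mele).max())
```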

  2. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

    Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
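    The simplest case mentioned above, a confidence region for a univariate mean under IID sampling, can be sketched in a few lines: for each candidate mean the empirical likelihood ratio is obtained by solving a one-dimensional dual problem for the Lagrange multiplier, and the 95% region collects the candidates whose -2 log ratio falls below the chi-squared(1) quantile. The data are simulated.

```python
# Empirical likelihood confidence interval for a univariate mean.
import numpy as np
from scipy.optimize import brentq

def neg2_log_el_ratio(x, mu):
    """-2 log R(mu) for the mean of sample x; asymptotically chi-squared(1)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu outside the convex hull of the data
    # Solve sum z_i / (1 + lam * z_i) = 0 for lam in the admissible interval
    eps = 1e-8
    lo = -1.0 / z.max() + eps
    hi = -1.0 / z.min() - eps
    score = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = brentq(score, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(8)
x = rng.exponential(scale=2.0, size=100)
# 95% EL confidence interval: all mu with -2 log R(mu) <= 3.84
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
inside = [m for m in grid if neg2_log_el_ratio(x, m) <= 3.84]
print("EL 95% CI for the mean: [%.3f, %.3f]" % (min(inside), max(inside)))
```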

  3. Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model

    DEFF Research Database (Denmark)

    Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard

    We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally...

  4. Modeling Human Multimodal Perception and Control Using Genetic Maximum Likelihood Estimation

    NARCIS (Netherlands)

    Zaal, P.M.T.; Pool, D.M.; Chu, Q.P.; Van Paassen, M.M.; Mulder, M.; Mulder, J.A.

    2009-01-01

    This paper presents a new method for estimating the parameters of multi-channel pilot models that is based on maximum likelihood estimation. To cope with the inherent nonlinearity of this optimization problem, the gradient-based Gauss-Newton algorithm commonly used to optimize the likelihood functio

  5. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma...

  6. Choosing the observational likelihood in state-space stock assessment models

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Nielsen, Anders; Thygesen, Uffe Høgsbro

    2016-01-01

    Data used in stock assessment models result from combinations of biological, ecological, fishery, and sampling processes. Since different types of errors propagate through these processes it can be difficult to identify a particular family of distributions for modelling errors on observations a priori. By implementing several observational likelihoods, modelling both numbers- and proportions-at-age, in an age based state-space stock assessment model, we compare the model fit for each choice of likelihood along with the implications for spawning stock biomass and average fishing mortality. We propose using AIC intervals based on fitting the full observational model for comparing different observational likelihoods. Using data from four stocks, we show that the model fit is improved by modelling the correlation of observations within years. However, the best choice of observational likelihood...

  7. Approximate Likelihood

    CERN Document Server

    CERN. Geneva

    2015-01-01

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated to systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...

  8. Recent developments in maximum likelihood estimation of MTMM models for categorical data

    Directory of Open Access Journals (Sweden)

    Minjeong Jeon

    2014-04-01

    Full Text Available Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: Variational maximization-maximization, Alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described and its applicability for MTMM models with categorical data is discussed. An illustration is provided using an empirical example.

  9. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  10. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion has been proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied and ICL is proved to be an approximation of one of these criteria. We contrast these results with the current leading point of view about ICL, namely that it would not be consistent. Moreover these results give insights into the class notion underlying ICL and feed a reflection on the notion of a class in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  11. Empirical Likelihood for Mixed-effects Error-in-variables Model

    Institute of Scientific and Technical Information of China (English)

    Qiu-hua Chen; Ping-shou Zhong; Heng-jian Cui

    2009-01-01

    This paper mainly introduces the method of empirical likelihood and its applications to two different models. We discuss empirical likelihood inference on the fixed-effect parameters in mixed-effects models with errors in variables. We first consider a linear mixed-effects model with measurement errors in both fixed and random effects. We construct the empirical likelihood confidence regions for the fixed-effects parameters and the mean parameters of the random effects. The limiting distribution of the empirical log-likelihood ratio at the true parameter is χ²_{p+q}, where p and q are the dimensions of the fixed and random effects, respectively. Then we discuss empirical likelihood inference in a semi-linear error-in-variables mixed-effects model. Under certain conditions, it is shown that the empirical log-likelihood ratio at the true parameter also converges to χ²_{p+q}. Simulations illustrate that the proposed confidence region has a coverage probability closer to the nominal level than the normal approximation-based confidence region.

  12. An Adjusted profile likelihood for non-stationary panel data models with fixed effects

    OpenAIRE

    Dhaene, Geert; Jochmans, Koen

    2011-01-01

    We calculate the bias of the profile score for the autoregressive parameters p and covariate slopes in the linear model for N x T panel data with p lags of the dependent variable, exogenous covariates, fixed effects, and unrestricted initial observations. The bias is a vector of multivariate polynomials in p with coefficients that depend only on T. We center the profile score and, on integration, obtain an adjusted profile likelihood. When p = 1, the adjusted profile likelihood coincides wi...

  13. On Compound Poisson Processes Arising in Change-Point Type Statistical Models as Limiting Likelihood Ratios

    CERN Document Server

    Dachian, Serguei

    2010-01-01

    Different change-point type models encountered in statistical inference for stochastic processes give rise to different limiting likelihood ratio processes. In a previous paper of one of the authors it was established that one of these likelihood ratios, which is an exponential functional of a two-sided Poisson process driven by some parameter, can be approximated (for sufficiently small values of the parameter) by another one, which is an exponential functional of a two-sided Brownian motion. In this paper we consider yet another likelihood ratio, which is the exponent of a two-sided compound Poisson process driven by some parameter. We establish that, similarly to the Poisson type one, the compound Poisson type likelihood ratio can be approximated by the Brownian type one for sufficiently small values of the parameter. We equally discuss the asymptotics for large values of the parameter and illustrate the results by numerical simulations.

  14. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  15. Likelihood-free methods for tumor progression modeling

    OpenAIRE

    Herold, Daniela

    2014-01-01

    Since it became clear that the number of newly diagnosed cancer cases and the number of deaths from cancer worldwide increase from year to year, a great effort has been put into the field of cancer research. One major point of interest is how and by means of which intermediate steps the development of cancer from an initially benign mass of cells into a large malignant and deadly tumor takes place. In order to shed light onto the details of this process, many models have been developed in th...

  16. Empirical Likelihood Inference for AR(p) Model%AR(p)模型的经验似然推断

    Institute of Scientific and Technical Information of China (English)

    陈燕红; 赵世舜; 宋立新

    2008-01-01

    In this article we study empirical likelihood inference for the AR(p) model. We propose the moment restrictions, by which we get the empirical likelihood estimator of the model parameter, and we also propose an empirical log-likelihood ratio based on this estimator. Our result shows that the EL estimator is asymptotically normal, and the empirical log-likelihood ratio is proved to be asymptotically standard chi-squared.

  17. Empirical Likelihood Inference for MA(q) Model%MA(q)模型的经验似然推断

    Institute of Scientific and Technical Information of China (English)

    陈燕红; 宋立新

    2009-01-01

    In this article we study empirical likelihood inference for the MA(q) model. We propose the moment restrictions, by which we get the empirical likelihood estimator of the model parameter, and we also propose an empirical log-likelihood ratio based on this estimator. Our result shows that the EL estimator is asymptotically normal, and the empirical log-likelihood ratio is proved to have an asymptotic standard chi-squared distribution.

  18. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  19. Using Data to Tune Nearshore Dynamics Models: A Bayesian Approach with Parametric Likelihood

    CERN Document Server

    Balci, Nusret; Venkataramani, Shankar C

    2013-01-01

    We propose a modification of a maximum likelihood procedure for tuning parameter values in models, based upon the comparison of their output to field data. Our methodology, which uses polynomial approximations of the sample space to increase computational efficiency, differs from similar Bayesian estimation frameworks in its use of an alternative likelihood distribution and is shown to better address problems in which covariance information is lacking than its more conventional counterpart. Lack of covariance information is a frequent challenge in large-scale geophysical estimation, and this is the case in the geophysical problem considered here. We use a nearshore model for longshore currents and observational data of the same to show the contrast between the two maximum likelihood methodologies. Beyond a methodological comparison, this study gives estimates of parameter values for the bottom drag and surface forcing that make the particular model most consistent with data; furthermore, we also derive sensitivit...

  20. Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.

    Science.gov (United States)

    Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens

    2016-01-01

    In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood. PMID:27588423
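    A sketch of the profile likelihood computation that the reduction strategy builds on: one parameter is scanned over a grid while the remaining parameters are re-optimised at each grid point, and the 95% profile region is read off against the chi-squared(1) threshold. The exponential-decay model, noise level and grid are illustrative and unrelated to the paper's applications.

```python
# Profile likelihood scan for the decay rate of a two-parameter model.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(9)
t = np.linspace(0, 5, 25)
y = 3.0 * np.exp(-0.8 * t) + rng.normal(0, 0.1, t.size)   # simulated data

def neg2ll(params):
    amp, rate = params
    return np.sum((y - amp * np.exp(-rate * t)) ** 2) / 0.1 ** 2

best = minimize(neg2ll, x0=[1.0, 1.0], method="Nelder-Mead")

def profile(rate):
    """Re-optimise the amplitude for a fixed decay rate."""
    inner = minimize_scalar(lambda a: neg2ll([a, rate]), bounds=(0.0, 10.0),
                            method="bounded")
    return inner.fun

rates = np.linspace(0.5, 1.1, 61)
prof = np.array([profile(r) for r in rates])
inside = rates[prof - best.fun <= 3.84]        # points within the 95% profile region
print("profile-likelihood 95% CI for the decay rate: "
      f"[{inside.min():.3f}, {inside.max():.3f}]")
```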

  2. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  3. Likelihood analysis of spatial capture-recapture models for stratified or class structured populations

    Science.gov (United States)

    Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.

    2015-01-01

    We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
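
    The core device for handling missing class membership, summing class-specific likelihood contributions weighted by the class probability, can be sketched in a few lines. The binomial encounter model and parameter names below are hypothetical stand-ins, not the authors' spatial capture-recapture likelihood.

```python
# Sketch of marginalizing a likelihood over missing class membership (e.g. sex).
# The binomial encounter model and parameter names are illustrative stand-ins,
# not the authors' spatial capture-recapture likelihood.
import numpy as np

def indiv_log_lik(captures, n_occasions, p):
    # Binomial encounter model, combinatorial constant omitted.
    return captures * np.log(p) + (n_occasions - captures) * np.log(1 - p)

def log_lik(theta, data, n_occasions=10):
    # theta holds logits of (p_female, p_male, sex ratio psi).
    p_f, p_m, psi = 1.0 / (1.0 + np.exp(-np.asarray(theta)))
    total = 0.0
    for captures, sex in data:                      # sex is 'F', 'M' or None (missing)
        lf = indiv_log_lik(captures, n_occasions, p_f)
        lm = indiv_log_lik(captures, n_occasions, p_m)
        if sex == 'F':
            total += np.log(psi) + lf
        elif sex == 'M':
            total += np.log(1 - psi) + lm
        else:                                       # marginalize over the unknown sex
            total += np.logaddexp(np.log(psi) + lf, np.log(1 - psi) + lm)
    return total

data = [(3, 'F'), (7, 'M'), (5, None), (2, None)]   # (detections, sex) per individual
print(log_lik([0.0, 0.5, 0.2], data))
```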

  4. On penalized likelihood estimation for a non-proportional hazards regression model

    OpenAIRE

    Devarajan, Karthik; Ebrahimi, Nader

    2013-01-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  5. On penalized likelihood estimation for a non-proportional hazards regression model.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader

    2013-07-01

    In this paper, a semi-parametric generalization of the Cox model that permits crossing hazard curves is described. A theoretical framework for estimation in this model is developed based on penalized likelihood methods. It is shown that the optimal solution to the baseline hazard, baseline cumulative hazard and their ratio are hyperbolic splines with knots at the distinct failure times.

  6. Inferring fixed effects in a mixed linear model from an integrated likelihood

    DEFF Research Database (Denmark)

    Gianola, Daniel; Sorensen, Daniel

    2008-01-01

    A new method for likelihood-based inference of fixed effects in mixed linear models, with variance components treated as nuisance parameters, is presented. The method uses uniform-integration of the likelihood; the implementation employs the expectation-maximization (EM) algorithm for elimination...... of all nuisances, viewing random effects and variance components as missing data. In a simulation of a grazing trial, the procedure was compared with four widely used estimators of fixed effects in mixed models, and found to be competitive. An analysis of body weight in freshwater crayfish was conducted...

  7. Maximum likelihood Bayesian model averaging and its predictive analysis for groundwater reactive transport models

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Curtis, Gary P.

    2015-10-01

    While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
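
    The model-weighting step at the heart of MLBMA can be illustrated with a small sketch that converts maximized log-likelihoods and parameter counts into posterior model probabilities; BIC is used here as a stand-in for the KIC-type criterion typically used with MLBMA, and all numbers are hypothetical.

```python
# Sketch of the model-weighting step behind maximum likelihood Bayesian model
# averaging: posterior model probabilities computed from an information
# criterion (BIC here, standing in for KIC) and prior model probabilities.
import numpy as np

def mlbma_weights(max_log_lik, n_params, n_obs, priors=None):
    max_log_lik = np.asarray(max_log_lik, dtype=float)
    k = np.asarray(n_params, dtype=float)
    bic = -2.0 * max_log_lik + k * np.log(n_obs)
    priors = np.ones_like(bic) / bic.size if priors is None else np.asarray(priors, float)
    delta = bic - bic.min()
    w = priors * np.exp(-0.5 * delta)
    return w / w.sum()

# Three hypothetical alternative models fitted to the same 120 observations.
weights = mlbma_weights(max_log_lik=[-210.4, -208.9, -215.0], n_params=[4, 6, 3], n_obs=120)
print(weights)   # posterior model probabilities; a BMA prediction is sum_k weights[k] * prediction_k
```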

  8. MEMLET: An Easy-to-Use Tool for Data Fitting and Model Comparison Using Maximum-Likelihood Estimation.

    Science.gov (United States)

    Woody, Michael S; Lewis, John H; Greenberg, Michael J; Goldman, Yale E; Ostap, E Michael

    2016-07-26

    We present MEMLET (MATLAB-enabled maximum-likelihood estimation tool), a simple-to-use and powerful program for utilizing maximum-likelihood estimation (MLE) for parameter estimation from data produced by single-molecule and other biophysical experiments. The program is written in MATLAB and includes a graphical user interface, making it simple to integrate into the existing workflows of many users without requiring programming knowledge. We give a comparison of MLE and other fitting techniques (e.g., histograms and cumulative frequency distributions), showing how MLE often outperforms other fitting methods. The program includes a variety of features. 1) MEMLET fits probability density functions (PDFs) for many common distributions (exponential, multiexponential, Gaussian, etc.), as well as user-specified PDFs without the need for binning. 2) It can take into account experimental limits on the size of the shortest or longest detectable event (i.e., instrument "dead time") when fitting to PDFs. The proper modification of the PDFs occurs automatically in the program and greatly increases the accuracy of fitting the rates and relative amplitudes in multicomponent exponential fits. 3) MEMLET offers model testing (i.e., single-exponential versus double-exponential) using the log-likelihood ratio technique, which shows whether additional fitting parameters are statistically justifiable. 4) Global fitting can be used to fit data sets from multiple experiments to a common model. 5) Confidence intervals can be determined via bootstrapping utilizing parallel computation to increase performance. Easy-to-follow tutorials show how these features can be used. This program packages all of these techniques into a simple-to-use and well-documented interface to increase the accessibility of MLE fitting. PMID:27463130
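
    A minimal Python sketch of the kind of fit MEMLET automates is shown below (MEMLET itself is MATLAB-based): an exponential dwell-time distribution renormalized over the detectable window to account for instrument dead time, fitted by MLE, followed by an approximate log-likelihood-ratio comparison against a double-exponential alternative. All names and numbers are illustrative.

```python
# Python sketch of a dead-time-aware dwell-time fit of the kind MEMLET automates
# (MEMLET itself is MATLAB-based). The exponential PDF is renormalized over the
# detectable window, fitted by MLE, and compared to a double-exponential
# alternative with an approximate log-likelihood-ratio test.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

t_min, t_max = 0.01, 10.0                 # shortest/longest detectable event (assumed)
rng = np.random.default_rng(0)
dwells = rng.exponential(0.5, 2000)
dwells = dwells[(dwells > t_min) & (dwells < t_max)]   # what the instrument records

def nll_single(params):
    k, = params
    norm = np.exp(-k * t_min) - np.exp(-k * t_max)      # renormalize on the window
    return -np.sum(np.log(k) - k * dwells - np.log(norm))

def nll_double(params):
    a, k1, k2 = params
    norm = a * (np.exp(-k1 * t_min) - np.exp(-k1 * t_max)) \
         + (1 - a) * (np.exp(-k2 * t_min) - np.exp(-k2 * t_max))
    pdf = a * k1 * np.exp(-k1 * dwells) + (1 - a) * k2 * np.exp(-k2 * dwells)
    return -np.sum(np.log(pdf / norm))

fit1 = minimize(nll_single, x0=[1.0], bounds=[(1e-6, None)])
fit2 = minimize(nll_double, x0=[0.5, 1.0, 5.0],
                bounds=[(1e-3, 1 - 1e-3), (1e-6, None), (1e-6, None)])
lr = 2.0 * (fit1.fun - fit2.fun)          # log-likelihood ratio statistic
print("rate (single):", fit1.x, "approx. p for the extra component:", chi2.sf(lr, df=2))
```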

  9. An Empirical Likelihood Method in a Partially Linear Single-index Model with Right Censored Data

    Institute of Scientific and Technical Information of China (English)

    Yi Ping YANG; Liu Gen XUE; Wei Hu CHENG

    2012-01-01

    Empirical-likelihood-based inference for the parameters in a partially linear single-index model with randomly censored data is investigated. We introduce an estimated empirical likelihood for the parameters using a synthetic data approach and show that its limiting distribution is a mixture of central chi-squared distributions. To overcome this difficulty we propose an adjusted empirical likelihood that attains the standard χ² limit. Furthermore, since the index has norm 1, we use this constraint to reduce the dimension of the parameters, which increases the accuracy of the confidence regions. A simulation study is carried out to compare its finite-sample properties with the existing method. An application to a real data set is illustrated.

  10. Empirical Likelihood Based Variable Selection for Varying Coefficient Partially Linear Models with Censored Data

    Institute of Scientific and Technical Information of China (English)

    Peixin ZHAO

    2013-01-01

    In this paper, we consider variable selection for the parametric components of varying coefficient partially linear models with censored data. By constructing a penalized auxiliary vector, we propose an empirical likelihood based variable selection procedure and show that it is consistent and satisfies the sparsity property. Simulation studies show that the proposed variable selection method is workable.

  11. On the Relationships between Jeffreys Modal and Weighted Likelihood Estimation of Ability under Logistic IRT Models

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2012-01-01

    This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…

  12. Rate of strong consistency of quasi maximum likelihood estimate in generalized linear models

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  13. Maximum Likelihood Estimation for an Innovation Diffusion Model of New Product Acceptance

    OpenAIRE

    David C Schmittlein; Vijay Mahajan

    1982-01-01

    A maximum likelihood approach is proposed for estimating an innovation diffusion model of new product acceptance originally considered by Bass (Bass, F. M. 1969. A new product growth model for consumer durables. (January) 215–227.). The suggested approach allows: (1) computation of approximate standard errors for the diffusion model parameters, and (2) determination of the required sample size for forecasting the adoption level to any desired degree of accuracy. Using histograms from eight di...
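
    A hedged sketch of maximum likelihood fitting of the Bass model from individual adoption times is given below; it uses the standard Bass density rather than the grouped-data formulation of the paper, and all parameter values are invented.

```python
# Sketch of maximum likelihood fitting of the Bass diffusion model from
# individual adoption times, using the standard Bass density. Illustrative
# only; the paper works with grouped adoption data.
import numpy as np
from scipy.optimize import minimize

def bass_logpdf(t, p, q):
    s = p + q
    e = np.exp(-s * t)
    # f(t) = (s^2 / p) * exp(-s t) / (1 + (q/p) exp(-s t))^2
    return 2.0 * np.log(s) - np.log(p) - s * t - 2.0 * np.log(1.0 + (q / p) * e)

def neg_log_lik(theta, t):
    p, q = np.exp(theta)                  # optimize on the log scale to keep p, q > 0
    return -np.sum(bass_logpdf(t, p, q))

# Synthetic adoption times drawn by inverting the Bass CDF with p = 0.03, q = 0.4.
rng = np.random.default_rng(42)
u = rng.uniform(size=500)
p0, q0 = 0.03, 0.4
t_obs = -np.log((1.0 - u) / (1.0 + (q0 / p0) * u)) / (p0 + q0)

fit = minimize(neg_log_lik, x0=np.log([0.01, 0.1]), args=(t_obs,))
print("estimated p, q:", np.exp(fit.x))
```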

  14. Different Manhattan project: automatic statistical model generation

    Science.gov (United States)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  15. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  16. Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Jesser J. Marulanda-Durango

    2012-12-01

    In this paper, we present a methodology for estimating the parameters of a model for an electrical arc furnace by using maximum likelihood estimation, one of the most widely employed methods for parameter estimation in practical settings. The model for the electrical arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, to solve the set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electrical arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. The results show a maximum error of 5% in the current's root mean square value.

  17. The empirical likelihood goodness-of-fit test for regression model

    Institute of Scientific and Technical Information of China (English)

    Li-xing ZHU; Yong-song QIN; Wang-li XU

    2007-01-01

    Goodness-of-fit testing for regression models has received much attention in the literature. In this paper, empirical likelihood (EL) goodness-of-fit tests for regression models, including classical parametric and autoregressive (AR) time series models, are proposed. Unlike the existing locally smoothing and globally smoothing methodologies, the new method has the advantage that the tests are self-scale invariant and that the asymptotic null distribution is chi-squared. Simulations are carried out to illustrate the methodology.

  18. Operational risk models and maximum likelihood estimation error for small sample-sizes

    OpenAIRE

    Paul Larsen

    2015-01-01

    Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...

  19. Using automatic programming for simulating reliability network models

    Science.gov (United States)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  20. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.
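
    The Newton-Raphson iteration for likelihood maximization can be illustrated on a much simpler problem than the NDMMF; the sketch below estimates a Poisson rate, where the score and Hessian are available in closed form, and is not the report's derivation.

```python
# Generic Newton-Raphson iteration for a maximum-likelihood estimate, shown on
# a simple Poisson-rate problem rather than the actual NDMMF equations, which
# the report derives in full.
import numpy as np

y = np.array([3, 0, 2, 4, 1, 2, 3, 5])     # hypothetical counts

def score(lam):        # first derivative of the Poisson log-likelihood
    return y.sum() / lam - y.size

def hessian(lam):      # second derivative
    return -y.sum() / lam ** 2

lam = 1.0              # starting value
for _ in range(25):
    step = score(lam) / hessian(lam)
    lam -= step        # Newton-Raphson update
    if abs(step) < 1e-10:
        break

print(lam, y.mean())   # converges to the sample mean, the known Poisson MLE
```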

  1. ABC of SV: Limited Information Likelihood Inference in Stochastic Volatility Jump-Diffusion Models

    DEFF Research Database (Denmark)

    Creel, Michael; Kristensen, Dennis

    We develop novel methods for estimation and filtering of continuous-time models with stochastic volatility and jumps using so-called Approximate Bayesian Computation which build likelihoods based on limited information. The proposed estimators and filters are computationally attractive relative...... to standard likelihood-based versions since they rely on low-dimensional auxiliary statistics and so avoid computation of high-dimensional integrals. Despite their computational simplicity, we find that estimators and filters perform well in practice and lead to precise estimates of model parameters...... and latent variables. We show how the methods can incorporate intra-daily information to improve on the estimation and filtering. In particular, the availability of realized volatility measures help us in learning about parameters and latent states. The method is employed in the estimation of a flexible...

  2. ASYMPTOTIC NORMALITY OF QUASI MAXIMUM LIKELIHOOD ESTIMATE IN GENERALIZED LINEAR MODELS

    Institute of Scientific and Technical Information of China (English)

    YUE LI; CHEN XIRU

    2005-01-01

    For the Generalized Linear Model (GLM), under some conditions, including correct specification of the expectation, it is shown that the Quasi Maximum Likelihood Estimate (QMLE) of the parameter vector is asymptotically normal. It is also shown that the asymptotic covariance matrix of the QMLE reaches its minimum (in the positive-definite sense) when the specification of the covariance matrix is correct.

  3. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    Zhi-hua SUN

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method adapted to treatment effects data is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighting. All the estimators are shown to be asymptotically normal. Our simulation results show that, by using the model information, substantial efficiency gains are obtained, and the resulting estimators are comparable with the existing estimators.

  4. Quasi-likelihood estimation of average treatment effects based on model information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the estimation of average treatment effects is considered when we have model information on the conditional mean and conditional variance of the responses given the covariates. The quasi-likelihood method adapted to treatment effects data is developed to estimate the parameters in the conditional mean and conditional variance models. Based on the model information, we define three estimators by imputation, regression and inverse probability weighting. All the estimators are shown to be asymptotically normal. Our simulation results show that, by using the model information, substantial efficiency gains are obtained, and the resulting estimators are comparable with the existing estimators.

  5. Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials

    Directory of Open Access Journals (Sweden)

    Claus Vogl

    2014-11-01

    In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.

  6. A likelihood reformulation method in non-normal random effects models.

    Science.gov (United States)

    Liu, Lei; Yu, Zhangsheng

    2008-07-20

    In this paper, we propose a practical computational method to obtain the maximum likelihood estimates (MLE) for mixed models with non-normal random effects. By simply multiplying and dividing a standard normal density, we reformulate the likelihood conditional on the non-normal random effects to that conditional on the normal random effects. Gaussian quadrature technique, conveniently implemented in SAS Proc NLMIXED, can then be used to carry out the estimation process. Our method substantially reduces computational time, while yielding similar estimates to the probability integral transformation method (J. Comput. Graphical Stat. 2006; 15:39-57). Furthermore, our method can be applied to more general situations, e.g. finite mixture random effects or correlated random effects from Clayton copula. Simulations and applications are presented to illustrate our method. PMID:18038445
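
    The multiply-and-divide trick can be sketched for a single observation: the marginal likelihood over a non-normal random effect is rewritten with the standard normal density as the quadrature weight, so Gauss-Hermite nodes apply. The two-component normal-mixture random effect and Poisson response below are illustrative choices, not the paper's models.

```python
# Sketch of the multiply-and-divide reformulation: the marginal likelihood over
# a non-normal random effect b is rewritten with the standard normal density as
# the quadrature weight, so Gauss-Hermite nodes apply. The two-component normal
# mixture random effect and Poisson response are illustrative choices only.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm, poisson

def g_mix(b):                          # non-normal random-effect density (finite mixture)
    return 0.6 * norm.pdf(b, -1.0, 0.8) + 0.4 * norm.pdf(b, 1.5, 0.7)

def f_cond(y, b, beta=0.5):            # conditional likelihood: Poisson with log link
    return poisson.pmf(y, mu=np.exp(beta + b))

def marginal_lik(y, n_nodes=40):
    x, w = hermgauss(n_nodes)          # rule for integrals of h(x) * exp(-x^2)
    b = np.sqrt(2.0) * x               # change of variables for the N(0, 1) weight
    h = f_cond(y, b) * g_mix(b) / norm.pdf(b)   # multiply and divide by phi(b)
    return np.sum(w * h) / np.sqrt(np.pi)

# Check against a brute-force Riemann sum of f(y | b) g(b) db.
grid = np.linspace(-8.0, 8.0, 20001)
brute = np.sum(f_cond(3, grid) * g_mix(grid)) * (grid[1] - grid[0])
print(marginal_lik(3), brute)
```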

  7. Empirical likelihood confidence regions of the parameters in a partially linear single-index model

    Institute of Scientific and Technical Information of China (English)

    XUE; Liugen; ZHU; Lixing

    2005-01-01

    In this paper, a partially linear single-index model is investigated, and three empirical log-likelihood ratio statistics for the unknown parameters in the model are suggested. It is proved that the proposed statistics are asymptotically standard chi-square under some suitable conditions, and hence can be used to construct the confidence regions of the parameters. Our methods can also deal with the confidence region construction for the index in the pure single-index model. A simulation study indicates that, in terms of coverage probabilities and average areas of the confidence regions, the proposed methods perform better than the least-squares method.

  8. Approximation of the likelihood ratio statistics in competing risks model under informative random censorship from both sides

    Directory of Open Access Journals (Sweden)

    Abdurahim Akhmedovich Abdushukurov

    2016-03-01

    It is clear that the likelihood ratio statistic plays an important role in the theory of asymptotic estimation and hypothesis testing. The aim of the paper is to investigate the asymptotic properties of the likelihood ratio statistic in a competing risks model with informative random censorship from both sides. We prove an approximation version of local asymptotic normality for the likelihood ratio statistic. The results give an asymptotic representation of the likelihood ratio statistic based on the strong approximation method, from which local asymptotic normality is obtained as a consequence.

  9. Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions

    Directory of Open Access Journals (Sweden)

    Xuedong Chen

    2014-01-01

    This paper deals with likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining them with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN) and skew-t (ST), as well as scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to a Newton-Raphson type method. In order to improve the estimation as well as the confidence estimation and hypothesis tests for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on the profile likelihood for nonlinear regression models with the FSTN distribution, and the confidence interval and hypothesis test are then developed. Furthermore, a real example and a simulation are conducted to demonstrate the usefulness and the superiority of our approach.

  10. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Queuing is the process of moving customers in a specific sequence to a specific service according to customer need. The term scheduling stands for the process of computing a schedule, which may be done by a queue-based scheduler. This paper focuses on bank queuing systems, the different queuing algorithms used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing bank queues that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the test results and the average waiting time. The main innovation of this work is that the average waiting time is explicitly modeled and taken into account, together with the process of switching to the scheduling algorithm that gives the best average waiting time.

  11. Robust maximum likelihood estimation for stochastic state space model with observation outliers

    Science.gov (United States)

    AlMutawa, J.

    2016-08-01

    The objective of this paper is to develop a robust maximum likelihood estimation (MLE) for the stochastic state space model via the expectation maximisation algorithm to cope with observation outliers. Two types of outliers and their influence are studied in this paper: namely, the additive outlier (AO) and the innovative outlier (IO). Due to the sensitivity of the MLE to AO and IO, we propose two techniques for robustifying the MLE: the weighted maximum likelihood estimation (WMLE) and the trimmed maximum likelihood estimation (TMLE). The WMLE is easy to implement with weights estimated from the data; however, it is still sensitive to IO and to patches of AO outliers. On the other hand, the TMLE reduces to a combinatorial optimisation problem and is hard to implement, but it is effective against both types of outliers presented here. To overcome the difficulty, we apply a parallel randomised algorithm that has a low computational cost. A Monte Carlo simulation shows the efficiency of the proposed algorithms. An earlier version of this paper was presented at the 8th Asian Control Conference, Kaohsiung, Taiwan, 2011.

  12. Generalized Empirical Likelihood Inference in Semiparametric Regression Model for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Gao Rong LI; Ping TIAN; Liu Gen XUE

    2008-01-01

    In this paper, we consider the semiparametric regression model for longitudinal data. Due to the correlation within groups, a generalized empirical log-likelihood ratio statistic for the unknown parameters in the model is suggested by introducing the working covariance matrix. It is proved that the proposed statistic is asymptotically standard chi-squared under some suitable conditions, and hence it can be used to construct the confidence regions of the parameters. A simulation study is conducted to compare the proposed method with the generalized least squares method in terms of coverage accuracy and average lengths of the confidence intervals.

  13. A QUASI-LIKELIHOOD APPROACH TO PARAMETER ESTIMATION FOR SIMULATABLE STATISTICAL MODELS

    Directory of Open Access Journals (Sweden)

    Markus Baaske

    2014-05-01

    This paper introduces a parameter estimation method for a general class of statistical models. The method relies exclusively on the possibility of conducting simulations, which are used to construct interpolation-based metamodels of informative empirical characteristics, given some subjectively chosen correlation structure of the underlying spatial random process. In the absence of likelihood functions for such statistical models, which is often the case in stochastic geometric modelling, the idea is to follow a quasi-likelihood (QL) approach and construct an optimal estimating function surrogate based on a set of interpolated summary statistics. By solving these estimating equations, one can account for both the random errors due to simulations and the uncertainty about the metamodels. Thus, putting the QL approach to parameter estimation into a stochastic simulation setting, the proposed method essentially consists of finding roots of a sequence of approximating quasi-score functions. As a simple demonstrating example, the proposed method is applied to a parameter estimation problem for a planar Boolean model with discs. Here, the quasi-score function has a half-analytical, numerically tractable representation and allows for a comparison of the model parameter estimates found by the simulation-based method with those obtained from solving the exact quasi-score equations.

  14. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
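
    The model-averaging step can be sketched with Akaike weights; the log-likelihoods, parameter counts and per-model estimates below are hypothetical, and the clustering models themselves are not reproduced.

```python
# Sketch of model averaging with Akaike weights, the kind of weighting used to
# fold model uncertainty into a per-site clustering profile. Log-likelihoods,
# parameter counts and per-model estimates are hypothetical.
import numpy as np

def akaike_weights(log_liks, n_params, n_obs=None):
    log_liks = np.asarray(log_liks, float)
    k = np.asarray(n_params, float)
    aic = -2.0 * log_liks + 2.0 * k
    if n_obs is not None:                           # small-sample correction (AICc)
        aic += 2.0 * k * (k + 1) / (n_obs - k - 1)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

log_liks = [-320.2, -318.7, -318.1]                 # hypothetical candidate models
n_params = [2, 4, 6]
estimates = np.array([0.12, 0.19, 0.22])            # per-model estimate of the clustering level
w = akaike_weights(log_liks, n_params, n_obs=150)
print("weights:", np.round(w, 3), "model-averaged estimate:", float(w @ estimates))
```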

  15. Effect of formal and informal likelihood functions on uncertainty assessment in a single event rainfall-runoff model

    Science.gov (United States)

    Nourali, Mahrouz; Ghahraman, Bijan; Pourreza-Bilondi, Mohsen; Davary, Kamran

    2016-09-01

    In the present study, DREAM(ZS), Differential Evolution Adaptive Metropolis combined with both formal and informal likelihood functions, is used to investigate the uncertainty of parameters of the HEC-HMS model in the Tamar watershed, Golestan province, Iran. In order to assess the uncertainty of the 24 parameters used in HMS, three flood events were used to calibrate and one flood event was used to validate the posterior distributions. Moreover, the performance of seven different likelihood functions (L1-L7) was assessed by means of the DREAM(ZS) approach. Four likelihood functions, L1-L4 (Nash-Sutcliffe (NS) efficiency, normalized absolute error (NAE), index of agreement (IOA), and Chiew-McMahon efficiency (CM)), are considered informal, whereas the remaining three (L5-L7) are formal. L5 is based on the relationship between traditional least squares fitting and Bayesian inference, and L6 is a heteroscedastic maximum likelihood error (HMLE) estimator. Finally, in likelihood function L7, serial dependence of residual errors is accounted for using a first-order autoregressive (AR) model of the residuals. According to the results, the sensitivities of the parameters strongly depend on the likelihood function and vary across likelihood functions. Most of the parameters were better defined by the formal likelihood functions L5 and L7 and showed a high sensitivity to model performance. Posterior cumulative distributions corresponding to the informal likelihood functions L1, L2, L3, L4 and the formal likelihood function L6 are approximately the same for most of the sub-basins, and these likelihood functions have an almost similar effect on the sensitivity of the parameters. The 95% total prediction uncertainty bounds bracketed most of the observed data. Considering all the statistical indicators and criteria of uncertainty assessment, including RMSE, KGE, NS, P-factor and R-factor, the results showed that the DREAM(ZS) algorithm performed better under formal likelihood functions L5 and L7

  16. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    International Nuclear Information System (INIS)

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated

  17. Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood

    Directory of Open Access Journals (Sweden)

    Jesser J. Marulanda-Durango

    2012-12-01

    In this paper, we present a methodology for estimating the parameters of a model for an electrical arc furnace by using maximum likelihood estimation, one of the most widely employed methods for parameter estimation in practical settings. The model for the electrical arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open source MATLAB® toolbox, to solve the set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electrical arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. The results show a maximum error of 5% in the current's root mean square value.

  18. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  19. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely...... focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual......-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...

  20. Maintained Individual Data Distributed Likelihood Estimation (MIDDLE).

    Science.gov (United States)

    Boker, Steven M; Brick, Timothy R; Pritikin, Joshua N; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D; Maes, Hermine H; Neale, Michael C

    2015-01-01

    Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participant's personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual's data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
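
    A toy sketch of the distributed-likelihood idea is given below: each participant object evaluates the log-likelihood of its own private data for a proposed parameter vector and returns only that scalar, and the central optimizer aggregates the returns. A normal model stands in for the structural equation models actually used; class and variable names are invented.

```python
# Toy sketch of distributed likelihood estimation in the spirit of MIDDLE:
# each participant's device evaluates the log-likelihood of its own private
# data for a proposed parameter vector and returns only that scalar; the
# central optimizer aggregates the returned values. A normal model stands in
# for the structural equation models actually used; all names are invented.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

class ParticipantDevice:
    def __init__(self, private_data):
        self._data = np.asarray(private_data)       # never leaves the device

    def local_log_lik(self, theta):
        mu, log_sigma = theta
        return float(np.sum(norm.logpdf(self._data, mu, np.exp(log_sigma))))

rng = np.random.default_rng(7)
devices = [ParticipantDevice(rng.normal(5.0, 2.0, size=n)) for n in (30, 55, 42)]

def total_neg_log_lik(theta):
    # Only likelihood values cross the network; raw data stays on each device.
    return -sum(d.local_log_lik(theta) for d in devices)

fit = minimize(total_neg_log_lik, x0=[0.0, 0.0])
print(f"mu = {fit.x[0]:.2f}, sigma = {np.exp(fit.x[1]):.2f}")
```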

  1. Consistency of the Maximum Likelihood Estimator for general hidden Markov models

    CERN Document Server

    Douc, Randal; Olsson, Jimmy; Van Handel, Ramon

    2009-01-01

    Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for $V$-uniformly ergodic Markov chains.

  2. Likelihood based observability analysis and confidence intervals for predictions of dynamic models

    CERN Document Server

    Kreutz, Clemens; Timmer, Jens

    2011-01-01

    Mechanistic dynamic models of biochemical networks such as Ordinary Differential Equations (ODEs) contain unknown parameters like the reaction rate constants and the initial concentrations of the compounds. The large number of parameters as well as their nonlinear impact on the model responses hamper the determination of confidence regions for parameter estimates. At the same time, classical approaches translating the uncertainty of the parameters into confidence intervals for model predictions are hardly feasible. In this article it is shown that a so-called prediction profile likelihood yields reliable confidence intervals for model predictions, despite arbitrarily complex and high-dimensional shapes of the confidence regions for the estimated parameters. Prediction confidence intervals of the dynamic states allow a data-based observability analysis. The approach renders the issue of sampling a high-dimensional parameter space into evaluating one-dimensional prediction spaces. The method is also applicable ...

  3. SEMModComp: An R Package for Calculating Likelihood Ratio Tests for Mean and Covariance Structure Models

    Science.gov (United States)

    Levy, Roy

    2010-01-01

    SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling, is described. The package is written in R and is freely available for download or on request.

  4. Observation Likelihood Model Design and Failure Recovery Scheme toward Reliable Localization of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Chang-bae Moon

    2011-01-01

    Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment shared with humans. Reliability of localization is highly dependent on the developer's experience, because uncertainty is caused by a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented, focusing on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of the experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
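
    A common starting point for a range-sensor observation likelihood is a mixture of a Gaussian around the expected beam range and a uniform term for unexplained readings; the generic sketch below illustrates the kind of model the design guidelines address and is not the authors' implementation.

```python
# Generic range-sensor observation likelihood: a Gaussian around the expected
# beam range mixed with a uniform term for unexplained measurements. This is
# the kind of model the design guidelines address, not the authors' code.
import numpy as np

def beam_likelihood(z, z_expected, sigma=0.05, z_max=10.0, w_hit=0.85, w_rand=0.15):
    hit = np.exp(-0.5 * ((z - z_expected) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    rand = 1.0 / z_max                               # uniform over the sensor range
    return w_hit * hit + w_rand * rand

def observation_likelihood(scan, expected_scan):
    # Product over beams, accumulated in log space for numerical stability.
    scan, expected_scan = np.asarray(scan), np.asarray(expected_scan)
    return np.exp(np.sum(np.log(beam_likelihood(scan, expected_scan))))

# Particle weights in a Monte Carlo localizer are proportional to this value;
# persistently low weights across all particles can flag localization failure.
print(observation_likelihood([1.02, 2.48, 3.97], [1.00, 2.50, 4.00]))
```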

  5. Effects of deceptive packaging and product involvement on purchase intention: an elaboration likelihood model perspective.

    Science.gov (United States)

    Lammers, H B

    2000-04-01

    From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on scores for purchase intention by consumers lowly involved rather than highly involved with a product (n = 40). Undergraduates who were classified as either highly or lowly (ns = 20 and 20) involved with M&Ms examined either a deceptive or non-deceptive package design for M&Ms candy and were subsequently informed of the deception employed in the packaging before finally rating their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that the model has implications beyond advertising effects and into packaging effects. PMID:10840911

  6. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program (available for LINUX and WINDOWS environments), a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html. PMID:17973343

  7. Likelihood-free inference of population structure and local adaptation in a Bayesian hierarchical model.

    Science.gov (United States)

    Bazin, Eric; Dawson, Kevin J; Beaumont, Mark A

    2010-06-01

    We address the problem of finding evidence of natural selection from genetic data, accounting for the confounding effects of demographic history. In the absence of natural selection, gene genealogies should all be sampled from the same underlying distribution, often approximated by a coalescent model. Selection at a particular locus will lead to a modified genealogy, and this motivates a number of recent approaches for detecting the effects of natural selection in the genome as "outliers" under some models. The demographic history of a population affects the sampling distribution of genealogies, and therefore the observed genotypes and the classification of outliers. Since we cannot see genealogies directly, we have to infer them from the observed data under some model of mutation and demography. Thus the accuracy of an outlier-based approach depends to a greater or a lesser extent on the uncertainty about the demographic and mutational model. A natural modeling framework for this type of problem is provided by Bayesian hierarchical models, in which parameters, such as mutation rates and selection coefficients, are allowed to vary across loci. It has proved quite difficult computationally to implement fully probabilistic genealogical models with complex demographies, and this has motivated the development of approximations such as approximate Bayesian computation (ABC). In ABC the data are compressed into summary statistics, and computation of the likelihood function is replaced by simulation of data under the model. In a hierarchical setting one may be interested both in hyperparameters and parameters, and there may be very many of the latter--for example, in a genetic model, these may be parameters describing each of many loci or populations. This poses a problem for ABC in that one then requires summary statistics for each locus, which, if used naively, leads to a consequent difficulty in conditional density estimation. We develop a general method for applying

  8. Likelihood- and residual-based evaluation of medium-term earthquake forecast models for California

    Science.gov (United States)

    Schneider, Max; Clements, Robert; Rhoades, David; Schorlemmer, Danijel

    2014-09-01

    Seven competing models for forecasting medium-term earthquake rates in California are quantitatively evaluated using the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP). The model class consists of contrasting versions of the Every Earthquake a Precursor According to Size (EEPAS) and Proximity to Past Earthquakes (PPE) modelling approaches. Models are ranked by their performance on likelihood-based tests, which measure the consistency between a model forecast and observed earthquakes. To directly compare one model against another, we run a classical paired t-test and its non-parametric alternative on an information gain score based on the forecasts. These test scores are complemented by several residual-based methods, which offer detailed spatial information. The experiment period covers 2009 June-2012 September, when California experienced 23 earthquakes above the magnitude threshold. Though all models fail to capture seismicity during an earthquake sequence, spatio-temporal differences between models also emerge. The overall best-performing model has strong time- and magnitude-dependence, weights all earthquakes equally as medium-term precursors of larger events and has a full set of fitted parameters. Models with this time- and magnitude-dependence offer a statistically significant advantage over simpler baseline models. In addition, models that down-weight aftershocks when forecasting larger events have a desirable feature in that they do not overpredict following an observed earthquake sequence. This tendency towards overprediction differs between the simpler model, which is based on fewer parameters, and more complex models that include more parameters.

  9. An I(2) Cointegration Model with Piecewise Linear Trends: Likelihood Analysis and Application

    DEFF Research Database (Denmark)

    Kurita, Takamitsu; Nielsen, Heino Bohn; Rahbæk, Anders

    This paper presents likelihood analysis of the I(2) cointegrated vector autoregression with piecewise linear deterministic terms. The limiting behavior of the maximum likelihood estimators is derived and then used to obtain the limiting distribution of the likelihood ratio statistic...... asymptotic inference is discussed in detail for one of the cointegration parameters. To illustrate, an empirical analysis of US consumption, income and wealth, 1965 - 2008, is performed, emphasizing the importance of a change in nominal price trends after 1980.

  10. The Role of Item Models in Automatic Item Generation

    Science.gov (United States)

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  11. Inverse Modeling of Respiratory System during Noninvasive Ventilation by Maximum Likelihood Estimation

    Science.gov (United States)

    Saatci, Esra; Akan, Aydin

    2010-12-01

    We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) model and the widely used linear Resistance-Inductance-Capacitance (RIC) model of the respiratory system by the Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the Kurtosis method, respectively. The performance of the MLE algorithm is also assessed against the Cramer-Rao Lower Bound (CRLB) using artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model than by the nonlinear RC model. The respiratory signals of the patient group, on the other hand, are better fitted by the nonlinear RC model, with lower measurement noise variance and better convergence of the measurement noise shape factor and model parameter tracks. It is also observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group the shape factor is estimated in the super-Gaussian range.
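
    As a rough illustration of estimating generalized Gaussian noise parameters by maximum likelihood (only one ingredient of the authors' procedure, not the full respiratory-model fit), one might estimate the GGD scale and shape of a residual sequence numerically; the synthetic Laplace-like residuals and the log-parameterization are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Synthetic zero-mean "measurement noise" residuals (Laplace-like, shape ~ 1).
residuals = rng.laplace(scale=0.5, size=2000)

def ggd_negloglik(params, x):
    """Negative log-likelihood of a zero-mean generalized Gaussian.

    Parameterized by log(alpha) and log(beta) so the optimizer works on an
    unconstrained scale; alpha is the scale, beta the shape factor.
    """
    log_alpha, log_beta = params
    alpha, beta = np.exp(log_alpha), np.exp(log_beta)
    n = x.size
    return -(n * (np.log(beta) - np.log(2.0) - np.log(alpha)
                  - gammaln(1.0 / beta))
             - np.sum((np.abs(x) / alpha) ** beta))

result = minimize(ggd_negloglik, x0=[0.0, 0.0], args=(residuals,),
                  method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(result.x)
print(f"estimated scale alpha = {alpha_hat:.3f}, shape beta = {beta_hat:.3f}")
# A shape factor near 1 indicates Laplacian (super-Gaussian) noise,
# a value near 2 indicates Gaussian noise.
```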

  12. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
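
    A bare-bones sketch of the GLUE idea referred to above (not the authors' monthly water balance model): sample parameter sets, score each with a likelihood measure such as the Nash-Sutcliffe efficiency, keep the behavioural sets above a threshold, and form uncertainty bounds from their predictions. The toy one-parameter runoff model, the 0.6 threshold and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "observed" monthly runoff driven by a toy precipitation series (mm/month).
precip = rng.gamma(shape=2.0, scale=40.0, size=96)
obs_runoff = 0.45 * precip + rng.normal(0.0, 8.0, size=96)

def model(runoff_coeff):
    """A deliberately simple one-parameter 'hydrological' model."""
    return runoff_coeff * precip

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo sampling of the parameter space.
candidates = rng.uniform(0.1, 0.9, size=5000)
scores = np.array([nash_sutcliffe(model(c), obs_runoff) for c in candidates])

# Behavioural parameter sets: those exceeding the (assumed) threshold.
behavioural = candidates[scores > 0.6]
sims = np.array([model(c) for c in behavioural])

# GLUE-style uncertainty bounds taken from the behavioural simulations.
lower, upper = np.percentile(sims, [5, 95], axis=0)
print(f"{behavioural.size} behavioural sets; "
      f"mean bound width = {(upper - lower).mean():.1f} mm/month")
```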

  13. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s−1 after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.

  14. A likelihood ratio model for the determination of the geographical origin of olive oil.

    Science.gov (United States)

    Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz

    2015-01-01

    Food fraud or food adulteration may be of forensic interest, for instance in cases of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, determination of the geographical origin of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages over typical chemometric methods because the LR model takes into account information about the rarity of the sample in a relevant population. Such properties are of particular interest to forensic scientists, and the aim of this study was therefore to examine the classification of olive oil with the use of different LR models and to assess their performance under selected data pre-processing methods (logarithm-based data transformations) and a feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) were considered with the use of LR models. The correct classification rate and empirical cross entropy were taken as measures of performance of each model. The LR models proved satisfactorily useful for determining the geographical origin of olive oil across many variants of data pre-processing, as the rates of correct classification were close to 100% and a considerable reduction of information loss was observed. The work also presents a comparative study of the performance of linear discriminant analysis on the considered classification problems. An approach to choosing the value of the smoothing parameter for the kernel density estimation based LR models is also highlighted. PMID:25467458
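
    A likelihood ratio for a two-class origin question can be sketched with kernel density estimates of a single feature per class; the synthetic "fatty acid" values, the two classes and the Gaussian KDE are assumptions for illustration, not the authors' data or models.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical one-dimensional feature (e.g., one fatty acid proportion)
# for two origin classes; purely synthetic reference data.
south = rng.normal(13.0, 1.0, size=300)
north = rng.normal(10.5, 1.2, size=300)

# Kernel density estimates play the role of the within-class distributions.
kde_south = gaussian_kde(south)
kde_north = gaussian_kde(north)

def likelihood_ratio(x):
    """LR = p(feature | South) / p(feature | North) for a questioned sample."""
    return kde_south(x)[0] / kde_north(x)[0]

for value in (9.5, 11.5, 13.5):
    lr = likelihood_ratio(value)
    verdict = "supports South" if lr > 1 else "supports North"
    print(f"feature = {value:>5.1f}  LR = {lr:8.2f}  ({verdict})")
```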

  15. Statistical bounds and maximum likelihood performance for shot noise limited knife-edge modeled stellar occultation

    Science.gov (United States)

    McNicholl, Patrick J.; Crabtree, Peter N.

    2014-09-01

    Applications of stellar occultation by solar system objects have a long history for determining universal time, detecting binary stars, and providing estimates of sizes of asteroids and minor planets. More recently, extension of this last application has been proposed as a technique to provide information (if not complete shadow images) of geosynchronous satellites. Diffraction has long been recognized as a source of distortion for such occultation measurements, and models subsequently developed to compensate for this degradation. Typically these models employ a knife-edge assumption for the obscuring body. In this preliminary study, we report on the fundamental limitations of knife-edge position estimates due to shot noise in an otherwise idealized measurement. In particular, we address the statistical bounds, both Cramér-Rao and Hammersley-Chapman-Robbins, on the uncertainty in the knife-edge position measurement, as well as the performance of the maximum-likelihood estimator. Results are presented as a function of both stellar magnitude and sensor passband; the limiting case of infinite resolving power is also explored.

  16. Likelihood analysis of species occurrence probability from presence-only data for modelling species distributions

    Science.gov (United States)

    Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.

    2012-01-01

    1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities
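
    The identifiability point made above can be illustrated with a simplified version of a presence-only likelihood: presence locations are treated as draws from the landscape covariate distribution weighted by the occurrence probability, with a background sample standing in for that distribution. The logistic form of ψ, the simulated covariate and the background sample size are assumptions for the sketch, not the authors' implementation or R package.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)

# Simulated landscape: one covariate value at each of many sites.
landscape_x = rng.normal(0.0, 1.0, size=20000)

# Assumed true occurrence probability, logistic in the covariate.
beta_true = np.array([-1.0, 1.5])
psi_true = expit(beta_true[0] + beta_true[1] * landscape_x)

# Presence-only data: occupied sites are found by random sampling of the
# landscape, so presence covariates are distributed proportional to psi.
occupied = rng.random(landscape_x.size) < psi_true
presence_x = rng.choice(landscape_x[occupied], size=500, replace=False)

# Background sample approximating the landscape covariate distribution.
background_x = rng.choice(landscape_x, size=5000, replace=False)

def negloglik(beta):
    """Presence-only log-likelihood: sum log psi(x_i) - n * log E_background[psi]."""
    psi_pres = np.clip(expit(beta[0] + beta[1] * presence_x), 1e-12, 1.0)
    psi_back = np.clip(expit(beta[0] + beta[1] * background_x), 1e-12, 1.0)
    return -(np.sum(np.log(psi_pres))
             - presence_x.size * np.log(psi_back.mean()))

fit = minimize(negloglik, x0=np.zeros(2), method="Nelder-Mead")
# Estimates (including the intercept) can be imprecise but are identifiable
# under the logistic assumption, unlike an index-only approach.
print("estimated (beta0, beta1):", np.round(fit.x, 2), " true:", beta_true)
```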

  17. The Use of Dynamic Stochastic Social Behavior Models to Produce Likelihood Functions for Risk Modeling of Proliferation and Terrorist Attacks

    International Nuclear Information System (INIS)

    The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies - a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material

  18. Comparison of Statistical Data Models for Identifying Differentially Expressed Genes Using a Generalized Likelihood Ratio Test

    Directory of Open Access Journals (Sweden)

    Kok-Yong Seng

    2008-01-01

    Currently, statistical techniques for analysis of microarray-generated data sets have deficiencies due to limited understanding of errors inherent in the data. A generalized likelihood ratio (GLR) test based on an error model has been recently proposed to identify differentially expressed genes from microarray experiments. However, the use of different error structures under the GLR test has not been evaluated, nor has this method been compared to commonly used statistical tests such as the parametric t-test. The concomitant effects of varying data signal-to-noise ratio and replication number on the performance of statistical tests also remain largely unexplored. In this study, we compared the effects of different underlying statistical error structures on the GLR test's power in identifying differentially expressed genes in microarray data. We evaluated such variants of the GLR test as well as the one sample t-test based on simulated data by means of receiver operating characteristic (ROC) curves. Further, we used bootstrapping of ROC curves to assess statistical significance of differences between the areas under the curves. Our results showed that (i) the GLR tests outperformed the t-test for detecting differential gene expression, (ii) the identity of the underlying error structure was important in determining the GLR tests' performance, and (iii) signal-to-noise ratio was a more important contributor than sample replication in identifying statistically significant differential gene expression.
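
    The evaluation machinery described above (ROC curves plus bootstrapping of the area under the curve) can be sketched generically; the simulated test statistics below stand in for real GLR and t-test output and are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated scores for truly differentially expressed genes (label 1) and
# unchanged genes (label 0), for two hypothetical detection methods.
labels = np.concatenate([np.ones(200), np.zeros(1800)])
scores_a = np.concatenate([rng.normal(2.0, 1.0, 200), rng.normal(0, 1, 1800)])
scores_b = np.concatenate([rng.normal(1.5, 1.0, 200), rng.normal(0, 1, 1800)])

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    ranks = np.argsort(np.argsort(np.concatenate([pos, neg]))) + 1
    rank_sum_pos = ranks[:pos.size].sum()
    return (rank_sum_pos - pos.size * (pos.size + 1) / 2) / (pos.size * neg.size)

observed_diff = auc(scores_a, labels) - auc(scores_b, labels)

# Bootstrap the AUC difference to judge whether it is statistically meaningful.
boot_diffs = []
n = labels.size
for _ in range(1000):
    idx = rng.integers(0, n, size=n)
    boot_diffs.append(auc(scores_a[idx], labels[idx])
                      - auc(scores_b[idx], labels[idx]))
lo, hi = np.percentile(boot_diffs, [2.5, 97.5])
print(f"AUC difference = {observed_diff:.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```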

  19. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    Science.gov (United States)

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
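
    Consistency tests of this kind typically score a gridded forecast by the joint Poisson log-likelihood of the observed counts in each cell. A minimal scoring sketch with invented forecast rates and an invented catalogue (not the actual RELM submissions) might look like this:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)

n_cells = 7682  # number of 0.1 deg x 0.1 deg cells in the testing region

# Hypothetical forecasts: expected number of M >= 4.95 events per cell over
# the five-year window, for two competing models (illustrative values only).
forecast_a = rng.gamma(shape=0.5, scale=0.008, size=n_cells)
forecast_b = np.full(n_cells, forecast_a.mean())  # uniform reference, same total

# Hypothetical "observed" catalogue: event counts per cell.
observed = rng.poisson(forecast_a)

def log_likelihood(forecast, counts):
    """Joint Poisson log-likelihood of the observed cell counts."""
    return poisson.logpmf(counts, forecast).sum()

ll_a = log_likelihood(forecast_a, observed)
ll_b = log_likelihood(forecast_b, observed)
print(f"log-likelihood A = {ll_a:.1f}, B = {ll_b:.1f}, "
      f"information gain per event = {(ll_a - ll_b) / observed.sum():.2f}")
```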

  20. Automatic Modeling of Virtual Humans and Body Clothing

    Institute of Scientific and Technical Information of China (English)

    Nadia Magnenat-Thalmann; Hyewon Seo; Frederic Cordier

    2004-01-01

    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shapes and requiring labor-intensive processes to build, pose the challenge of automatic modeling. The problem and solutions to automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling the static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans, are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.

  1. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) Automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and then fix the size of the neuro-fuzzy network, by which the complexity of system design is reduced greatly at the price of some fitting capability; 2) Recursive least squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; 3) A gradient descent algorithm, proposed for adjusting the fuzzy values according to the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is illustrated to validate the feasibility of the method.

  2. Evaluating score- and feature-based likelihood ratio models for multivariate continuous data: applied to forensic MDMA comparison

    NARCIS (Netherlands)

    A. Bolck; H. Ni; M. Lopatka

    2015-01-01

    Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are emp

  3. Automatic bootstrapping of a morphable face model using multiple components

    NARCIS (Netherlands)

    Haar, F.B. ter; Veltkamp, R.C.

    2009-01-01

    We present a new bootstrapping algorithm to automatically enhance a 3D morphable face model with new face data. Our algorithm is based on a morphable model fitting method that uses a set of predefined face components. This fitting method produces accurate model fits to 3D face data with noise and ho

  4. A Microeconomic Interpretation of the Maximum Entropy Estimator of Multinomial Logit Models and Its Equivalence to the Maximum Likelihood Estimator

    Directory of Open Access Journals (Sweden)

    Louis de Grange

    2010-09-01

    Maximum entropy models are often used to describe supply and demand behavior in urban transportation and land use systems. However, they have been criticized for not representing the behavioral rules of system agents and because their parameters seem to adjust only to modeler-imposed constraints. In response, it is demonstrated that the solution to the entropy maximization problem with linear constraints is a multinomial logit model whose parameters solve the likelihood maximization problem of this probabilistic model. But this result neither provides a microeconomic interpretation of the entropy maximization problem nor explains the equivalence of these two optimization problems. This work demonstrates that an analysis of the dual of the entropy maximization problem yields two useful alternative explanations of its solution. The first shows that the maximum entropy estimators of the multinomial logit model parameters reproduce rational user behavior, while the second shows that the likelihood maximization problem for multinomial logit models is the dual of the entropy maximization problem.
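
    The duality referred to above can be written compactly. The following is a standard textbook-style statement of the two linked problems in our own notation (the constraint set and symbols are assumptions of the sketch, not the paper's exact formulation), with j(i) denoting the alternative chosen by decision maker i:

```latex
% Entropy maximization over choice probabilities p_{ij} (decision maker i, alternative j),
% subject to reproducing the observed attribute totals:
\begin{aligned}
\max_{p}\;& -\sum_{i}\sum_{j} p_{ij}\,\ln p_{ij}\\
\text{s.t. }& \sum_{j} p_{ij} = 1 \;\;\forall i,\qquad
 \sum_{i}\sum_{j} p_{ij}\,x_{ijk} \;=\; \sum_{i} x_{i\,j(i)\,k}\;\;\forall k .
\end{aligned}
% The first-order conditions give the multinomial logit form
p_{ij} \;=\; \frac{\exp(\beta^{\top}x_{ij})}{\sum_{l}\exp(\beta^{\top}x_{il})},
% and the Lagrangian dual coincides (up to constants) with the logit log-likelihood
\ell(\beta) \;=\; \sum_{i}\ln\frac{\exp(\beta^{\top}x_{i\,j(i)})}{\sum_{l}\exp(\beta^{\top}x_{il})} .
```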

  5. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    Science.gov (United States)

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  6. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...

  7. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  8. Automatic differentiation, tangent linear models, and (pseudo) adjoints

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.

    1993-12-31

    This paper provides a brief introduction to automatic differentiation and relates it to the tangent linear model and adjoint approaches commonly used in meteorology. After a brief review of the forward and reverse mode of automatic differentiation, the ADIFOR automatic differentiation tool is introduced, and initial results of a sensitivity-enhanced version of the MM5 PSU/NCAR mesoscale weather model are presented. We also present a novel approach to the computation of gradients that uses a reverse mode approach at the time loop level and a forward mode approach at every time step. The resulting ``pseudoadjoint`` shares the characteristic of an adjoint code that the ratio of gradient to function evaluation does not depend on the number of independent variables. In contrast to a true adjoint approach, however, the nonlinearity of the model plays no role in the complexity of the derivative code.

  9. Mixture model for inferring susceptibility to mastitis in dairy cattle: a procedure for likelihood-based inference

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2004-01-01

    Full Text Available Abstract A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.
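
    The ranking rule mentioned above is based on the conditional probability that a record belongs to the putatively uninfected component given its somatic cell information. A bare-bones two-component Gaussian mixture fitted by EM shows how such membership probabilities arise; the simulated scores and component structure are assumptions, and the sketch omits the correlated random effects and the Monte Carlo E-step of the paper's model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Simulated log somatic cell scores: a "healthy" and an "infected" component.
data = np.concatenate([rng.normal(4.0, 0.6, 700), rng.normal(6.5, 0.9, 300)])

# Initial guesses for the mixing weight, means and standard deviations.
w, mu, sd = 0.5, np.array([3.0, 7.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: posterior probability of membership in component 0 ("healthy").
    dens0 = w * norm.pdf(data, mu[0], sd[0])
    dens1 = (1 - w) * norm.pdf(data, mu[1], sd[1])
    resp0 = dens0 / (dens0 + dens1)

    # M-step: update the parameters from the responsibilities.
    w = resp0.mean()
    mu = np.array([np.average(data, weights=resp0),
                   np.average(data, weights=1 - resp0)])
    sd = np.array([np.sqrt(np.average((data - mu[0]) ** 2, weights=resp0)),
                   np.sqrt(np.average((data - mu[1]) ** 2, weights=1 - resp0))])

print(f"weights = ({w:.2f}, {1 - w:.2f}), means = {np.round(mu, 2)}")
# Ranking rule: animals with the highest P(healthy | score), i.e. resp0, first.
print("example membership probabilities:", np.round(resp0[:5], 3))
```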

  10. Using suggestion to model different types of automatic writing.

    Science.gov (United States)

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q

    2014-05-01

    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled.

  11. Adapted Maximum-Likelihood Gaussian Models for Numerical Optimization with Continuous EDAs

    OpenAIRE

    Bosman, Peter; Grahl, J; Thierens, D.

    2007-01-01

    This article focuses on numerical optimization with continuous Estimation-of-Distribution Algorithms (EDAs). Specifically, the focus is on the use of one of the most common and best understood probability distributions: the normal distribution. We first give an overview of the existing research on this topic. We then point out a source of inefficiency in EDAs that make use of the normal distribution with maximum-likelihood (ML) estimates. Scaling the covariance matrix beyond its ML estimate d...

  12. Asymptotic Properties of Maximum Likelihood Estimates in the Mixed Poisson Model

    OpenAIRE

    Lambert, Diane; Tierney, Luke

    1984-01-01

    This paper considers the asymptotic behavior of the maximum likelihood estimators (mle's) of the probabilities of a mixed Poisson distribution with a nonparametric mixing distribution. The vector of estimated probabilities is shown to converge in probability to the vector of mixed probabilities at rate $n^{1/2-\\varepsilon}$ for any $\\varepsilon > 0$ under a generalized $\\chi^2$ distance function. It is then shown that any finite set of the mle's has the same joint limiting distribution as doe...

  13. Formalising responsibility modelling for automatic analysis

    OpenAIRE

    Simpson, Robbie; Storer, Tim

    2015-01-01

    Modelling the structure of social-technical systems as a basis for informing software system design is a difficult compromise. Formal methods struggle to capture the scale and complexity of the heterogeneous organisations that use technical systems. Conversely, informal approaches lack the rigour needed to inform the software design and construction process or enable automated analysis. We revisit the concept of responsibility modelling, which models social technical systems as a collec...

  14. Automatic Building Information Model Query Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges in data integration and software interoperability. Using a Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, as it can ease building design information management. However, the partial model query mechanism of the current BIM data hub collaboration model has several limitations, which prevent designers and engineers from taking full advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  15. Towards Automatic Processing of Virtual City Models for Simulations

    Science.gov (United States)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have involved an extremely high, and therefore uneconomical, manual effort for the processing of models. Using different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increases the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the world of GIS and that of CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information that is unnecessary for a numerical simulation.
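
    The Coons surfaces mentioned above can be illustrated with the basic bilinearly blended Coons patch, which interpolates four boundary curves that meet at shared corners; the example boundary curves below are arbitrary assumptions, not geometry from an actual city model.

```python
import numpy as np

def coons_patch(c_bottom, c_top, c_left, c_right, u, v):
    """Bilinearly blended Coons patch S(u, v) from four boundary curves.

    Each boundary function maps a parameter in [0, 1] to a 3D point, and the
    curves must meet at the four shared corners.
    """
    p00, p01 = c_bottom(0.0), c_top(0.0)   # corners at u = 0
    p10, p11 = c_bottom(1.0), c_top(1.0)   # corners at u = 1
    ruled_u = (1 - v) * c_bottom(u) + v * c_top(u)
    ruled_v = (1 - u) * c_left(v) + u * c_right(v)
    bilinear = ((1 - u) * (1 - v) * p00 + u * (1 - v) * p10
                + (1 - u) * v * p01 + u * v * p11)
    return ruled_u + ruled_v - bilinear

# Arbitrary example boundaries: a flat quad with one curved ("roof-like") edge.
c_bottom = lambda u: np.array([u, 0.0, 0.0])
c_top    = lambda u: np.array([u, 1.0, 0.3 * np.sin(np.pi * u)])
c_left   = lambda v: np.array([0.0, v, 0.0])
c_right  = lambda v: np.array([1.0, v, 0.0])

# Evaluate the patch on a small parameter grid.
for u in np.linspace(0.0, 1.0, 3):
    for v in np.linspace(0.0, 1.0, 3):
        p = coons_patch(c_bottom, c_top, c_left, c_right, u, v)
        print(f"S({u:.1f}, {v:.1f}) = {np.round(p, 3)}")
```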

  16. Automatic Compartment Modelling and Segmentation for Dynamical Renal Scintigraphies

    DEFF Research Database (Denmark)

    Ståhl, Daniel; Åström, Kalle; Overgaard, Niels Christian;

    2011-01-01

    Time-resolved medical data has important applications in a large variety of medical settings. In this paper we study automatic analysis of dynamical renal scintigraphies. The traditional analysis pipeline for dynamical renal scintigraphies is to use manual or semiautomatic methods for segmentation of pixels into physical compartments, extract their corresponding time-activity curves and then compute the parameters that are relevant for medical assessment. In this paper we present a fully automatic system that incorporates spatial smoothing constraints, compartment modelling and positivity......

  17. Automatic 3D modeling of the urban landscape

    NARCIS (Netherlands)

    I. Esteban; J. Dijk; F. Groen

    2010-01-01

    In this paper we present a fully automatic system for building 3D models of urban areas at the street level. We propose a novel approach for the accurate estimation of the scale consistent camera pose given two previous images. We employ a new method for global optimization and use a novel sampling

  18. Automatic 3D Modeling of the Urban Landscape

    NARCIS (Netherlands)

    Esteban, I.; Dijk, J.; Groen, F.A.

    2010-01-01

    In this paper we present a fully automatic system for building 3D models of urban areas at the street level. We propose a novel approach for the accurate estimation of the scale consistent camera pose given two previous images. We employ a new method for global optimization and use a novel sampling

  19. A Maximum Likelihood Estimator based on First Differences for a Panel Data Tobit Model with Individual Specific Effects

    OpenAIRE

    A.S. Kalwij

    2000-01-01

    This paper proposes an alternative estimation procedure for a panel data Tobit model with individual specific effects based on taking first differences of the equation of interest. This helps to alleviate the sensitivity of the estimates to a specific parameterization of the individual specific effects and some Monte Carlo evidence is provided in support of this. To allow for arbitrary serial correlation estimation takes place in two steps: Maximum Likelihood is applied to each pair of consec...

  20. Geometric model of robotic arc welding for automatic programming

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Geometric information is important for automatic programming of arc welding robots. Complete geometric models of robotic arc welding are established in this paper. In the geometric model of the weld seam, an equation with seam length as its parameter is introduced to represent any weld seam, and a method to determine discrete programming points on a weld seam is presented. In the geometric model of the weld workpiece, three classes of primitives and a CSG tree are used to describe the workpiece, and the detailed data structure is presented. For the pose transformation of the torch, the world frame, torch frame and active frame are defined, and the transformations between frames are presented. Based on these geometric models, an automatic programming software package for robotic arc welding, RAWCAD, is developed. Experiments show that the geometric models are practical and reliable.

  1. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    Directory of Open Access Journals (Sweden)

    Wang Huai-Chun

    2009-09-01

    Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.

  2. A Batch Rival Penalized Expectation-Maximization Algorithm for Gaussian Mixture Clustering with Automatic Model Selection

    Directory of Open Access Journals (Sweden)

    Jiechang Wen

    2012-01-01

    Within the learning framework of maximum weighted likelihood (MWL) proposed by Cheung, 2004 and 2005, this paper develops a batch Rival Penalized Expectation-Maximization (RPEM) algorithm for density mixture clustering provided that all observations are available before the learning process. Compared to the adaptive RPEM algorithm in Cheung, 2004 and 2005, this batch RPEM need not assign the learning rate, analogous to the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), but still preserves the capability of automatic model selection. Further, the convergence speed of this batch RPEM is faster than that of the EM and the adaptive RPEM in general. The experiments show the superior performance of the proposed algorithm on synthetic data and color image segmentation.

  3. Bayesian and maximum likelihood phylogenetic analyses of protein sequence data under relative branch-length differences and model violation

    Directory of Open Access Journals (Sweden)

    Harlow Timothy J

    2005-01-01

    Abstract Background Bayesian phylogenetic inference holds promise as an alternative to maximum likelihood, particularly for large molecular-sequence data sets. We have investigated the performance of Bayesian inference with empirical and simulated protein-sequence data under conditions of relative branch-length differences and model violation. Results With empirical protein-sequence data, Bayesian posterior probabilities provide more-generous estimates of subtree reliability than does the nonparametric bootstrap combined with maximum likelihood inference, reaching 100% posterior probability at bootstrap proportions around 80%. With simulated 7-taxon protein-sequence datasets, Bayesian posterior probabilities are somewhat more generous than bootstrap proportions, but do not saturate. Compared with likelihood, Bayesian phylogenetic inference can be as or more robust to relative branch-length differences for datasets of this size, particularly when among-sites rate variation is modeled using a gamma distribution. When the (known) correct model was used to infer trees, Bayesian inference recovered the (known) correct tree in 100% of instances in which one or two branches were up to 20-fold longer than the others. At ratios more extreme than 20-fold, topological accuracy of reconstruction degraded only slowly when only one branch was of relatively greater length, but more rapidly when there were two such branches. Under an incorrect model of sequence change, inaccurate trees were sometimes observed at less extreme branch-length ratios, and (particularly for trees with single long branches) such trees tended to be more inaccurate. The effect of model violation on accuracy of reconstruction for trees with two long branches was more variable, but gamma-corrected Bayesian inference nonetheless yielded more-accurate trees than did either maximum likelihood or uncorrected Bayesian inference across the range of conditions we examined. Assuming an exponential

  4. Nonlinear model predictive control using automatic differentiation

    OpenAIRE

    Al Seyab, Rihab Khalid Shakir

    2006-01-01

    Although nonlinear model predictive control (NMPC) might be the best choice for a nonlinear plant, it is still not widely used. This is mainly due to the computational burden associated with solving online a set of nonlinear differential equations and a nonlinear dynamic optimization problem in real time. This thesis is concerned with strategies aimed at reducing the computational burden involved in different stages of the NMPC such as optimization problem, state estimation, an...

  5. Towards automatic calibration of 2-dimensional flood propagation models

    Directory of Open Access Journals (Sweden)

    P. Fabio

    2009-11-01

    Hydraulic models for describing flood propagation are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessment, and the evaluation of flood control measures. Nowadays many models of different complexity are available, both regarding the mathematical foundation and the spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models such as hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are only rarely monitored due to the disturbances they inflict and the lack of appropriate measuring equipment in place. Secondly, the two-dimensional models in particular are computationally very demanding, and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well documented flood event in August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which makes automatic calibration possible, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by a too large number of degrees of freedom in the calibration data, in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.
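
    The calibration loop that PEST automates can be sketched generically: adjust model parameters so as to minimize the mismatch between simulated and observed quantities. The toy power-law stage model and the synthetic gauge observations below are assumptions; in the study itself a full 2-D hydraulic model is run at each iteration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)

# Synthetic "observed" water levels (m) at five gauges for one flood event,
# generated from a rating-curve-like power law of the gauge inflow (m^3/s).
inflow = np.array([50.0, 120.0, 300.0, 220.0, 90.0])
true_params = np.array([0.6, 0.45])            # coefficient, exponent
observed = true_params[0] * inflow ** true_params[1] \
           + rng.normal(0.0, 0.2, inflow.size)

def simulate(params):
    """Stand-in for the hydraulic model: stage as a power law of inflow."""
    coeff, exponent = params
    return coeff * inflow ** exponent

def residuals(params):
    """Mismatch between simulated and observed stages, which the optimiser drives to zero."""
    return simulate(params) - observed

fit = least_squares(residuals, x0=[0.4, 0.3],
                    bounds=([0.1, 0.1], [2.0, 1.0]))
print("calibrated parameters:", np.round(fit.x, 3), " true:", true_params)
```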

  6. Automatic balancing of 3D models

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Schmidt, Ryan; Bærentzen, Jakob Andreas

    2014-01-01

    3D printing technologies allow for more diverse shapes than are possible with molds and the cost of making just one single object is negligible compared to traditional production methods. However, not all shapes are suitable for 3D print. One of the remaining costs is therefore human time spent......, in these cases, we will apply a rotation of the object which only deforms the shape a little near the base. No user input is required but it is possible to specify manufacturing constraints related to specific 3D print technologies. Several models have successfully been balanced and printed using both polyjet...

  7. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  8. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    Science.gov (United States)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. Procedures for automatic texture mapping of 3D models are therefore in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures via web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  9. Using UML to Model Web Services for Automatic Composition

    Directory of Open Access Journals (Sweden)

    Amal Elgammal

    2010-07-01

    There is great interest in the web services paradigm nowadays. One of the most important problems related to the web service paradigm is the automatic composition of web services. Several frameworks have been proposed to achieve this novel goal. The most recent and richest framework (model) is the Colombo model. However, even for experienced developers, working with Colombo formalisms is low-level, very complex and time-consuming. We propose to use UML (Unified Modeling Language) to model services and service composition in Colombo. By using UML, the web service developer deals with high-level graphical UML models, avoiding the difficulties of working with the low-level and complex details of Colombo. To be able to use the Colombo automatic composition algorithm, we propose to represent Colombo by a set of related XML document types that can form the basis of a Colombo language. Moreover, we propose transformation rules between UML and the proposed Colombo XML documents. The Colombo automatic composition algorithm can then be applied to build a composite service that satisfies a given user request. A prototypical implementation of the proposed approach is developed using Visual Paradigm for UML.

  10. Global self-weighted and local quasi-maximum exponential likelihood estimators for ARMA--GARCH/IGARCH models

    CERN Document Server

    Zhu, Ke; 10.1214/11-AOS895

    2012-01-01

    This paper investigates the asymptotic theory of the quasi-maximum exponential likelihood estimators (QMELE) for ARMA--GARCH models. Under only a fractional moment condition, the strong consistency and the asymptotic normality of the global self-weighted QMELE are obtained. Based on this self-weighted QMELE, the local QMELE is shown to be asymptotically normal for the ARMA model with GARCH (finite variance) and IGARCH errors. A formal comparison of the two estimators is given for some cases. A simulation study is carried out to assess the performance of these estimators, and a real example on the world crude oil price is given.

  11. Modelling of risk events with uncertain likelihoods and impacts in large infrastructure projects

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2010-01-01

    This paper presents contributions to the mathematical core of risk and uncertainty management in compliance with the principles of New Budgeting laid out in 2008 by the Danish Ministry of Transport to be used in large infrastructure projects. Basically, the new principles are proposed in order...... to prevent future budget overruns. One of the central ideas is to introduce improved risk management processes and the present paper addresses this particular issue. A relevant cost function in terms of unit prices and quantities is developed and an event impact matrix with uncertain impacts from independent...... uncertain risk events is used to calculate the total uncertain risk budget. Cost impacts from the individual risk events on the individual project activities are kept precisely track of in order to comply with the requirements of New Budgeting. Additionally, uncertain likelihoods for the occurrence of risk...

  12. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    WANG Qihua; H(a)rdle Wolfgang

    2004-01-01

    In this paper, linear errors-in-response models are considered in the presence of validation data on the responses. A semiparametric dimension reduction technique is employed to define an estimator of β with asymptotic normality, the estimated empirical log-likelihoods and the adjusted empirical log-likelihoods for the vector of regression coefficients and linear combinations of the regression coefficients, respectively. The estimated empirical log-likelihoods are shown to be asymptotically distributed as weighted sums of independent $\chi^2_1$ variables, and the adjusted empirical log-likelihoods are proved to be asymptotically distributed as standard chi-squares, respectively.

  13. Likelihood for interval-censored observations from multi-state models

    DEFF Research Database (Denmark)

    Commenges, Daniel

    2002-01-01

    multi-state models; illness-death; counting processes; ignorability; interval-censoring; Markov models

  14. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we propose a useful polygon expression for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  15. Variable-mass Thermodynamics Calculation Model for Gas-operated Automatic Weapon

    Institute of Scientific and Technical Information of China (English)

    陈建彬; 吕小强

    2011-01-01

    Energy and mass exchange phenomena exist between the barrel and the gas-operated device of an automatic weapon. To describe its interior ballistics and the dynamic characteristics of the gas-operated device accurately, a new variable-mass thermodynamics model is built. It is used to calculate the automatic mechanism velocity of a certain automatic weapon, and the calculated results agree well with the experimental results, which validates the model. The influences of structure parameters on the dynamic characteristics of the gas-operated device are discussed. The results show that the model is valuable for the design and accurate performance prediction of gas-operated automatic weapons.

  16. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    International Nuclear Information System (INIS)

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field

  17. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  18. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)

    2015-12-28

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.

  19. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html
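
    As a minimal, hedged counterpart to the package described above, the sketch below estimates the two variance components of a one-way random-effects model by numerically maximizing the REML log-likelihood; WOMBAT itself handles far more general models, and the simulated group structure and variances here are illustrative assumptions.

        # REML for y = X*beta + Z*u + e with one random (group) effect.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n_groups, n_per = 30, 8
        Z = np.kron(np.eye(n_groups), np.ones((n_per, 1)))        # group incidence matrix
        X = np.ones((n_groups * n_per, 1))                        # intercept only
        u = rng.normal(0, np.sqrt(2.0), n_groups)                 # true sigma_u^2 = 2
        y = 5.0 + Z @ u + rng.normal(0, 1.0, n_groups * n_per)    # true sigma_e^2 = 1

        def neg_reml(log_var):
            s2u, s2e = np.exp(log_var)
            V = s2u * Z @ Z.T + s2e * np.eye(len(y))
            Vinv = np.linalg.inv(V)
            XtViX = X.T @ Vinv @ X
            P = Vinv - Vinv @ X @ np.linalg.solve(XtViX, X.T @ Vinv)
            _, logdetV = np.linalg.slogdet(V)
            _, logdetX = np.linalg.slogdet(XtViX)
            return 0.5 * (logdetV + logdetX + y @ P @ y)   # negative REML log-likelihood up to a constant

        fit = minimize(neg_reml, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
        print("REML estimates (sigma_u^2, sigma_e^2):", np.exp(fit.x))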

  20. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    Science.gov (United States)

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives. PMID:25487423

  1. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Science.gov (United States)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  2. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Science.gov (United States)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
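
    The sketch below illustrates only step (3) of the pipeline above: ICP refinement of a rigid transform, with the rotation solved by SVD at each iteration. It assumes a coarse alignment already exists (the 3D-SIFT/FPFH/SAC-IA stages are not reproduced), and the toy point clouds and rotation angle are fabricated for the example.

        # Minimal ICP with an SVD (Kabsch) rigid solve and k-d tree correspondences.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid(src, dst):
            # least-squares rotation R and translation t mapping src onto dst
            cs, cd = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cd - R @ cs

        def icp(source, target, n_iter=50):
            tree = cKDTree(target)
            R, t = np.eye(3), np.zeros(3)
            for _ in range(n_iter):
                _, idx = tree.query(source @ R.T + t)     # closest-point correspondences
                R, t = best_rigid(source, target[idx])
            return R, t

        rng = np.random.default_rng(2)
        target = rng.normal(size=(500, 3))
        a = 0.3
        R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        source = (target - np.array([0.1, 0.2, 0.3])) @ R_true    # inverse transform of target
        R_est, t_est = icp(source, target)
        aligned = source @ R_est.T + t_est
        print("RMS alignment error:", np.sqrt(np.mean(np.sum((aligned - target) ** 2, axis=1))))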

  3. Automatic identification of model reductions for discrete stochastic simulation

    Science.gov (United States)

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda

    2012-07-01

    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.

  4. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized system more general and easier, in this paper, a new and intuitive language PSL (Parameterized-system Specification Language) is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely symbolically represent the collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since the abstract and symbolic techniques are exploited in the symbolic model, state-explosion problem in traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on UNIX platform. Thus, a complete tool for verifying the parameterized synchronous systems is obtained and tested for some cases. The experimental results show that the method is satisfactory.

  5. The Work Ratio--modeling the likelihood of return to work for workers with musculoskeletal disorders: A fuzzy logic approach.

    Science.gov (United States)

    Apalit, Nathan

    2010-01-01

    The world of musculoskeletal disorders (MSDs) is complicated and fuzzy. Fuzzy logic provides a precise framework for complex problems characterized by uncertainty, vagueness and imprecision. Although fuzzy logic would appear to be an ideal modeling language to help address the complexity of MSDs, little research has been done in this regard. The Work Ratio is a novel mathematical model that uses fuzzy logic to provide a numerical and linguistic valuation of the likelihood of return to work and remaining at work. It can be used for a worker with any MSD at any point in time. Basic mathematical concepts from set theory and fuzzy logic are reviewed. A case study is then used to illustrate the use of the Work Ratio. Its potential strengths and limitations are discussed. Further research of its use with a variety of MSDs, settings and multidisciplinary teams is needed to confirm its universal value.

  6. Maximum Likelihood Estimation in Latent Class Models For Contingency Table Data

    OpenAIRE

    Fienberg, S.E.; Hersh, P.; Rinaldo, A.; Zhou, Y

    2007-01-01

    Statistical models with latent structure have a history going back to the 1950s and have seen widespread use in the social sciences and, more recently, in computational biology and in machine learning. Here we study the basic latent class model proposed originally by the sociologist Paul F. Lazarsfeld for categorical variables, and we explain its geometric structure. We draw parallels between the statistical and geometric properties of latent class models and we illustrate geometrically the ca...
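
    To make the estimation problem concrete, the sketch below fits a basic latent class model for binary items by maximum likelihood using the EM algorithm; it is only an illustration and ignores the boundary and identifiability issues that the paper studies geometrically, and the simulated two-class data are an assumption of the example.

        # EM for a latent class model with binary items.
        import numpy as np

        def fit_lcm(X, n_classes, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            n, p = X.shape
            pi = np.full(n_classes, 1.0 / n_classes)              # class proportions
            theta = rng.uniform(0.2, 0.8, size=(n_classes, p))    # P(item = 1 | class)
            for _ in range(n_iter):
                # E-step: posterior class memberships
                logp = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                logp -= logp.max(axis=1, keepdims=True)
                post = np.exp(logp)
                post /= post.sum(axis=1, keepdims=True)
                # M-step: update proportions and conditional item probabilities
                Nk = post.sum(axis=0)
                pi = Nk / n
                theta = np.clip((post.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)
            return pi, theta

        rng = np.random.default_rng(3)
        z = rng.integers(0, 2, 500)                               # two well-separated classes
        truth = np.array([[0.9, 0.8, 0.9, 0.2, 0.1, 0.2],
                          [0.1, 0.2, 0.1, 0.8, 0.9, 0.8]])
        X = (rng.uniform(size=(500, 6)) < truth[z]).astype(float)
        pi_hat, theta_hat = fit_lcm(X, n_classes=2)
        print(np.round(pi_hat, 2))
        print(np.round(theta_hat, 2))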

  7. Maximum likelihood estimators for extended growth curve model with orthogonal between-individual design matrices

    NARCIS (Netherlands)

    Klein, Daniel; Zezula, Ivan

    2015-01-01

    The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in the way how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design matrices.

  8. Regularization for Generalized Additive Mixed Models by Likelihood-Based Boosting

    OpenAIRE

    Groll, Andreas; Tutz, Gerhard

    2012-01-01

    With the emergence of semi- and nonparametric regression the generalized linear mixed model has been expanded to account for additive predictors. In the present paper an approach to variable selection is proposed that works for generalized additive mixed models. In contrast to common procedures it can be used in high-dimensional settings where many covariates are available and the form of the influence is unknown. It is constructed as a componentwise boosting method and hence is able to pe...

  9. Modeling of a Multiple Digital Automatic Gain Control System

    Institute of Scientific and Technical Information of China (English)

    WANG Jingdian; LU Xiuhong; ZHANG Li

    2008-01-01

    Automatic gain control (AGC) has been used in many applications. The key features of AGC, including a steady state output and static/dynamic timing response, depend mainly on key parameters such as the reference and the filter coefficients. A simple model developed to describe AGC systems based on several simple assumptions shows that AGC always converges to the reference and that the timing constant depends on the filter coefficients. Measures are given to prevent oscillations and limit cycle effects. The simple AGC system is adapted to a multiple AGC system for a TV tuner in a much more efficient model. Simulations using the C language are 16 times faster than those with MATLAB, and 10 times faster than those with a mixed register transfer level (RTL)-simulation program with integrated circuit emphasis (SPICE) model.

  10. Model Considerations for Memory-based Automatic Music Transcription

    Science.gov (United States)

    Albrecht, Štěpán; Šmídl, Václav

    2009-12-01

    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
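
    The observation model above, a recording as a weighted superposition of known library sounds, can be illustrated with a very small sketch: estimate non-negative weights of library spectra by non-negative least squares. The Bayesian priors on the weights discussed in the paper are not modelled here, and the random "library" is a stand-in for real sound templates.

        # Decompose an observed spectrum over a library of known spectra with NNLS.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(4)
        n_bins, n_sounds = 128, 10
        library = np.abs(rng.normal(size=(n_bins, n_sounds)))    # columns: known sound spectra
        w_true = np.zeros(n_sounds)
        w_true[[2, 5]] = [1.0, 0.6]                              # two sounds actually present
        observed = library @ w_true + 0.01 * rng.normal(size=n_bins)

        w_hat, _ = nnls(library, observed)
        print(np.round(w_hat, 2))    # near zero except for the two active sounds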

  11. An automatic fault management model for distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland); Seppaenen, M. [North-Carelian Power Co (Finland); Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)

    1998-08-01

    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate for the fault distance is obtained. This information is then combined, in order to find the actual fault point, with the data obtained from the fault indicators in the line branching points. As a third technique, in the absence of better fault location data, statistical information of line section fault frequencies can also be used. For combining the different fault location information, fuzzy logic is used. As a result, the probability weights for the fault being located in different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. Then the supply is restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load-carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter lists the practical experiences during the test use period. Also the benefits of this kind of automation are assessed and future developments are outlined
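
    The fuzzy combination step can be pictured with a deliberately small sketch: each information source assigns a degree of belief to every line section, the degrees are combined with a fuzzy operator, and the result is normalized into probability weights for the faulty section. The section names, the numbers and the product operator below are illustrative assumptions, not the FI/FL implementation.

        # Combine three fault-location evidence sources into section probability weights.
        import numpy as np

        sections = ["S1", "S2", "S3", "S4"]
        distance_match = np.array([0.1, 0.7, 0.9, 0.2])   # fit of computed vs measured fault current
        indicator_data = np.array([0.2, 0.8, 0.8, 0.1])   # agreement with fault-indicator readings
        fault_history  = np.array([0.3, 0.4, 0.6, 0.5])   # relative historical fault frequency

        combined = distance_match * indicator_data * fault_history   # fuzzy AND as a product
        weights = combined / combined.sum()
        for name, w in zip(sections, weights):
            print(f"{name}: {w:.2f}")     # the highest weight marks the most likely faulty section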

  12. Maximum likelihood estimation of neutral model parameters for multiple samples with different degrees of dispersal limitation

    NARCIS (Netherlands)

    Etienne, Rampal S.

    2009-01-01

    In a recent paper, I presented a sampling formula for species abundances from multiple samples according to the prevailing neutral model of biodiversity, but practical implementation for parameter estimation was only possible when these samples were from local communities that were assumed to be equ

  13. Estimation of Spatial Sample Selection Models : A Partial Maximum Likelihood Approach

    NARCIS (Netherlands)

    Rabovic, Renata; Cizek, Pavel

    2016-01-01

    To analyze data obtained by non-random sampling in the presence of cross-sectional dependence, estimation of a sample selection model with a spatial lag of a latent dependent variable or a spatial error in both the selection and outcome equations is considered. Since there is no estimation framework

  14. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power, trunca

  15. Using the Extended Parallel Process Model to Examine Teachers' Likelihood of Intervening in Bullying

    Science.gov (United States)

    Duong, Jeffrey; Bradshaw, Catherine P.

    2013-01-01

    Background: Teachers play a critical role in protecting students from harm in schools, but little is known about their attitudes toward addressing problems like bullying. Previous studies have rarely used theoretical frameworks, making it difficult to advance this area of research. Using the Extended Parallel Process Model (EPPM), we examined the…

  16. A GIS Model for Minefield Area Prediction: The Minefield Likelihood Procedure

    OpenAIRE

    Chamberlayne, Edward Pye

    2002-01-01

    Existing minefields left over from previous conflicts pose a grave threat to humanitarian relief operations, domestic everyday life, and future military operations. The remaining minefields in Afghanistan, from the decade long war with the Soviet Union, are just one example of this global problem. The purpose of this research is to develop a methodology that will predict areas where minefields are the most likely to exist through use of a GIS model. The concept is to combine geospatial dat...

  17. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness-of-fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. The genes top and bottom ranked by the three approaches were similar; however, notable differences were shown for the best pair of genes selected by each method and for the remaining genes of the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions: The optimal statistical properties of Maximum Likelihood estimation, joined to the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental

  18. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas;

    2015-01-01

    Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address this, ... From the patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown on a total of 25 models of patients. In all cases, a final mesh suitable for finite element simulations...

  19. An automatic invisible axion in the SUSY preon model

    Science.gov (United States)

    Babu, K. S.; Choi, Kiwoon; Pati, J. C.; Zhang, X.

    1994-08-01

    It is shown that the recently proposed preon model which provides a unified origin of the diverse mass scales and an explanation of family replication as well as of inter-family mass-hierarchy, naturally possesses a Peccei-Quinn (PQ) symmetry whose spontaneous breaking leads to an automatic invisible axion. Existence of the PQ-symmetry is simply a consequence of supersymmetry and the requirement of minimality in the field-content and interactions, which proposes that the lagrangian should possess only those terms which are dictated by the gauge principle and no others. In addition to the axion, the model also generates two superlight Goldstone bosons and their superpartners all of which are cosmologically safe.

  20. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, mass-produced sheet-metal parts have been widely applied in the mechanical, communication, electronics, and light industries in recent decades, but advances in sheet-metal part design and manufacturing remain slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of a sheet-metal part, whose characteristics are used for classification and graph-based representation of the sheet-metal features embodied in the part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometric and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  1. Automatic generation of matrix element derivatives for tight binding models

    Science.gov (United States)

    Elena, Alin M.; Meister, Matthias

    2005-10-01

    Tight binding (TB) models are one approach to the quantum mechanical many-particle problem. An important role in TB models is played by hopping and overlap matrix elements between the orbitals on two atoms, which of course depend on the relative positions of the atoms involved. This dependence can be expressed with the help of Slater-Koster parameters, which are usually taken from tables. Recently, a way to generate these tables automatically was published. If TB approaches are applied to simulations of the dynamics of a system, also derivatives of matrix elements can appear. In this work we give general expressions for first and second derivatives of such matrix elements. Implemented in a tight binding computer program, like, for instance, DINAMO, they obviate the need to type all the required derivatives of all occurring matrix elements by hand.
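
    A hedged sketch of the idea of differentiating matrix elements with respect to atomic coordinates: symbolic first and second derivatives of a distance-dependent hopping integral, computed with SymPy. The exponential radial form and the symbol names below are assumptions made for the example, not the parameterization used in DINAMO.

        # Symbolic first and mixed second derivatives of a hopping matrix element h(r).
        import sympy as sp

        x1, y1, z1, x2, y2, z2, V0, q, r0 = sp.symbols("x1 y1 z1 x2 y2 z2 V0 q r0", real=True)
        r = sp.sqrt((x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2)   # interatomic distance
        h = V0 * sp.exp(-q * (r - r0))                            # assumed radial dependence

        dh_dx1 = sp.diff(h, x1)             # first derivative w.r.t. one coordinate
        d2h_dx1dy2 = sp.diff(h, x1, y2)     # mixed second derivative
        print(sp.simplify(dh_dx1))
        print(sp.simplify(d2h_dx1dy2))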

  2. Efficient Word Reading: Automaticity of Print-Related Skills Indexed by Rapid Automatized Naming through Cusp-Catastrophe Modeling

    Science.gov (United States)

    Sideridis, Georgios D.; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios

    2016-01-01

    The study explored the moderating role of rapid automatized naming (RAN) in reading achievement through a cusp-catastrophe model grounded on nonlinear dynamic systems theory. Data were obtained from a community sample of 496 second through fourth graders who were followed longitudinally over 2 years and split into 2 random subsamples (validation…

  3. Likelihood Analysis of Seasonal Cointegration

    DEFF Research Database (Denmark)

    Johansen, Søren; Schaumburg, Ernst

    1999-01-01

    The error correction model for seasonal cointegration is analyzed. Conditions are found under which the process is integrated of order 1 and cointegrated at seasonal frequency, and a representation theorem is given. The likelihood function is analyzed and the numerical calculation of the maximum likelihood estimators is discussed. The asymptotic distribution of the likelihood ratio test for cointegrating rank is given. It is shown that the estimated cointegrating vectors are asymptotically mixed Gaussian. The results resemble the results for cointegration at zero frequency when expressed in terms...

  4. Towards Automatic Semantic Labelling of 3D City Models

    Science.gov (United States)

    Rook, M.; Biljecki, F.; Diakité, A. A.

    2016-10-01

    The lack of semantic information in many 3D city models is a considerable limiting factor in their use, as a lot of applications rely on semantics. Such information is not always available, since it is not collected at all times, it might be lost due to data transformation, or its lack may be caused by non-interoperability in data integration from other sources. This research is a first step in creating an automatic workflow that semantically labels plain 3D city model represented by a soup of polygons, with semantic and thematic information, as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score for these regions that either represent the ground (terrain) or a RoofSurface. Regions with a high likeliness score, to one of the two classes, are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that function as a start in a region growing algorithm, to create regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected to generate the correct output in CityGML. The results show an accuracy between 85 % and 99 % in the automatic semantic labelling on four different test datasets. The paper is concluded by indicating problems and difficulties implying the next steps in the research.

  5. Automatic Construction of Anomaly Detectors from Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

    Ferragut, Erik M [ORNL; Darmon, David M [ORNL; Shue, Craig A [ORNL; Kelley, Stephen [ORNL

    2011-01-01

    Detection of rare or previously unseen attacks in cyber security presents a central challenge: how does one search for a sufficiently wide variety of types of anomalies and yet allow the process to scale to increasingly complex data? In particular, creating each anomaly detector manually and training each one separately presents untenable strains on both human and computer resources. In this paper we propose a systematic method for constructing a potentially very large number of complementary anomaly detectors from a single probabilistic model of the data. Only one model needs to be trained, but numerous detectors can then be implemented. This approach promises to scale better than manual methods to the complex heterogeneity of real-life data. As an example, we develop a Latent Dirichlet Allocation probability model of TCP connections entering Oak Ridge National Laboratory. We show that several detectors can be automatically constructed from the model and will provide anomaly detection at flow, sub-flow, and host (both server and client) levels. This demonstrates how the fundamental connection between anomaly detection and probabilistic modeling can be exploited to develop more robust operational solutions.
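
    The core recipe above (train one probabilistic model, then derive detectors by thresholding likelihoods) can be sketched generically. In the example below a Gaussian mixture stands in for the Latent Dirichlet Allocation model of TCP connections used in the paper, and the data, threshold quantile and component count are assumptions of the illustration.

        # Anomaly detection by thresholding per-record log-likelihood under a fitted model.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)
        normal = np.vstack([rng.normal(0, 1, (900, 4)), rng.normal(4, 1, (900, 4))])
        model = GaussianMixture(n_components=2, random_state=0).fit(normal)

        threshold = np.quantile(model.score_samples(normal), 0.01)   # ~1% false-alarm budget
        new_records = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(12, 1, (3, 4))])
        print(model.score_samples(new_records) < threshold)          # far-away records get flagged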

  6. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  7. Electricity prices forecasting by automatic dynamic harmonic regression models

    International Nuclear Information System (INIS)

    The changes experienced by electricity markets in recent years have created the necessity for more accurate forecast tools of electricity prices, both for producers and consumers. Many methodologies have been applied to this aim, but in the view of the authors, state space models are not yet fully exploited. The present paper proposes a univariate dynamic harmonic regression model set up in a state space framework for forecasting prices in these markets. The advantages of the approach are threefold. Firstly, a fast automatic identification and estimation procedure is proposed based on the frequency domain. Secondly, the recursive algorithms applied offer adaptive predictions that compare favourably with respect to other techniques. Finally, since the method is based on unobserved components models, explicit information about trend, seasonal and irregular behaviours of the series can be extracted. This information is of great value to the electricity companies' managers in order to improve their strategies, i.e. it provides management innovations. The good forecast performance and the rapid adaptability of the model to changes in the data are illustrated with actual prices taken from the PJM interconnection in the US and for the Spanish market for the year 2002
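
    In a similar spirit, though not the authors' identification procedure, a state-space unobserved-components model with trigonometric seasonal terms can be fitted and used for forecasting with statsmodels; the hourly toy series, the daily period and the number of harmonics below are assumptions of the sketch.

        # Unobserved-components model with harmonic (trigonometric) seasonality.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        t = np.arange(24 * 60)                                     # 60 days of hourly prices
        prices = 40 + 0.01 * t + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

        model = sm.tsa.UnobservedComponents(
            prices,
            level="local linear trend",
            freq_seasonal=[{"period": 24, "harmonics": 3}],        # daily cycle, 3 harmonics
        )
        res = model.fit(disp=False)
        print(res.forecast(steps=24))                              # next-day hourly forecast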

  8. Empirical likelihood-based dimension reduction inference for linear error-in-responses models with validation study

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

  9. Some Asymptotic Inference in Quasi-Likelihood Nonlinear Models: A Geometric Approach

    Institute of Scientific and Technical Information of China (English)

    韦博成; 唐年胜; 王学仁

    2000-01-01

    A modified Bates and Watts geometric framework is proposed for quasi-likelihood nonlinear models in Euclidean inner product space. Based on the modified geometric framework, some asymptotic inference in terms of curvatures for quasi-likelihood nonlinear models is studied. Several previous results for nonlinear regression models, exponential family nonlinear models, etc. are extended to quasi-likelihood nonlinear models.

  10. Automatic prediction of facial trait judgments: appearance vs. structural models.

    Directory of Open Access Journals (Sweden)

    Mario Rojas

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by a certain type of holistic description of the face appearance; and (c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  11. Rising Above Chaotic Likelihoods

    CERN Document Server

    Du, Hailiang

    2014-01-01

    Berliner (Likelihood and Bayesian prediction for chaotic systems, J. Am. Stat. Assoc. 1991) identified a number of difficulties in using the likelihood function within the Bayesian paradigm for state estimation and parameter estimation of chaotic systems. Even when the equations of the system are given, he demonstrated "chaotic likelihood functions" of initial conditions and parameter values in the 1-D Logistic Map. Chaotic likelihood functions, while ultimately smooth, have such complicated small scale structure as to cast doubt on the possibility of identifying high likelihood estimates in practice. In this paper, the challenge of chaotic likelihoods is overcome by embedding the observations in a higher dimensional sequence-space, which is shown to allow good state estimation with finite computational power. An Importance Sampling approach is introduced, where Pseudo-orbit Data Assimilation is employed in the sequence-space in order first to identify relevant pseudo-orbits and then relevant trajectories. Es...
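
    The phenomenon is easy to reproduce in a few lines: the sketch below evaluates the Gaussian-noise log-likelihood of the logistic map's initial condition on a fine grid, which becomes extremely jagged as observations accumulate; the noise level, map parameter and grid are assumptions of the example.

        # Log-likelihood of the initial condition of the logistic map under noisy observations.
        import numpy as np

        def trajectory(x0, n, r=4.0):
            xs, x = np.empty(n), x0
            for i in range(n):
                xs[i] = x
                x = r * x * (1 - x)
            return xs

        rng = np.random.default_rng(7)
        sigma, n_obs = 0.01, 20
        obs = trajectory(0.3, n_obs) + rng.normal(0, sigma, n_obs)

        grid = np.linspace(0.29, 0.31, 2001)
        loglik = np.array([-0.5 * np.sum((obs - trajectory(x0, n_obs)) ** 2) / sigma**2
                           for x0 in grid])
        print(f"true x0 = 0.300, grid maximum-likelihood x0 = {grid[np.argmax(loglik)]:.4f}")
        # plotting loglik against grid exposes the jagged "chaotic likelihood" structure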

  12. Empirical likelihood method in survival analysis

    CERN Document Server

    Zhou, Mai

    2015-01-01

    Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right-censored survival data. The author uses R for calculating empirical likelihood and includes many worked-out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric

  13. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  14. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122

  15. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    Science.gov (United States)

    Yu, Qian; Helmholz, Petra; Belton, David

    2016-06-01

    In recent years, 3D city models are in high demand by many public and private organisations, and the steadily growing capacity in both quality and quantity are increasing demand. The quality evaluation of these 3D models is a relevant issue both from the scientific and practical points of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process has been performed in all the three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced to apply on four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and also are assessed on their validity from the evaluation point of view.

  16. Automatic removal of eye movement artifacts from the EEG using ICA and the dipole model

    Institute of Scientific and Technical Information of China (English)

    Weidong Zhou; Jean Gotman

    2009-01-01

    12 patients were analyzed.The experimental results indicate that ICA with the dipole model is very efficient at automatically subtracting the eye movement artifacts,while retaining the EEG slow waves and making their interpretation easier.

  17. Towards a Pattern-based Automatic Generation of Logical Specifications for Software Models

    OpenAIRE

    Klimek, Radoslaw

    2014-01-01

    The work relates to the automatic generation of logical specifications, considered as sets of temporal logic formulas, extracted directly from developed software models. The extraction process is based on the assumption that the whole developed model is structured using only predefined workflow patterns. A method of automatic transformation of workflow patterns to logical specifications is proposed. Applying the presented concepts enables bridging the gap between the benefits of deductive rea...

  18. Simplifying Likelihood Ratios

    OpenAIRE

    McGee, Steven

    2002-01-01

    Likelihood ratios are one of the best measures of diagnostic accuracy, although they are seldom used, because interpreting them requires a calculator to convert back and forth between “probability” and “odds” of disease. This article describes a simpler method of interpreting likelihood ratios, one that avoids calculators, nomograms, and conversions to “odds” of disease. Several examples illustrate how the clinician can use this method to refine diagnostic decisions at the bedside.

  19. Matching and Clustering: Two Steps Towards Automatic Model Generation in Computer Vision

    OpenAIRE

    Gros, Patrick

    1993-01-01

    In this paper, we present a general framework for a system of automatic modelling and recognition of 3D polyhedral objects. Such a system has many applications for robotics: recognition, localization, grasping, ... Here we focus upon one main aspect of the system: when many images of one 3D object are taken from different unknown viewpoints, how to recognize those of them which represent the same aspect of the object? Briefly, it is possible to determine automatically i...

  20. Maximum likelihood polynomial regression for robust speech recognition

    Institute of Scientific and Technical Information of China (English)

    LU Yong; WU Zhenyang

    2011-01-01

    The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.

  1. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity traces mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, that usually characterize the trace mapping on images, are eliminated. Also trace sampling procedures based on circular windows and circular scanlines have been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained applying the automatic procedure on the DSM of a rock face are compared to those obtained performing a manual sampling on the orthophotograph of the same rock face.

  2. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow allow the efficiency of the production planning process in dynamic manufacturing networks to be increased. The use of a semi-automatic model generation concept, based on a parametric approach, to support production planning processes is presented. The presented approach allows simulation and visualization to be used for the verification of production plans and of alternative topologies of manufacturing network configurations, as well as for the automatic generation of a series of production flow scenarios. Computational examples applying the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  3. A cultural evolutionary programming approach to automatic analytical modeling of electrochemical phenomena through impedance spectroscopy

    CERN Document Server

    Arpaia, Pasquale

    2009-01-01

    An approach to automatic analytical modeling of electrochemical impedance spectroscopy data by evolutionary programming based on cultural algorithms is proposed. A solution-search strategy based on a cultural mechanism is exploited for defining the equivalent-circuit model automatically: information on search advance is transmitted to all potential solutions, rather than only to a small inheriting subset, such as in a traditional genetic approach. Moreover, with respect to the state of the art, also specific information related to constraints on the application physics knowledge is transferred. Experimental results of the proposed approach implementation in impedance spectroscopy for general-purpose electrochemical circuit analysis and for corrosion monitoring and diagnosing are presented.

  4. An Approach Using a 1D Hydraulic Model, Landsat Imaging and Generalized Likelihood Uncertainty Estimation for an Approximation of Flood Discharge

    Directory of Open Access Journals (Sweden)

    Seung Oh Lee

    2013-10-01

    Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. Remote sensing data, GIS, and modeling techniques have, therefore, proved to be useful tools in the analysis of the nature of floods. Accordingly, this study attempts to estimate a flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station in the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of the physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed by using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at a planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
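
    A hedged sketch of the GLUE procedure itself, on a toy model rather than the 1D hydraulic model and Landsat data of the study: sample parameter sets, score each with an informal likelihood, keep the behavioural sets and weight their predictions. The stand-in model, parameter ranges and acceptance quantile are all assumptions of the example.

        # Generalized likelihood uncertainty estimation (GLUE) on a toy "hydraulic" model.
        import numpy as np

        def toy_model(discharge, roughness):
            return 5.0 * discharge**0.4 / roughness        # stand-in predicted flood width

        observed_width = 95.0                              # e.g. a width measured from imagery
        rng = np.random.default_rng(8)
        discharge = rng.uniform(500, 5000, 20000)          # candidate parameter sets
        roughness = rng.uniform(0.8, 1.5, 20000)

        errors = (toy_model(discharge, roughness) - observed_width) ** 2
        likelihood = 1.0 / (errors + 1e-9)                 # one common informal likelihood choice
        keep = likelihood > np.quantile(likelihood, 0.95)  # behavioural sets: best 5%

        w = likelihood[keep] / likelihood[keep].sum()
        q = discharge[keep]
        order = np.argsort(q)
        cdf = np.cumsum(w[order])
        med, lo, hi = q[order][np.searchsorted(cdf, [0.5, 0.05, 0.95])]
        print(f"weighted median discharge ~ {med:.0f}, 90% GLUE bounds [{lo:.0f}, {hi:.0f}]")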

  5. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    ... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission ... observed data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery.

  6. A Maximum Likelihood Estimator of a Markov Model for Disease Activity in Crohn's Disease and Ulcerative Colitis for Annually Aggregated Partial Observations

    DEFF Research Database (Denmark)

    Borg, Søren; Persson, U.; Jess, T.;

    2010-01-01

    ... Hospital, Copenhagen, Denmark, during 1991 to 1993. The data were aggregated over calendar years; for each year, the number of relapses and the number of surgical operations were recorded. Our aim was to estimate Markov models for disease activity in CD and UC, in terms of relapse and remission, with a cycle length of 1 month. The purpose of these models was to enable evaluation of interventions that would shorten relapses or postpone future relapses. An exact maximum likelihood estimator was developed that disaggregates the yearly observations into monthly transition probabilities between remission ... observed data and has good face validity. The disease activity model is less suitable for UC due to its transient nature through the presence of curative surgery.

  7. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

    The development of complex event-driven systems requires studies and analysis prior to deployment, with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling and simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.

  8. A Study of Automatic Migration of Programs Across the Java Event Models

    OpenAIRE

    Kumar, Bharath M; Lakshminarayanan, R.; Srikant, YN

    2000-01-01

    Evolution of a framework forces a change in the design of an application which is based on the framework. The same is the case when the Java event model changed from the Inheritance model to the Event Delegation model. We summarize our experiences when attempting an automatic and elegant migration across the event models. Further, we also highlight the need for extra documentation in patterns that will help programs evolve better.

  9. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.

  10. Evaluating treatment effectiveness under model misspecification: a comparison of targeted maximum likelihood estimation with bias-corrected matching

    OpenAIRE

    Kreif, N.; Gruber, S.; Radice, Rosalba; Grieve, R; J S Sekhon

    2014-01-01

    Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maxi...

  11. Automatic model-based face reconstruction and recognition

    OpenAIRE

    Breuer, Pia

    2011-01-01

    Three-dimensional Morphable Models (3DMM) are known to be valuable tools for both face reconstruction and face recognition. These models are particularly relevant in safety applications or Computer Graphics. In this thesis, contributions are made to address the major difficulties preceding and during the fitting process of the Morphable Model in the framework of a fully automated system. It is shown to which extent the reconstruction and recognition results depend on the initialization and wha...

  12. AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate meshes for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate the mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.

  13. Automatic Generation of 3D Building Models for Sustainable Development

    OpenAIRE

    Sugihara, Kenichi

    2015-01-01

    3D city models are important in urban planning for sustainable development. Urban planners draw maps for efficient land use and a compact city. 3D city models based on these maps are quite effective for understanding what the image of a sustainable city will be if an alternative plan is realized. However, enormous time and labour has to be consumed to create these 3D models, using 3D modelling software such as 3ds Max or SketchUp. In order to automate the laborious steps, a GIS and CG inte...

  14. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    Science.gov (United States)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation
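
    The contrast between differentiating low-level code and differentiating a high-level model specification can be illustrated on a toy time-stepping model. Below, a hand-written discrete adjoint (reverse) recursion computes the gradient of a scalar objective with respect to the initial condition, which is the quantity an automatically generated adjoint model would supply. This is a generic sketch under simplifying assumptions, not libadjoint or FEniCS code.

        # Generic discrete-adjoint sketch (not libadjoint/FEniCS code).
        # Forward model: u_{n+1} = A u_n; objective J = 0.5 * ||u_N - d||^2.
        import numpy as np

        def forward(A, u0, nsteps):
            states = [u0]
            for _ in range(nsteps):
                states.append(A @ states[-1])
            return states

        def adjoint_gradient(A, states, d):
            """Reverse sweep: returns dJ/du0 for J = 0.5*||u_N - d||^2."""
            lam = states[-1] - d                 # terminal adjoint condition
            for _ in range(len(states) - 1):
                lam = A.T @ lam                  # adjoint (transpose) step, run backwards
            return lam

        rng = np.random.default_rng(0)
        A = 0.95 * np.eye(4) + 0.01 * rng.standard_normal((4, 4))
        u0, d = rng.standard_normal(4), rng.standard_normal(4)
        states = forward(A, u0, nsteps=10)
        grad = adjoint_gradient(A, states, d)    # gradient of J w.r.t. the initial state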

  15. Active Shapes for Automatic 3D Modeling of Buildings

    NARCIS (Netherlands)

    Sirmacek, B.; Lindenbergh, R.C.

    2015-01-01

    Recent technological developments help us to acquire high quality 3D measurements of our urban environment. However, these measurements, which come as point clouds or Digital Surface Models (DSM), do not directly give 3D geometrical models of buildings. In addition to that, they are not suitable for

  16. Towards automatic model based controller design for reconfigurable plants

    DEFF Research Database (Denmark)

    Michelsen, Axel Gottlieb; Stoustrup, Jakob; Izadi-Zamanabadi, Roozbeh

    2008-01-01

    This paper introduces model-based Plug and Play Process Control, a novel concept for process control, which allows a model-based control system to be reconfigured when a sensor or an actuator is plugged into a controlled process. The work reported in this paper focuses on composing a monolithic m...

  17. On divergences tests for composite hypotheses under composite likelihood

    OpenAIRE

    Martin, Nirian; Pardo, Leandro; Zografos, Konstantinos

    2016-01-01

    It is well known that in some situations it is not easy to compute the likelihood function, as the dataset might be large or the model too complex. In such contexts composite likelihood, derived by multiplying the likelihoods of subsets of the variables, may be useful. The extension of the classical likelihood ratio test statistic to the framework of composite likelihoods provides a procedure for testing in this setting. In this paper we intro...
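
    For orientation, a composite likelihood of the kind referred to above is usually written as a weighted product of component likelihoods built from low-dimensional subsets (e.g. pairs) of the variables; the notation below is the standard textbook form, not the specific construction of this paper.

        \[
        c\ell(\theta; y) = \sum_{k=1}^{K} w_k \, \log f(y \in A_k; \theta),
        \qquad
        \Lambda_C = 2\,\big\{ c\ell(\hat\theta; y) - c\ell(\hat\theta_0; y) \big\},
        \]

    where the events A_k are marginal or conditional components and the composite likelihood ratio statistic \(\Lambda_C\) converges to a weighted sum of independent \(\chi^2_1\) variables rather than the usual \(\chi^2\) limit.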

  18. BIVOPROB: A Computer Program for Maximum-Likelihood Estimation of Bivariate Ordered-Probit Models for Censored Data

    OpenAIRE

    Calhoun, C. A.

    1989-01-01

    Despite the large number of models devoted to the statistical analysis of censored data, relatively little attention has been given to the case of censored discrete outcomes. In this paper, the author presents a technical description and user's guide to a computer program for estimating bivariate ordered-probit models for censored and uncensored data. The model and program are currently being applied in an analysis of World Fertility Survey data for Europe and the United States, and the resul...

  19. The Maximum Likelihood Threshold of a Graph

    OpenAIRE

    Gross, Elizabeth; Sullivant, Seth

    2014-01-01

    The maximum likelihood threshold of a graph is the smallest number of data points that guarantees that maximum likelihood estimates exist almost surely in the Gaussian graphical model associated to the graph. We show that this graph parameter is connected to the theory of combinatorial rigidity. In particular, if the edge set of a graph $G$ is an independent set in the $n-1$-dimensional generic rigidity matroid, then the maximum likelihood threshold of $G$ is less than or equal to $n$. This c...

  20. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

    Directory of Open Access Journals (Sweden)

    Luan Yihui

    2009-09-01

    Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.

  1. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, the test task scheduling can be analyzed to prevent deadlock or resource conflicts. Finally, the paper analyzes the feasibility of this method.

  2. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.;

    2014-01-01

    and heterogeneity, which spatially scarce borehole lithology data may overlook, are well resolved in AEM surveys. This study presents a semi-automatic sequential hydrogeophysical inversion method for the integration of AEM and borehole data into regional groundwater models in sedimentary areas, where sand/ clay...

  3. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    Science.gov (United States)

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  4. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...

  5. Revisiting the Steam-Boiler Case Study with LUTESS: Modeling for Automatic Test Generation

    OpenAIRE

    Papailiopoulou, Virginia; Seljimi, Besnik; Parissis, Ioannis

    2009-01-01

    LUTESS is a testing tool for synchronous software that makes it possible to automatically build test data generators. The latter rely on a formal model of the program environment composed of a set of invariant properties, supposed to hold for every software execution. Additional assumptions can be used to guide the test data generation. The environment descriptions together with the assumptions correspond to a test model of the program. In this paper, we apply this modeling...

  6. Automatic generation of computable implementation guides from clinical information models

    OpenAIRE

    Boscá Tomás, Diego; Maldonado Segura, José Alberto; Moner Cano, David; Robles Viejo, Montserrat

    2015-01-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and trans...

  7. Mathematical modeling and analytical solution for stretching force of automatic feed mechanism

    Institute of Scientific and Technical Information of China (English)

    魏志芳; 陈国光

    2008-01-01

    The load of an automatic feed mechanism is composed of the stretching force of the feed belt at the entrance to the lower flexible guidance and the friction force between the feed belt and the flexible guidance. A mathematical model for computing the load was presented. An optimization problem was formulated to determine the attitude of the flexible guidance, based on the principle that the potential energy stored in the system is minimal at equilibrium. The friction force was then obtained from the attitude of the guide leaves, the moving velocity of the feed belt and the friction factor. Consequently, the load of the automatic feed mechanism can be calculated. Finally, an example was given to compute the load when the horizontal and elevating firing angles of the mechanism were 45° and 30°, respectively. The computed result can serve as a criterion for determining the design parameters of the automatic feed mechanism.
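
    A crude numerical analogue of the principle used above (the guidance attitude is the configuration that minimizes potential energy) can be sketched with SciPy; the chain-of-links parametrization, the end-point constraint and all numerical values below are illustrative assumptions, not the paper's model.

        # Illustrative only: equilibrium attitude of a chain of rigid links obtained by
        # minimizing potential energy, mimicking the minimum-energy principle above.
        # Link count, lengths, masses and the end-point constraint are assumptions.
        import numpy as np
        from scipy.optimize import minimize

        N, L, m, g = 10, 0.1, 0.2, 9.81          # links, link length, mass, gravity
        end_point = np.array([0.8, -0.2])        # prescribed position of the far end

        def positions(theta):
            """Joint coordinates of a planar chain with link angles `theta`."""
            steps = L * np.column_stack([np.cos(theta), np.sin(theta)])
            return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

        def energy(theta):
            pts = positions(theta)
            midpoints_y = 0.5 * (pts[:-1, 1] + pts[1:, 1])
            potential = m * g * midpoints_y.sum()                 # gravitational energy
            penalty = 1e4 * np.sum((pts[-1] - end_point) ** 2)    # end-point constraint
            return potential + penalty

        res = minimize(energy, x0=np.zeros(N), method="BFGS")
        equilibrium_angles = res.x               # attitude of the guide at equilibrium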

  8. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphael Georges

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
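
    For concreteness, the logistic model referred to in the last sentence is the classical bivariate max-stable family; with unit Fréchet margins it takes the standard form below, where the dependence parameter α ranges from complete dependence (α → 0) to independence (α = 1). The notation is the usual textbook one rather than that of the paper.

        \[
        \Pr(Z_1 \le z_1,\, Z_2 \le z_2)
        = \exp\!\Big\{ -\big( z_1^{-1/\alpha} + z_2^{-1/\alpha} \big)^{\alpha} \Big\},
        \qquad 0 < \alpha \le 1 .
        \]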

  9. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Full Text Available Feature-based NC machining, which requires high-quality 3D CAD models, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors. As a result, the user has to check the errors manually, with great effort, before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  10. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    Science.gov (United States)

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique for understanding the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the feasibility of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools, including global parameter search methods with robustness analysis. The seamless, bottom-up process consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitates dynamic modeling, contributing to research in systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with instructions.

  11. Automaticity and control in prospective memory: a computational model.

    Directory of Open Access Journals (Sweden)

    Sam J Gilbert

    Full Text Available Prospective memory (PM) refers to our ability to realize delayed intentions. In event-based PM paradigms, participants must act on an intention when they detect the occurrence of a pre-established cue. Some theorists propose that in such paradigms PM responding can only occur when participants deliberately initiate processes for monitoring their environment for appropriate cues. Others propose that perceptual processing of PM cues can directly trigger PM responding in the absence of strategic monitoring, at least under some circumstances. In order to address this debate, we present a computational model implementing the latter account, using a parallel distributed processing (interactive activation) framework. In this model PM responses can be triggered directly as a result of spreading activation from units representing perceptual inputs. PM responding can also be promoted by top-down monitoring for PM targets. The model fits a wide variety of empirical findings from PM paradigms, including the effect of maintaining PM intentions on ongoing response time and the intention superiority effect. The model also makes novel predictions concerning the effect of stimulus degradation on PM performance, the shape of response time distributions on ongoing and prospective memory trials, and the effects of instructing participants to make PM responses instead of ongoing responses or alongside them. These predictions were confirmed in two empirical experiments. We therefore suggest that PM should be considered to result from the interplay between bottom-up triggering of PM responses by perceptual input, and top-down monitoring for appropriate cues. We also show how the model can be extended to simulate encoding new intentions and subsequently deactivating them, and consider links between the model's performance and results from neuroimaging.

  12. Introductory statistical inference with the likelihood function

    CERN Document Server

    Rohde, Charles A

    2014-01-01

    This textbook covers the fundamentals of statistical inference and statistical theory, including Bayesian and frequentist approaches, with as much methodology as is possible without excessive emphasis on the underlying mathematics. This book is about some of the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book. There is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author at the Johns Hopkins University’s Department of Biostatistics. Students and instructors in public health programs will benefit from the likelihood modeling approach that is used throughout the text. This will also appeal to epidemiologists and psychometricians.  After a brief introduction, there are chapters on estimation, hypothesis testing, and maximum likelihood modeling. The book concludes with secti...

  13. Empirical Likelihood-Based Inference in Linear Models with Interval Censored Data (区间数据情形下线性模型的经验似然推断)

    Institute of Scientific and Technical Information of China (English)

    何其祥; 郑明

    2005-01-01

    An empirical likelihood approach to estimating the coefficients in a linear model with interval-censored responses is developed in this paper. By constructing an unbiased transformation of the interval-censored data, an empirical log-likelihood function with an asymptotic χ2 distribution is derived. Confidence regions for the coefficients are constructed. Simulation results indicate that the method performs better than the normal approximation method in terms of coverage accuracy.
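
    As a reminder of the machinery involved, Owen-type empirical likelihood for a parameter β defined through estimating functions Z_i(β) has the standard profile form below; the unbiased transformation of the interval-censored responses enters through the Z_i, and those details are specific to the paper. The notation here is the generic textbook one.

        \[
        R(\beta) = \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i Z_i(\beta) = 0 \Big\},
        \]

    and under regularity conditions \(-2\log R(\beta_0)\) converges in distribution to a \(\chi^2\) law, which is what yields the confidence regions mentioned above.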

  14. Automatic Relevance Determination for multi-way models

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    2009-01-01

    of components of data within the Tucker and CP structure. For the Tucker and CP model the approach performs better than heuristics such as the Bayesian Information Criterion, Akaikes Information Criterion, DIFFIT and the numerical convex hull (NumConvHull) while operating only at the cost of estimating...

  15. On Automatic Modeling and Use of Domain-specific Ontologies

    DEFF Research Database (Denmark)

    Andreasen, Troels; Knappe, Rasmus; Bulskov, Henrik

    2005-01-01

    In this paper, we firstly introduce an approach to the modeling of a domain-specific ontology for use in connection with a given document collection. Secondly, we present a methodology for deriving conceptual similarity from the domain-specific ontology. Adopted for ontology representation is a s...

  16. Information Model for Connection Management in Automatic Switched Optical Network

    Institute of Scientific and Technical Information of China (English)

    Xu Yunbin(徐云斌); Song Hongsheng; Gui Xuan; Zhang Jie; Gu Wanyi

    2004-01-01

    The three types of connections (Permanent Connection, Soft Permanent Connection and Switched Connection) provided by ASON can meet the requirements of different network services. Management and maintenance of these three connections are the most important aspects of ASON management. The information models proposed in this paper are used for ASON connection management. Firstly, a new information model is proposed to meet the requirements of the control plane introduced by ASON. In this model, a new class ControlNE is given, and the relationship between the ControlNE and the transport NE (network element) is also defined. The paper then proposes information models for the three types of connections for the first time, and analyzes the relationship between the three kinds of connections and the basic network transport entities. Finally, the paper defines some CORBA interfaces for the management of the three connections. In these interfaces, operations such as creating or releasing a connection are defined, and other operations manage the performance of the three kinds of connections, which is necessary for a distributed management system.

  17. Automatic Segmentation of Vertebrae from Radiographs: A Sample-Driven Active Shape Model Approach

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads;

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...... manner. In a first phase, a coarse estimate of the overall spine alignment and the vertebra locations is computed using a shape model sampling scheme. These samples are used to initialize a second phase of active shape model search, under a nonlinear model of vertebra appearance. The search...... is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to the previous work in automatic vertebra segmentation, in terms of both segmentation...

  18. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    Science.gov (United States)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method of vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, whole volume is automatically divided into four segments by anatomical properties of bone structures along z-axis of head and neck. To define an optimal volume circumscribing vertebral arteries, anterior-posterior bounding and side boundaries are defined as initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since boundaries of the vertebral arteries are ambiguous in case the arteries pass through the transverse foramen in the cervical vertebra, the circle model is extended along z-axis to cylinder model for considering additional vessel information of neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. From the experiments, the proposed method provides accurate results without bone artifacts and eroded vessels in the cervical vertebra.
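
    The circular model fitting used above to track a roughly circular vessel cross-section can be illustrated with the standard algebraic (Kasa) least-squares circle fit; the sample points below are placeholders for candidate vessel boundary points of one slice, not data from the paper.

        # Sketch of an algebraic least-squares circle fit (Kasa fit) of the kind used
        # to track a circular vessel cross-section; the input points are placeholders.
        import numpy as np

        def fit_circle(points):
            """Fit x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense."""
            x, y = points[:, 0], points[:, 1]
            A = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
            center = np.array([-D / 2.0, -E / 2.0])
            radius = np.sqrt(center @ center - F)
            return center, radius

        # Noisy samples around a circle of radius 2 centred at (1, -1).
        t = np.linspace(0, 2 * np.pi, 60)
        pts = np.column_stack([1 + 2 * np.cos(t), -1 + 2 * np.sin(t)])
        pts += 0.05 * np.random.default_rng(1).standard_normal(pts.shape)
        center, radius = fit_circle(pts)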

  19. Maximum Likelihood Methods in Treating Outliers and Symmetrically Heavy-Tailed Distributions for Nonlinear Structural Equation Models with Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Xia, Ye-Mao

    2006-01-01

    By means of more than a dozen user friendly packages, structural equation models (SEMs) are widely used in behavioral, education, social, and psychological research. As the underlying theory and methods in these packages are vulnerable to outliers and distributions with longer-than-normal tails, a fundamental problem in the field is the…

  20. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. PMID:25910958

  1. On the method of the automatic modeling in hydraulic pipe networks

    Institute of Scientific and Technical Information of China (English)

    孙以泽; 徐本洲; 王祖温

    2003-01-01

    In this paper the dynamic characteristics of pipes are analyzed with a frequency-domain method, and a simple and practical description method is put forward. By establishing the model library beforehand, the modeling of the pipe network is completed automatically, and we can accurately calculate the impedance characteristics of the pipe network and achieve a reasonable configuration of the pipe network, so as to decrease the pressure pulsation.

  2. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    DEFF Research Database (Denmark)

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin;

    2012-01-01

    In this study, we proposed a Continuous Time Markov Chain model for the availability of n-node clusters of a Distributed Rendering System. The model is infinite; we formalized it and, based on the model, implemented a software tool which can automatically build the model in the PRISM language. With the tool, whenever the number of nodes n and related parameters vary, we can create the PRISM model file rapidly and then use the PRISM model checker to verify related system properties. At the end of this study, we analyzed and verified the availability distributions of the Distributed Cluster Rendering System...

  3. Prediction model for outcome after low-back surgery: individualized likelihood of complication, hospital readmission, return to work, and 12-month improvement in functional disability.

    Science.gov (United States)

    McGirt, Matthew J; Sivaganesan, Ahilan; Asher, Anthony L; Devin, Clinton J

    2015-12-01

    OBJECT Lumbar spine surgery has been demonstrated to be efficacious for many degenerative spine conditions. However, there is wide variability in outcome after spine surgery at the individual patient level. All stakeholders in spine care will benefit from identification of the unique patient or disease subgroups that are least likely to benefit from surgery, are prone to costly complications, and have increased health care utilization. There remains a large demand for individual patient-level predictive analytics to guide decision support to optimize outcomes at the patient and population levels. METHODS One thousand eight hundred three consecutive patients undergoing spine surgery for various degenerative lumbar diagnoses were prospectively enrolled and followed for 1 year. A comprehensive patient interview and health assessment was performed at baseline and at 3 and 12 months after surgery. All predictive covariates were selected a priori. Eighty percent of the sample was randomly selected for model development, and 20% for model validation. Linear regression was performed with Bayesian model averaging to model 12-month ODI (Oswestry Disability Index). Logistic regression with Bayesian model averaging was used to model likelihood of complications, 30-day readmission, need for inpatient rehabilitation, and return to work. Goodness-of-fit was assessed via R(2) for 12-month ODI and via the c-statistic, area under the receiver operating characteristic curve (AUC), for the categorical endpoints. Discrimination (predictive performance) was assessed, using R(2) for the ODI model and the c-statistic for the categorical endpoint models. Calibration was assessed using a plot of predicted versus observed values for the ODI model and the Hosmer-Lemeshow test for the categorical endpoint models. RESULTS On average, all patient-reported outcomes (PROs) were improved after surgery (ODI baseline vs 12 month: 50.4 vs 29.5%, p offer spine surgery specifically to those who are most

  4. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry

    International Nuclear Information System (INIS)

    The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. The automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated in MCAM 5.2. This method can convert in both directions between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are then generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)

  5. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled ... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results ... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  6. Automatic Navigation for Rat-Robots with Modeling of the Human Guidance

    Institute of Scientific and Technical Information of China (English)

    Chao Sun; Nenggan Zheng; Xinlu Zhang; Weidong Chen; Xiaoxiang Zheng

    2013-01-01

    A bio-robot system refers to an animal equipped with a Brain-Computer Interface (BCI), through which outer stimulation is delivered directly into the animal's brain to control its behaviors. The development of bio-robots suffers from the dependency on real-time guidance by human operators. Because of its inherent difficulties, there is no feasible method for automatic control of bio-robots yet. In this paper, we propose a new method to realize automatic navigation for bio-robots. A General Regression Neural Network (GRNN) is adopted to analyze and model the controlling procedure of human operations. Compared to traditional approaches with explicit controlling rules, our algorithm learns the controlling process and imitates the decision-making of human beings to steer the rat-robot automatically. In real-time navigation experiments, our method successfully controls bio-robots to follow given paths automatically and precisely. This work would be significant for future applications of bio-robots and provides a new way to realize hybrid intelligent systems with artificial intelligence and natural biological intelligence combined.

  7. Automatic Segmentation Framework of Building Anatomical Mouse Model for Bioluminescence Tomography

    OpenAIRE

    Abdullah Alali

    2013-01-01

    Bioluminescence tomography is known to be a highly ill-posed inverse problem. To improve the reconstruction performance by introducing anatomical structures as a priori knowledge, an automatic segmentation framework is proposed in this paper to extract the whole-body organs and tissues of the mouse, which makes it possible to build up a heterogeneous mouse model for the reconstruction of bioluminescence tomography. Finally, an in vivo mouse experiment has been conducted to evaluate this framework by using an X...

  8. On the automatic compilation of e-learning models to planning

    OpenAIRE

    Garrido Tejero, Antonio; Morales, Lluvia; Fernandez, Susana; Onaindia De La Rivaherrera, Eva; Borrajo, Daniel; Castillo, Luis

    2013-01-01

    This paper presents a general approach to automatically compile e-learning models to planning, allowing us to easily generate plans, in the form of learning designs, by using existing domain-independent planners. The idea is to compile, first, a course defined in a standard e-learning language into a planning domain, and, second, a file containing students learning information into a planning problem. We provide a common compilation and extend it to three particular approaches that cover a fu...

  9. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    OpenAIRE

    Fabiano Azevedo DORÇA; Luciano Vieira LIMA; Márcia Aparecida FERNANDES; Carlos Roberto LOPES

    2012-01-01

    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and precisely adjust students' learning styles, based on the non-deterministic and non-stationary aspects of learning styles. Because of the probabilistic an...

  10. Augmented Likelihood Image Reconstruction.

    Science.gov (United States)

    Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M

    2016-01-01

    The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim to reduce these artifacts by incorporating information about shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The afore-mentioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During iterations, temporally appearing artifacts are reduced with a bilateral filter and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
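
    For reference, the augmented Lagrangian device mentioned above has the generic form below for minimizing an objective f(x) subject to equality constraints c(x) = 0; in this setting f plays the role of the (negative) transmission log-likelihood and c encodes the prior knowledge about the implant, although the exact formulation is the paper's. The notation here is the standard one, not copied from the paper.

        \[
        \mathcal{L}_{\rho}(x, \lambda) \;=\; f(x) \;+\; \lambda^{\top} c(x) \;+\; \tfrac{\rho}{2}\, \lVert c(x) \rVert_2^2 ,
        \]

    with alternating minimization over \(x\) and multiplier updates \(\lambda \leftarrow \lambda + \rho\, c(x)\).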

  11. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  12. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on a pre-defined formal grammar and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  13. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a database. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the database. (Author) 4 refs
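
    As an example of the kind of model such macros automate, the widely used CRS (constant rate of supply) model dates the sediment at depth x from the inventory of unsupported 210Pb remaining below that depth; the formula below is the textbook form, given here only for orientation and not taken from the report.

        \[
        t(x) \;=\; \frac{1}{\lambda}\,\ln\!\frac{A(0)}{A(x)},
        \qquad
        \lambda = \frac{\ln 2}{22.3~\mathrm{yr}} \approx 0.031~\mathrm{yr}^{-1},
        \]

    where A(x) is the cumulative unsupported 210Pb activity below depth x and A(0) is the total inventory of the core.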

  14. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
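
    The EM backbone underlying an MCEM-type procedure of this kind can be written compactly; with D the observed data, X the latent system trajectories and θ the kinetic parameters, each iteration maximizes the expected complete-data log-likelihood, the expectation being approximated by simulated trajectories consistent with D. This is generic EM notation, not taken from the paper.

        \[
        \theta^{(k+1)} = \arg\max_{\theta} \; \mathbb{E}_{X \mid D,\, \theta^{(k)}} \big[ \log p(D, X \mid \theta) \big] .
        \]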

  15. Verossimilhança na seleção de modelos para predição espacial Likelihood in the selection of models for spatial prediction

    Directory of Open Access Journals (Sweden)

    Cristiano Nunes Nesi

    2013-04-01

    the area from the available measurements on 48 experimental plots located in Xanxerê/SC, with emphasis on the methodological framework. Choices of covariates in the model and of data transformation define four modeling options to be assessed. The Matérn correlation function was used, evaluated at values 0.5, 1.5 and 2.5 of the smoothness parameter. Models were compared by the maximized logarithm of the likelihood function and also by cross-validation. The model with transformed response variable, including the coordinates of the area as covariates and the value 0.5 for the smoothness parameter, was selected. The cross-validation measures did not add relevant information to the likelihood, and the analysis highlights that care must be taken with globally or locally atypical data, as well as the need for objective choice among different candidate models, which ought to be the focus of geostatistical modeling to ensure results compatible with reality.

  16. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation...

  17. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    Science.gov (United States)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and the Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  18. Automatic sleep staging based on ECG signals using hidden Markov models.

    Science.gov (United States)

    Ying Chen; Xin Zhu; Wenxi Chen

    2015-08-01

    This study is designed to investigate the feasibility of automatic sleep staging using features derived only from the electrocardiography (ECG) signal. The study was carried out using the framework of hidden Markov models (HMMs). The mean and SD values of heart rate (HR) computed from each 30-second epoch served as the features. The two feature sequences were first detrended by ensemble empirical mode decomposition (EEMD), formed into a two-dimensional feature vector, and then converted into code vectors by a vector quantization (VQ) method. The output VQ indexes were utilized to estimate parameters for the HMMs. The proposed model was tested and evaluated on a group of healthy individuals using leave-one-out cross-validation. The automatic sleep staging results were compared with PSG-estimated ones. Results showed accuracies of 82.2%, 76.0%, 76.1% and 85.5% for deep, light, REM and wake sleep, respectively. The findings show that the HR-based HMM approach is feasible for automatic sleep staging and can pave the way for developing a more efficient, robust, and simple sleep staging system suitable for home application. PMID:26736316
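
    A stripped-down analogue of this HR-feature HMM pipeline can be sketched with the hmmlearn package; here a Gaussian-emission HMM stands in for the paper's VQ/discrete-emission model, and the feature matrix is a placeholder for per-epoch mean/SD heart-rate values rather than real recordings.

        # Rough sketch of HMM-based staging from per-epoch heart-rate features.
        # A Gaussian-emission HMM is used as a stand-in for the paper's VQ + discrete HMM;
        # `features` is a placeholder for an (n_epochs, 2) array of mean/SD HR values.
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)
        features = rng.standard_normal((600, 2))       # placeholder 30-s epoch features

        model = hmm.GaussianHMM(n_components=4,        # deep, light, REM, wake
                                covariance_type="full",
                                n_iter=50,
                                random_state=0)
        model.fit(features)                            # unsupervised Baum-Welch training
        states = model.predict(features)               # Viterbi state sequence per epoch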

  19. New semi-automatic ROI setting system for brain PET images based on elastic model

    Energy Technology Data Exchange (ETDEWEB)

    Tanizaki, Naoaki; Okamura, Tetsuya (Sumitomo Heavy Industries Ltd., Kanagawa (Japan). Research and Development Center); Senda, Michio; Toyama, Hinako; Ishii, Kenji

    1994-10-01

    We have developed a semi-automatic ROI setting system for brain PET images. It is based on an elastic network model that fits the standard ROI atlas to the individual brain image. The standard ROI atlas is a set of segments that represent each anatomical region. For the transformation, the operator needs to set only three kinds of distinct anatomical features: a manually determined midsagittal line, a brain contour line determined semi-automatically with the SNAKES algorithm, and a few manually determined specific ROIs used for exact transformation. Improvements in operation time and inter-operator variance were demonstrated in an experiment comparing with conventional manual ROI setting. The operation time was reduced to 50% in almost all cases, and the inter-operator variance was reduced to one seventh in the best case. (author).

  20. Automatic parametrization of implicit solvent models for the blind prediction of solvation free energies

    CERN Document Server

    Wang, Bao; Wei, Guowei

    2016-01-01

    In this work, a systematic protocol is proposed to automatically parametrize implicit solvent models with polar and nonpolar components. The proposed protocol utilizes the classical Poisson model or the Kohn-Sham density functional theory (KSDFT) based polarizable Poisson model for modeling polar solvation free energies. For the nonpolar component, either the standard model of surface area, molecular volume, and van der Waals interactions, or a model with atomic surface areas and molecular volume is employed. Based on the assumption that similar molecules have similar parametrizations, we develop scoring and ranking algorithms to classify solute molecules. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the Poisson model. A large database with 668 experimental data points is utilized to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five s...

  1. LanHEP - a package for automatic generation of Feynman rules in gauge models

    CERN Document Server

    Semenov, A Yu

    1996-01-01

    We consider the general problem of deriving the Feynman rules for matrix elements in momentum representation from a given Lagrangian in coordinate space that is invariant under the transformations of some gauge group. The LanHEP package presented in this paper allows one to define the gauge model Lagrangian in canonical form in a convenient way and then to generate automatically the Feynman rules that can be used in subsequent calculations of physical processes by means of the CompHEP package. A detailed description of LanHEP commands is given and several examples of LanHEP applications (QED, QCD, the Standard Model in the 't Hooft-Feynman gauge) are presented.

  2. Likelihood devices in spatial statistics

    NARCIS (Netherlands)

    Zwet, E.W. van

    2001-01-01

    One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments and

  3. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, consisting of mesh segmentation and mesh unfolding, is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using the exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created textures without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation, and demonstrate that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  4. Automatic geometric modeling, mesh generation and FE analysis for pipelines with idealized defects and arbitrary location

    Energy Technology Data Exchange (ETDEWEB)

    Motta, R.S.; Afonso, S.M.B.; Willmersdorf, R.B.; Lyra, P.R.M. [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Cabral, H.L.D. [TRANSPETRO, Rio de Janeiro, RJ (Brazil); Andrade, E.Q. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Although the Finite Element Method (FEM) has proved to be a powerful tool for predicting the failure pressure of corroded pipes, the generation of good computational models of pipes with corrosion defects can take several days. This makes the computational simulation procedure difficult to apply in practice. The main purpose of this work is to develop a set of computational tools to automatically produce models of pipes with defects, ready to be analyzed with commercial FEM programs, starting from a few parameters that locate the defect or a series of defects and provide their main dimensions. These defects can be internal or external and can assume general spatial locations along the pipe. Idealized rectangular and elliptic geometries can be generated. The tools are based on the MSC.PATRAN pre- and post-processing programs and were written in PCL (Patran Command Language). The program for the automatic generation of models (PIPEFLAW) has a simplified and customized graphical interface, so that an engineer with basic notions of computational simulation with the FEM can rapidly generate models that result in precise and reliable simulations. Some examples of models of pipes with defects generated by the PIPEFLAW system are shown, and the results of numerical analyses done with the tools presented in this work are compared with empirical results. (author)

  5. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Science.gov (United States)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While it is obvious that 3D building models are important components of many applications, economical and automatic techniques for their generation that take advantage of the available multi-sensory data and combine processing strategies are still lacking. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data-driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  6. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    Science.gov (United States)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  7. AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    T. Partovi

    2013-09-01

    Full Text Available Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted considerable interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method that achieves high-quality models and complete automation from such a DSM, this paper proposes a new method based on a model-driven strategy. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of the method is ridge line extraction and the analysis of height values along and perpendicular to the ridge line direction. After pre-processing of the orthorectified data, feature descriptors are extracted from the DSM to improve automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points, and these ridge lines are then refined by matching them or closing gaps. To select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted with Canny edge detection and the parameters of the roof parts are derived. The best model of the detected type is then fitted to the extracted roof faces. Each roof is modelled independently, and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.
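
    The ridge-extraction step described above hinges on fitting lines to noisy candidate points with RANSAC. As a rough illustration of that idea (not the authors' implementation; the point coordinates, iteration count and inlier tolerance below are invented), a minimal 2D RANSAC line fit might look like this:

```python
import numpy as np

def ransac_line(points, n_iter=500, inlier_tol=0.5, rng=None):
    """Fit a 2D line to noisy ridge-candidate points with RANSAC.

    points : (N, 2) array of x, y coordinates.
    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        normal = np.array([-d[1], d[0]])
        dist = np.abs((points - p) @ normal)
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares (PCA) fit on the inlier set.
    sel = points[best_inliers]
    centroid = sel.mean(axis=0)
    _, _, vt = np.linalg.svd(sel - centroid)
    return centroid, vt[0], best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 30, 80)
    ridge = np.c_[t, 0.4 * t + 12] + rng.normal(0, 0.2, (80, 2))
    clutter = rng.uniform(0, 30, (25, 2))   # spurious DSM responses
    centroid, direction, mask = ransac_line(np.vstack([ridge, clutter]))
    print("ridge direction:", direction, "inliers:", int(mask.sum()))
```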

  8. Likelihood analysis of large-scale flows

    CERN Document Server

    Jaffe, A; Jaffe, Andrew; Kaiser, Nick

    1994-01-01

    We apply a likelihood analysis to the data of Lauer & Postman 1994. With P(k) parametrized by (\\sigma_8, \\Gamma), the likelihood function peaks at \\sigma_8\\simeq0.9, \\Gamma\\simeq0.05, indicating at face value very strong large-scale power, though at a level incompatible with COBE. There is, however, a ridge of likelihood such that more conventional power spectra do not seem strongly disfavored. The likelihood calculated using as data only the components of the bulk flow solution peaks at higher \\sigma_8, as suggested by other analyses, but is rather broad. The likelihood incorporating both bulk flow and shear gives a different picture. The components of the shear are all low, and this pulls the peak to lower amplitudes as a compromise. The velocity data alone are therefore {\\em consistent} with models with very strong large scale power which generates a large bulk flow, but the small shear (which also probes fairly large scales) requires that the power would have to be at {\\em very} large scales, which is...

  9. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    Energy Technology Data Exchange (ETDEWEB)

    Mazzurana, M [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Sandrini, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Vaccari, A [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Malacarne, C [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Cristoforetti, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Pontalti, R [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy)

    2003-10-07

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images and frequency-dependent transfer functions to correlate image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity-even in the same tissue-reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  10. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images and frequency-dependent transfer functions to correlate image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity-even in the same tissue-reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight

  11. Automatic, Global and Dynamic Student Modeling in a Ubiquitous Learning Environment

    Directory of Open Access Journals (Sweden)

    Sabine Graf

    2009-03-01

    Full Text Available Ubiquitous learning allows students to learn at any time and any place. Adaptivity plays an important role in ubiquitous learning, aiming at providing students with adaptive and personalized learning material, activities, and information at the right place and the right time. However, for providing rich adaptivity, the student model needs to be able to gather a variety of information about the students. In this paper, an automatic, global, and dynamic student modeling approach is introduced, which aims at identifying and frequently updating information about students’ progress, learning styles, interests and knowledge level, problem solving abilities, preferences for using the system, social connectivity, and current location. This information is gathered in an automatic way, using students’ behavior and actions in different learning situations provided by different components/services of the ubiquitous learning environment. By providing a comprehensive student model, students can be supported by rich adaptivity in every component/service of the learning environment. Furthermore, the information in the student model can help in giving teachers a better understanding about the students’ learning process.

  12. A 6D CAD Model for the Automatic Assessment of Building Sustainability

    Directory of Open Access Journals (Sweden)

    Ping Yung

    2014-08-01

    Full Text Available Current building assessment methods limit themselves to environmental impact, failing to consider the other two aspects of sustainability: the economic and the social. They also tend to be complex and costly to run, and are therefore of limited value in comparing design options. This paper proposes and develops a model for the automatic life-cycle sustainability assessment of a building using the building information modelling (BIM) approach and its enabling technologies. A 6D CAD model is developed which could be used as a design aid instead of a post-construction evaluation tool. 6D CAD includes 3D design as well as a fourth dimension (schedule), a fifth dimension (cost) and a sixth dimension (sustainability). The model can automatically derive quantities (5D), calculate economic (5D and 6D), environmental and social impacts (6D), and evaluate the sustainability performance of alternative design options. The sustainability assessment covers the life-cycle stages of a building, namely material production, construction, operation, maintenance, demolition and disposal.
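
    The core of the 5D/6D step is a roll-up from model quantities to cost and impact totals via unit rates. The sketch below illustrates that calculation only; the element names, unit costs and emission factors are placeholders rather than values from the paper, and a real BIM workflow would read the quantities from the 3D model instead of a hard-coded list.

```python
# Hypothetical bill-of-quantities entries; rates and emission factors are
# placeholders, not values from the paper.
elements = [
    # (name, quantity, unit, unit_cost, kgCO2e_per_unit)
    ("concrete slab", 120.0, "m3", 180.0, 320.0),
    ("steel rebar",     9.5, "t", 1100.0, 1850.0),
    ("brick wall",     640.0, "m2",  55.0,  48.0),
]

def assess(elements):
    """Roll 5D (cost) and 6D (carbon) totals up from 3D quantities."""
    total_cost = sum(q * c for _, q, _, c, _ in elements)
    total_co2 = sum(q * e for _, q, _, _, e in elements)
    return total_cost, total_co2

cost, co2 = assess(elements)
print(f"estimated cost: {cost:,.0f} currency units")
print(f"embodied carbon: {co2 / 1000:,.1f} t CO2e")
```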

  13. Nonparametric (smoothed) likelihood and integral equations

    CERN Document Server

    Groeneboom, Piet

    2012-01-01

    We show that there is an intimate connection between the theory of nonparametric (smoothed) maximum likelihood estimators for certain inverse problems and integral equations. This is illustrated by estimators for interval censoring and deconvolution problems. We also discuss the asymptotic efficiency of the MLE for smooth functionals in these models.

  14. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  15. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines to avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM) is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through the morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra’s algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
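
    The least-cost seamline search can be pictured as Dijkstra's algorithm on a grid whose per-cell cost grows with the DSM/OESM elevation, so the path is steered around tall objects. The following sketch assumes a synthetic elevation grid and a simple cost of 1 + elevation; the paper's actual cost function combines several measurements.

```python
import heapq
import numpy as np

def seamline(cost, start, goal):
    """Least-cost 4-connected path across a cost grid (Dijkstra)."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the seamline.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dsm = rng.uniform(0, 2, (60, 40))   # synthetic overlap-area elevations
    dsm[20:35, 10:30] += 15.0           # a high building block to avoid
    cost = 1.0 + dsm                    # higher ground -> higher cost
    path = seamline(cost, start=(0, 20), goal=(59, 20))
    print("seamline length (cells):", len(path))
```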

  16. Constraint optimization model of a scheduling problem for a robotic arm in automatic systems

    DEFF Research Database (Denmark)

    Kristiansen, Ewa; Smith, Stephen F.; Kristiansen, Morten

    2014-01-01

    are characteristics of the painting process application itself. Unlike spot-welding, painting tasks require movement of the entire robot arm. In addition to minimizing intertask duration, the scheduler must strive to maximize painting quality and the problem is formulated as a multi-objective optimization problem. ... The scheduling model is implemented as a stand-alone module using constraint programming, and integrated with a larger automatic system. The results of a number of simulation experiments with simple parts are reported, both to characterize the functionality of the scheduler and to illustrate the operation...

  17. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues for future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal conditions and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.

  18. A Model for Semi-Automatic Composition of Educational Content from Open Repositories of Learning Objects

    Directory of Open Access Journals (Sweden)

    Paula Andrea Rodríguez Marín

    2014-04-01

    Full Text Available Learning object (LO) repositories are important in building educational content and should allow search, retrieval and composition processes to be carried out successfully to reach educational goals. However, such processes are time-consuming and do not always provide the desired results. The aim of this paper is therefore to propose a model for the semi-automatic composition of LOs that are automatically retrieved from open repositories. For the development of the model, various text similarity measures are discussed, and for calibration and validation comparison experiments were performed against results obtained by teachers. The experiments show that when using a value of k (the number of LOs selected) of at least 3, the percentage of agreement between the model's selections and those made by experts exceeds 75%. In conclusion, the proposed model allows teachers to save time and effort in LO selection by performing a pre-filtering process.

  19. Automatic Assessment of Craniofacial Growth in a Mouse Model of Crouzon Syndrome

    DEFF Research Database (Denmark)

    Thorup, Signe Strann; Larsen, Rasmus; Darvann, Tron Andre;

    2009-01-01

    BACKGROUND & PURPOSE: Crouzon syndrome is characterized by growth disturbances caused by premature craniosynostosis. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels to the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes of Crouzon syndrome as well as providing knowledge relevant for surgery planning. METHODS: Automatic non-rigid volumetric image registration was applied to micro-CT scans of ten 4-week and twenty 6-week euthanized mice for growth modeling. Each age group consisted of 50% normal and 50% Crouzon mice. Four 3D mean shapes, one for each mouse-type and age group were created. Extracting a dense field of growth vectors...

  20. Focusing on media body ideal images triggers food intake among restrained eaters: a test of restraint theory and the elaboration likelihood model.

    Science.gov (United States)

    Boyce, Jessica A; Kuijer, Roeline G

    2014-04-01

    Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images; whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literacy interventions.

  2. Semi-automatic registration of 3D orthodontics models from photographs

    Science.gov (United States)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to compute the registration automatically from photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation that brings the mandible into contact with the maxilla can be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix and the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.

  3. Automatic processing and modeling of GPR data for pavement thickness and properties

    Science.gov (United States)

    Olhoeft, Gary R.; Smith, Stanley S., III

    2000-04-01

    A GSSI SIR-8 with 1 GHz air-launched horn antennas has been modified to acquire data from a moving vehicle. Algorithms have been developed to acquire the data and to automatically calibrate, position, process, and model it with full waveform inversion, without operator intervention. Vehicle suspension bounce (varying antenna height) is compensated automatically. Multiple scans are modeled by full waveform inversion that is remarkably robust and relatively insensitive to noise. Statistical parameters and histograms are generated for the thickness and dielectric permittivity of concrete or asphalt pavements. The statistical uncertainty with which the thickness is determined is given with each thickness measurement, along with the dielectric permittivity of the pavement material and of the subgrade material at each location. Permittivities are then converted into equivalent density and water content. Typical statistical uncertainties in thickness are better than 0.4 cm in 20 cm thick pavement. On a Pentium laptop computer, the data may be processed and modeled so that cross-sectional images and computed pavement thickness are displayed in real time at highway speeds.
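
    For context, air-launched horn measurements are commonly reduced to layer properties with two textbook relations: the surface permittivity from the amplitude ratio against a metal-plate (perfect reflector) calibration, and the thickness from the two-way travel time. The sketch below uses those generic relations with invented amplitudes and travel time; it is not the paper's full-waveform inversion.

```python
import numpy as np

C = 0.2998  # free-space propagation speed, m/ns

def pavement_permittivity(a_surface, a_metal_plate):
    """Estimate the surface dielectric permittivity from the reflection
    amplitude ratio against a metal-plate calibration."""
    r = a_surface / a_metal_plate
    return ((1 + r) / (1 - r)) ** 2

def pavement_thickness(two_way_time_ns, eps_r):
    """Thickness from the two-way travel time between the surface and
    the bottom-of-layer reflections."""
    return C * two_way_time_ns / (2.0 * np.sqrt(eps_r))

# Illustrative numbers only (not data from the paper).
eps = pavement_permittivity(a_surface=0.42, a_metal_plate=1.00)
d = pavement_thickness(two_way_time_ns=3.3, eps_r=eps)
print(f"eps_r = {eps:.2f}, thickness = {100 * d:.1f} cm")
```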

  4. Automatic Gauge Control in Rolling Process Based on Multiple Smith Predictor Models

    Directory of Open Access Journals (Sweden)

    Jiangyun Li

    2014-01-01

    Full Text Available The automatic rolling process is a high-speed system which requires high-speed control and communication capabilities. It is also a typical complex electromechanical system, and distributed control has become the mainstream computer control architecture for rolling mills. Generally, the control system adopts a 2-level structure, basic automation (Level 1) and process control (Level 2), to achieve automatic gauge control. In Level 1, there is always a certain distance between the roll gap of each stand and the thickness testing point, which introduces a time delay into gauge control. The Smith predictor is a method for coping with time-delay systems, but practical feedback control based on the traditional Smith predictor cannot achieve the ideal control result, because the time delay is hard to measure precisely and in some situations it may vary within a certain range. In this paper, based on an adaptive Smith predictor, we employ multiple models to cover the uncertainty in the time delay. The optimal model is selected by the proposed switching mechanism. Simulations show that the proposed multiple Smith model method markedly improves the control result, even for systems with a jumping time delay.

  5. Modeling and automatic feedback control of tremor: adaptive estimation of deep brain stimulation.

    Directory of Open Access Journals (Sweden)

    Muhammad Rehan

    Full Text Available This paper discusses modeling and automatic feedback control of (postural and rest) tremor for adaptive-control-methodology-based estimation of deep brain stimulation (DBS) parameters. The simplest linear oscillator-based tremor model, between stimulation amplitude and tremor, is investigated by utilizing input-output knowledge. Further, a nonlinear generalization of the oscillator-based tremor model, useful for derivation of a control strategy involving incorporation of parametric-bound knowledge, is provided. Using the Lyapunov method, a robust adaptive output feedback control law, based on measurement of the tremor signal from the fingers of a patient, is formulated to estimate the stimulation amplitude required to control the tremor. By means of the proposed control strategy, an algorithm is developed for estimation of DBS parameters such as amplitude, frequency and pulse width, which provides a framework for development of an automatic clinical device for control of motor symptoms. The DBS parameter estimation results for the proposed control scheme are verified through numerical simulations.

  6. MATHEMATICAL MODELING OF THE INPUT DEVICES IN AUTOMATIC LOCOMOTIVE SIGNALING SYSTEM

    Directory of Open Access Journals (Sweden)

    O. O. Gololobova

    2014-03-01

    Full Text Available Purpose. To examine the operation of the automatic locomotive signaling (ALS) system, to determine the influence of external factors on the operation of its devices and on the quality of the code information derived from the track circuit, and to enable modeling of failures that may appear during operation. Methodology. To achieve this purpose, the main disturbances in ALS operation and the reasons for their occurrence were considered and the structural principle of the system was studied. A mathematical model of the input devices of the continuous automatic locomotive signaling system with number coding (ALSN) was developed. It was designed taking into account all types of code signals (“R”, “Y”, “RY”) and the equivalent circuit of the 50 Hz filter. Findings. The operation of ALSN with a signal current frequency of 50 Hz was examined and an adequate mathematical model of the ALS input devices at 50 Hz was developed. Originality. A computer model of the ALS input devices was built in the MATLAB+Simulink environment, and the modeled filter output for every type of code combination is given in the article. Practical value. With the developed mathematical model of ALS operation, the behavior of the circuit can be studied both in normal operation and under failure conditions. It is also possible to develop and test different circuit solutions in the MATLAB+Simulink environment to reduce the influence of disturbances on the functional capability of ALS and to model possible failure scenarios.

  7. Empirical likelihood inference for diffusion processes with jumps

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the empirical likelihood inference for the jump-diffusion model. We construct the confidence intervals based on the empirical likelihood for the infinitesimal moments in the jump-diffusion models. They are better than the confidence intervals which are based on the asymptotic normality of point estimates.

  8. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    Energy Technology Data Exchange (ETDEWEB)

    Aristovich, K Y; Khan, S H, E-mail: kirill.aristovich.1@city.ac.u [School of Engineering and Mathematical Sciences, City University London, Northampton Square, London EC1V 0HB (United Kingdom)

    2010-07-01

    Realistic computer modelling of biological objects requires building very accurate and realistic models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computing the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI). The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format, which allows it to be used in a wide range of analysis methods, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte Carlo simulations, etc. The generic model building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  9. Semi-Automatic Building Models and FAÇADE Texture Mapping from Mobile Phone Images

    Science.gov (United States)

    Jeong, J.; Kim, T.

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling has increased rapidly due to improved geo-web services and the popularity of smart devices. 3D urban models provided by, for example, Google Earth currently use aerial photos, but there are some limitations: immediately updating changed building models is difficult, many buildings have no 3D model or texture, and large resources for maintenance and updating are inevitable. To resolve these limitations, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated with this method were compared with actual measured values of real buildings by comparing the ratios of corresponding edge lengths, which showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  10. Automatic versus manual model differentiation to compute sensitivities and solve non-linear inverse problems

    Science.gov (United States)

    Elizondo, D.; Cappelaere, B.; Faure, Ch.

    2002-04-01

    Emerging tools for automatic differentiation (AD) of computer programs should be of great benefit for the implementation of many derivative-based numerical methods such as those used for inverse modeling. The Odyssée software, one such tool for Fortran 77 codes, has been tested on a sample model that solves a 2D non-linear diffusion-type equation. Odyssée offers both the forward and the reverse differentiation modes, that produce the tangent and the cotangent models, respectively. The two modes have been implemented on the sample application. A comparison is made with a manually-produced differentiated code for this model (MD), obtained by solving the adjoint equations associated with the model's discrete state equations. Following a presentation of the methods and tools and of their relative advantages and drawbacks, the performances of the codes produced by the manual and automatic methods are compared, in terms of accuracy and of computing efficiency (CPU and memory needs). The perturbation method (finite-difference approximation of derivatives) is also used as a reference. Based on the test of Taylor, the accuracy of the two AD modes proves to be excellent and as high as machine precision permits, a good indication of Odyssée's capability to produce error-free codes. In comparison, the manually-produced derivatives (MD) sometimes appear to be slightly biased, which is likely due to the fact that a theoretical model (state equations) and a practical model (computer program) do not exactly coincide, while the accuracy of the perturbation method is very uncertain. The MD code largely outperforms all other methods in computing efficiency, a subject of current research for the improvement of AD tools. Yet these tools can already be of considerable help for the computer implementation of many numerical methods, avoiding the tedious task of hand-coding the differentiation of complex algorithms.
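
    The test of Taylor mentioned above checks that the first-order Taylor remainder of the objective shrinks quadratically when the step is halved, which is a standard way to validate a tangent or adjoint code. A minimal sketch, using an arbitrary toy function rather than the diffusion model of the paper:

```python
import numpy as np

def f(x):
    """Sample nonlinear scalar function of a vector argument."""
    return np.sum(np.sin(x) ** 2 + 0.5 * x ** 4)

def grad_f(x):
    """Hand-coded analytic gradient of f (the quantity under test)."""
    return 2.0 * np.sin(x) * np.cos(x) + 2.0 * x ** 3

def taylor_test(f, grad, x, v, steps=6):
    """|f(x + h v) - f(x) - h grad.v| should shrink as O(h^2)."""
    g_dot_v = grad(x) @ v
    f0 = f(x)
    h, prev = 1e-1, None
    for _ in range(steps):
        rem = abs(f(x + h * v) - f0 - h * g_dot_v)
        rate = None if prev is None else np.log2(prev / rem)
        print(f"h = {h:8.1e}   remainder = {rem:10.3e}   observed order = {rate}")
        prev, h = rem, h / 2.0
    # An observed order close to 2 indicates a consistent gradient.

rng = np.random.default_rng(0)
x = rng.normal(size=5)
v = rng.normal(size=5)
taylor_test(f, grad_f, x, v)
```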

  11. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
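
    The composite-likelihood idea, taking a product of marginal likelihoods over genomic regions as if they were independent, can be illustrated with a deliberately simple stand-in model: per-region counts treated as Poisson with mean proportional to a single parameter. The model and the numbers below are invented for illustration and are much simpler than the allopatric-divergence model of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Toy data: segregating-site counts and locus lengths (bp) for many regions.
rng = np.random.default_rng(2)
lengths = rng.integers(5_000, 20_000, size=200)
theta_true = 1.2e-3                       # per-bp mutation parameter (assumed)
counts = rng.poisson(theta_true * lengths)

def composite_negloglik(theta):
    """Composite likelihood: product of per-region marginal likelihoods,
    here Poisson(theta * length) with each region treated independently."""
    if theta <= 0:
        return np.inf
    return -np.sum(poisson.logpmf(counts, theta * lengths))

res = minimize_scalar(composite_negloglik, bounds=(1e-6, 1e-1), method="bounded")
print(f"composite-likelihood estimate of theta: {res.x:.2e} (true {theta_true:.2e})")
```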

  12. Invariants and Likelihood Ratio Statistics

    OpenAIRE

    McCullagh, P.; Cox, D. R.

    1986-01-01

    Because the likelihood ratio statistic is invariant under reparameterization, it is possible to make a large-sample expansion of the statistic itself and of its expectation in terms of invariants. In particular, the Bartlett adjustment factor can be expressed in terms of invariant combinations of cumulants of the first two log-likelihood derivatives. Such expansions are given, first for a scalar parameter and then for vector parameters. Geometrical interpretation is given where possible and s...

  13. Automatic Generation of Building Models with Levels of Detail 1-3

    Science.gov (United States)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  14. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    CERN Document Server

    Fujimoto, J

    2003-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model (SM) of the electroweak and strong interactions. For a given MSSM process the Feynman graphs and amplitudes at tree-level are automatically created. The Monte-Carlo phase space integration by means of BASES gives the total and differential cross sections. When combined with SPRING, an event generator, the program package provides us with the simulation of SUSY particle production.

  15. A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing

    Institute of Scientific and Technical Information of China (English)

    Yong Wu; Arun Kumar

    2012-01-01

    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors and the workload is automatically and evenly distributed among all processors by alternative message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of computation so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.

  16. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Richard Washington

    2008-11-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks) are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
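
    The classification step amounts to scoring an observation sequence under competing HMMs and picking the best-scoring model. The sketch below uses the scaled forward algorithm for plain first-order HMMs (the paper uses second-order models), with made-up transition and emission parameters for two hypothetical features.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm).

    pi : (S,) initial state probabilities
    A  : (S, S) transition matrix, A[i, j] = P(j | i)
    B  : (S, V) emission matrix over V observation symbols
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Two illustrative 2-state models, e.g. "open door" vs "T-intersection"
# signatures over 3 quantised range-sensor symbols (made-up parameters).
pi = np.array([0.6, 0.4])
A_door = np.array([[0.9, 0.1], [0.2, 0.8]])
B_door = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
A_tint = np.array([[0.5, 0.5], [0.5, 0.5]])
B_tint = np.array([[0.2, 0.6, 0.2], [0.3, 0.4, 0.3]])

seq = np.array([0, 0, 1, 2, 2, 2, 1, 0])   # quantised sensor readings
scores = {
    "door": forward_loglik(seq, pi, A_door, B_door),
    "t-intersection": forward_loglik(seq, pi, A_tint, B_tint),
}
print("detected feature:", max(scores, key=scores.get), scores)
```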

  17. Modeling Technology for Automatic Test System Software Based on the Automatic Test Markup Language (ATML) Standard

    Institute of Scientific and Technical Information of China (English)

    杨占才; 王红; 范利花; 张桂英; 杨小辉

    2013-01-01

      The architecture of the ATML standard and the description methods for all of the sub-models that make up the standard are discussed. Technical approaches for making an existing automatic test system (ATS) software platform compatible with the ATML standard are proposed, including the design of the modeling flow, model identification, and the design of the model execution flow. This lays the technical foundation for a generic and open ATS software platform and for sharing test resources across all maintenance levels of weapon equipment.

  18. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Yu Guo

    2014-01-01

    Full Text Available The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues and has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations of the fused images were performed with the proposed method and manually by an experienced radiation oncologist. The segmentation results obtained with the two methods were similar, with a Dice similarity coefficient (DSC) of 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  19. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    Full Text Available This paper presents a novel method to determine the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of a transmission line. First, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files produced by executing the ATP simulation model, and the occurrence of transmission line breakdown is determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by repeatedly adjusting the lightning current amplitude, which is realized by a loop algorithm coded in MATLAB. The method proposed in this paper generates the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
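
    The amplitude-adjustment loop is essentially a bisection search for the smallest current that causes breakdown. A minimal sketch of that loop, with a placeholder predicate standing in for the ATP run and LIS-file parsing:

```python
def breakdown_occurs(amplitude_ka):
    """Placeholder for one ATP run at the given lightning current amplitude.
    In the real workflow this would write the ATP deck, execute it and parse
    the resulting LIS file; here a fixed threshold stands in for that step."""
    return amplitude_ka >= 37.4          # hypothetical critical value

def initial_breakdown_current(lo=1.0, hi=300.0, tol=0.1):
    """Bisection on the lightning current amplitude (kA): raise it when no
    breakdown occurs, lower it when breakdown occurs, as in the paper's loop."""
    if breakdown_occurs(lo) or not breakdown_occurs(hi):
        raise ValueError("critical current not bracketed by [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if breakdown_occurs(mid):
            hi = mid                      # breakdown: reduce the amplitude
        else:
            lo = mid                      # no breakdown: increase the amplitude
    return hi

print(f"initial breakdown current ~ {initial_breakdown_current():.1f} kA")
```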

  20. Vestige: Maximum likelihood phylogenetic footprinting

    Directory of Open Access Journals (Sweden)

    Maxwell Peter

    2005-05-01

    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  1. Likelihood-based inference with singular information matrix

    OpenAIRE

    Rotnitzky, Andrea; David R Cox; Bottai, Matteo; Robins, James

    2000-01-01

    We consider likelihood-based asymptotic inference for a p-dimensional parameter θ of an identifiable parametric model with singular information matrix of rank p-1 at θ=θ* and likelihood differentiable up to a specific order. We derive the asymptotic distribution of the likelihood ratio test statistics for the simple null hypothesis that θ=θ* and of the maximum likelihood estimator (MLE) of θ when θ=θ*. We show that there exists a reparametrization such that the MLE of the last p-1 components ...

  2. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Venkat; Das, Trishna

    2016-05-01

    Increasing penetration of variable generation and the consequent increase in short-term variability make energy storage technologies attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits of storage technologies in providing regulation service. The paper also presents the slow dynamics model of the power system integrated with storage technologies in complete state-space form. The storage technologies have been integrated into the single-area IEEE 24-bus system, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.

  3. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    DEFF Research Database (Denmark)

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune;

    2014-01-01

    Background: The gold standard for sleep classification uses manual scoring of polysomnography despite points of criticism such as oversimplification, low inter-rater reliability and the standard being designed on young and healthy subjects. New method: To meet the criticism and reveal the latent sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model Latent Dirichlet Allocation. Model application was tested on control subjects and patients with periodic leg movements (PLM) representing a non-neurodegenerative group, and patients with idiopathic REM sleep behavior disorder (iRBD) and Parkinson's Disease (PD) representing a neurodegenerative group.
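
    A reasonable stand-in for the topic-model step is Latent Dirichlet Allocation applied to a count matrix of discretised EEG/EOG feature "words" per epoch, so that each epoch comes out as a mixture over latent states. The sketch below uses scikit-learn's LatentDirichletAllocation on random counts; the feature vocabulary, the counts and the number of states are assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in data: each row is one epoch, each column the count of a discretised
# EEG/EOG feature "word" (e.g. binned spectral power levels).
rng = np.random.default_rng(0)
n_epochs, n_words, n_states = 2000, 40, 5
X = rng.poisson(lam=rng.uniform(0.5, 3.0, size=(1, n_words)),
                size=(n_epochs, n_words))

lda = LatentDirichletAllocation(n_components=n_states, random_state=0)
lda.fit(X)

# Each epoch is expressed as a mixture of latent "sleep state" probabilities.
mixtures = lda.transform(X)      # shape (n_epochs, n_states), rows sum to 1
print("first epoch state mixture:", np.round(mixtures[0], 3))
```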

  4. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    Science.gov (United States)

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin

    2012-03-01

    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg both with a command line and Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed a superb reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.
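
    The shape representation underlying the model is the complex Fourier descriptor of a closed 2D contour: contour points are encoded as complex numbers, transformed with the FFT, and truncated to a few low-order coefficients. The sketch below applies this to a noisy synthetic ellipse standing in for a mid-sagittal corpus callosum outline; it does not reproduce the constrained deformation or the principal component shape space.

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=16):
    """Complex Fourier descriptors of a closed 2D contour and its
    reconstruction from the first n_coeffs low-order coefficients."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # points as complex numbers
    coeffs = np.fft.fft(z) / len(z)
    # Keep only the lowest positive and negative frequencies, zero the rest.
    kept = np.zeros_like(coeffs)
    kept[: n_coeffs // 2] = coeffs[: n_coeffs // 2]
    kept[-(n_coeffs // 2):] = coeffs[-(n_coeffs // 2):]
    recon = np.fft.ifft(kept) * len(z)
    return coeffs, np.c_[recon.real, recon.imag]

# A noisy ellipse as a stand-in for a mid-sagittal corpus callosum outline.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
noise = np.random.default_rng(3).normal(0, 0.5, (256, 2))
contour = np.c_[40 * np.cos(t), 15 * np.sin(t)] + noise
coeffs, smooth = fourier_descriptors(contour, n_coeffs=10)
err = np.linalg.norm(contour - smooth, axis=1).mean()
print(f"mean reconstruction error with 10 coefficients: {err:.2f}")
```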

  5. AUTOMATIC TAGGING OF PERSIAN WEB PAGES BASED ON N-GRAM LANGUAGE MODELS USING MAPREDUCE

    Directory of Open Access Journals (Sweden)

    Saeed Shahrivari

    2015-07-01

    Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval in the web. Tags are simple pieces of data that usually consist of one or several words, and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and some automatic solutions should be used instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate
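
    The tag-selection step can be approximated by ranking a page's unigrams and bigrams by tf-idf weight and keeping the top few. The sketch below does this with scikit-learn on a tiny English corpus; the Persian-specific language rules and the MapReduce distribution described in the paper are omitted, and the example pages are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny illustrative corpus standing in for crawled web pages.
pages = [
    "solar power plants convert sunlight into electricity using panels",
    "football league results and championship fixtures for the season",
    "electricity grid operators balance solar and wind power generation",
]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(pages)          # (n_pages, n_terms), sparse
terms = np.array(vectorizer.get_feature_names_out())

def tags_for_page(row, k=3):
    """Pick the k terms with the highest tf-idf weight as the page's tags."""
    weights = row.toarray().ravel()
    return terms[np.argsort(weights)[::-1][:k]].tolist()

for i in range(len(pages)):
    print(f"page {i}: {tags_for_page(tfidf[i])}")
```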

  6. Model design and simulation of automatic sorting machine using proximity sensor

    Directory of Open Access Journals (Sweden)

    Bankole I. Oladapo

    2016-09-01

    Full Text Available Automatic sorting has been reported to be a complex, global problem, largely because of the inability of sorting machines to incorporate flexibility in their design concept. This research therefore designed and developed an automated object-sorting system based on a conveyor belt. The developed sorting machine incorporates flexibility: it separates objects of different materials and at the same time moves them automatically to the appropriate basket, as defined by the Programmable Logic Controller (PLC) program, with a capacitive proximity sensor used to detect a value range for each object. The results obtained show that plastic, wood, and steel objects were sorted into their respective, correct positions with average sorting times of 9.903 s, 14.072 s and 18.648 s, respectively. The developed model could be adopted by any institution or industry whose practice is based on mechatronic engineering systems, both to guide the industrial sector in object sorting and as a teaching aid for institutions, producing a list of classified materials according to the active sorting program commands.

  7. Growing local likelihood network: Emergence of communities

    Science.gov (United States)

    Chen, S.; Small, M.

    2015-10-01

    In many real situations, networks grow only via local interactions. New nodes are added to the growing network with information only pertaining to a small subset of existing nodes. Multilevel marketing, social networks, and disease models can all be depicted as growing networks based on local (network path-length) distance information. In these examples, all nodes whose distance from a chosen center is less than d form a subgraph. Hence, we grow networks with information only from these subgraphs. Moreover, we use a likelihood-based method, where at each step we modify the networks by changing their likelihood to be closer to the expected degree distribution. Combining the local information and the likelihood method, we grow networks that exhibit novel features. We discover that the likelihood method, over certain parameter ranges, can generate networks with highly modulated communities, even when global information is not available. Communities and clusters are abundant in real-life networks, and the method proposed here provides a natural mechanism for the emergence of communities in scale-free networks. In addition, the algorithmic implementation of network growth via local information is substantially faster than global methods and allows for the exploration of much larger networks.

  8. Development of Monte Carlo automatic modeling functions of MCAM for TRIPOLI-ITER application

    Science.gov (United States)

    Lu, L.; Lee, Y. K.; Zhang, J. J.; Li, Y.; Zeng, Q.; Wu, Y. C.

    2009-07-01

    TRIPOLI is a Monte Carlo particle transport code simulating the three-dimensional transport of neutrons and photons, and it can be used for many applications to nuclear devices with complex geometries; however, modeling of a complex geometry is a time-consuming and error-prone task. The recently developed functions of the Monte Carlo Automatic Modeling (MCAM) system, an interface code that facilitates Monte Carlo modeling by employing CAD technology, have implemented the bidirectional conversion between the CAD model and the TRIPOLI computation model. In this study, the different geometric representations of the CAD system and the TRIPOLI code and the methodology of bidirectional conversion between them were introduced. A TRIPOLI input file of the International Thermonuclear Experimental Reactor (ITER) benchmark model, which was distributed to validate Monte Carlo modeling tools, was created and applied to simulate D-T fusion neutron source sampling and to calculate the first wall loading. The results were then compared with those of the Monte Carlo N-Particle (MCNP) code, and the good agreement demonstrates the feasibility and validity of the approach.

  9. On the likelihood of forests

    Science.gov (United States)

    Shang, Yilun

    2016-08-01

    How complex a network is crucially impacts its function and performance. In many modern applications, the networks involved have a growth property and sparse structures, which pose challenges to physicists and applied mathematicians. In this paper, we introduce the forest likelihood as a plausible measure to gauge how difficult it is to construct a forest in a non-preferential attachment way. Based on the notions of admittable labeling and path construction, we propose algorithms for computing the forest likelihood of a given forest. Concrete examples as well as the distributions of forest likelihoods for all forests with some fixed numbers of nodes are presented. Moreover, we illustrate the ideas on real-life networks, including a benzenoid tree, a mathematical family tree, and a peer-to-peer network.

  10. Hierarchical Model-Based Activity Recognition With Automatic Low-Level State Discovery

    Directory of Open Access Journals (Sweden)

    Justin Muncaster

    2007-09-01

    Full Text Available Activity recognition in video streams is increasingly important for both the computer vision and artificial intelligence communities. Activity recognition has many applications in security and video surveillance. Ultimately, in such applications one wishes to recognize complex activities, which can be viewed as combinations of simple activities. In this paper, we present a general framework of a D-level dynamic Bayesian network to perform complex activity recognition. The levels of the network are constrained to enforce a state hierarchy, while the D-th level models the duration of the simplest events. Moreover, in this paper we propose to use the deterministic annealing clustering method to automatically define the simple activities, which correspond to the low-level states of the observable levels in a dynamic Bayesian network. We used real data sets for the experiments. The experimental results show the effectiveness of our proposed method.

  11. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    CERN Document Server

    Melnikova, N B; Sloot, P M A

    2012-01-01

    The paper describes the concept and implementation details of integrating a finite element module for dike stability analysis, Virtual Dike, into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case - an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic calibration of diffusivities for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in a one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of the diffusivities.
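
    As an illustration of how such initial diffusivity estimates can be generated from tidal signals, the sketch below applies the classical amplitude-damping solution for a sinusoidally forced one-dimensional saturated aquifer (the Ferris-type relation). It is a generic Python illustration under that textbook assumption, not the calibration algorithm implemented in Virtual Dike; the sensor distance and amplitudes are invented.

```python
import math

def initial_diffusivity(x, amp_boundary, amp_sensor, tidal_period_s=12.42 * 3600):
    """Estimate hydraulic diffusivity D [m^2/s] from tidal amplitude damping.

    For 1D flow in a homogeneous saturated aquifer driven by a sinusoidal
    boundary, the amplitude decays as A(x) = A0 * exp(-x * sqrt(omega / (2*D))),
    so D follows from the observed amplitude ratio at distance x.
    """
    omega = 2.0 * math.pi / tidal_period_s            # tidal angular frequency
    damping = math.log(amp_boundary / amp_sensor)     # = x * sqrt(omega / (2*D))
    return omega * x ** 2 / (2.0 * damping ** 2)

# A pore pressure sensor 15 m into the levee sees a 0.12 m oscillation
# for a 1.0 m semi-diurnal tide at the boundary.
print(initial_diffusivity(x=15.0, amp_boundary=1.0, amp_sensor=0.12))
```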

  12. Automatic generation of virtual worlds from architectural and mechanical CAD models

    International Nuclear Information System (INIS)

    Accelerator projects like the XFEL or the planned linear collider TESLA involve extensive architectural and mechanical design work, resulting in a variety of CAD models. The CAD models show different parts of the project, e.g. the different accelerator components or parts of the building complexes, and they are created and stored by different groups in different formats. A complete CAD model of the accelerator and its buildings is thus difficult to obtain and would also be extremely large and difficult to handle. This thesis describes the design and prototype development of a tool which automatically creates virtual worlds from different CAD models. The tool enables the user to select a required area for visualization on a map, and then creates a 3D model of the selected area which can be displayed in a web browser. The thesis first discusses the system requirements and provides some background on data visualization. Then, it introduces the system architecture, the algorithms and the technologies used, and finally demonstrates the capabilities of the system using two case studies. (orig.)

  13. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    Science.gov (United States)

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek

    2016-02-01

    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/. PMID:26894674
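
    The dead-end elimination idea behind the conformer search can be sketched with the classical elimination criterion: a rotamer whose best possible energy is still worse than the worst case of some competitor can never be part of the global optimum. The Python toy below is a generic illustration with hypothetical energy tables, not Fitmunk's energy function or implementation.

```python
def dee_eliminate(self_E, pair_E):
    """One pass of the classical dead-end elimination criterion.

    Rotamer r at position i is eliminated if some competitor t satisfies
        E(i_r) + sum_j min_s E(i_r, j_s)  >  E(i_t) + sum_j max_s E(i_t, j_s).

    self_E[i][r]       : self energy of rotamer r at position i
    pair_E[i][r][j][s] : pairwise energy between rotamers (i, r) and (j, s)
    """
    eliminated = set()
    for i, rotamers in self_E.items():
        others = [j for j in self_E if j != i]
        for r in rotamers:
            best_r = self_E[i][r] + sum(min(pair_E[i][r][j].values()) for j in others)
            for t in rotamers:
                if t == r:
                    continue
                worst_t = self_E[i][t] + sum(max(pair_E[i][t][j].values()) for j in others)
                if best_r > worst_t:
                    eliminated.add((i, r))
                    break
    return eliminated

# Two positions with two rotamers each; rotamer 'b' at position 0 is a dead end.
self_E = {0: {"a": 0.0, "b": 5.0}, 1: {"x": 0.0, "y": 1.0}}
pair_E = {0: {"a": {1: {"x": 0.0, "y": 0.5}}, "b": {1: {"x": 2.0, "y": 3.0}}},
          1: {"x": {0: {"a": 0.0, "b": 2.0}}, "y": {0: {"a": 0.5, "b": 3.0}}}}
print(dee_eliminate(self_E, pair_E))   # {(0, 'b')}
```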

  15. AUTOMATIC TOPOLOGY DERIVATION FROM IFC BUILDING MODEL FOR IN-DOOR INTELLIGENT NAVIGATION

    Directory of Open Access Journals (Sweden)

    S. J. Tang

    2015-05-01

    Full Text Available With the goal of achieving accurate navigation within the building environment, it is critical to explore a feasible way of building the connectivity relationships among 3D geographical features, called the in-building topology network. Traditional topology construction approaches for indoor space are usually based on 2D maps or pure geometry models, which suffer from insufficient information. In particular, intelligent navigation for different applications depends mainly on the precise geometry and semantics of the navigation network. The problems caused by existing topology construction approaches can be alleviated by employing the IFC building model, which contains detailed semantic and geometric information. In this paper, we present a method which combines a straight medial axis transformation algorithm (S-MAT) with the IFC building model to reconstruct the indoor geometric topology network. This derived topology is aimed at facilitating decision making for different in-building navigation applications. In this work, we describe a multi-step derivation process, including semantic cleaning, walkable feature extraction, multi-storey 2D mapping and S-MAT implementation, to automatically generate topology information from existing indoor building model data given in IFC.

  16. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system. Consequently, it stands to reason that multiobjective functions are needed for the calibration of hydrologic models. Multiobjective algorithms based on the theory of nondominance are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding distance assigner is presented in this paper. Furthermore, the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set. A novel diversity performance metric, which is independent of the Pareto set size, is put forward in this research. The efficacy of the new algorithm MODE-CMCS is compared with that of the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for automatic calibration of the hydrologic model.
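
    The nondominance and crowding-distance concepts used by such calibrators can be sketched generically, assuming all objectives are to be minimised. The Python code below is a minimal illustration of these two building blocks, not the MODE-CMCS or NSGA-II implementation evaluated in the paper.

```python
import numpy as np

def nondominated(F):
    """Boolean mask of Pareto-nondominated rows of an (n_points, n_obj) array."""
    mask = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominators.any():
            mask[i] = False
    return mask

def crowding_distance(F):
    """Classic crowding distance of each point within one Pareto front."""
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = float(F[order[-1], k] - F[order[0], k]) or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf        # keep boundary points
        dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist

F = np.array([[1.0, 4.0], [2.0, 2.5], [3.0, 1.0], [3.0, 3.0]])   # objective values
front = F[nondominated(F)]
print(front)
print(crowding_distance(front))
```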

  17. Automatic Segmentation of Wrist Bones in CT Using a Statistical Wrist Shape + Pose Model.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Rasoulian, Abtin; Seitel, Alexander; Darras, Kathryn; Wilson, David; John, Paul St; Pichora, David; Mousavi, Parvin; Rohling, Robert; Abolmaesumi, Purang

    2016-08-01

    Segmentation of the wrist bones in CT images has been frequently used in different clinical applications including arthritis evaluation, bone age assessment and image-guided interventions. The major challenges include non-uniformity and spongy textures of the bone tissue as well as narrow inter-bone spaces. In this work, we propose an automatic wrist bone segmentation technique for CT images based on a statistical model that captures the shape and pose variations of the wrist joint across 60 example wrists at nine different wrist positions. To establish the correspondences across the training shapes at neutral positions, the wrist bone surfaces are jointly aligned using a group-wise registration framework based on a Gaussian Mixture Model. Principal component analysis is then used to determine the major modes of shape variations. The variations in poses not only across the population but also across different wrist positions are incorporated in two pose models. An intra-subject pose model is developed by utilizing the similarity transforms at all wrist positions across the population. Further, an inter-subject pose model is used to model the pose variations across different wrist positions. For segmentation of the wrist bones in CT images, the developed model is registered to the edge point cloud extracted from the CT volume through an expectation maximization based probabilistic approach. Residual registration errors are corrected by application of a non-rigid registration technique. We validate the proposed segmentation method by registering the wrist model to a total of 66 unseen CT volumes of average voxel size of 0.38 mm. We report a mean surface distance error of 0.33 mm and a mean Jaccard index of 0.86. PMID:26890640

  18. Shrinkage Effect in Ancestral Maximum Likelihood

    CERN Document Server

    Mossel, Elchanan; Steel, Mike

    2008-01-01

    Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a t...

  19. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and ...
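
    Under white Gaussian noise, such a maximum likelihood pitch estimator reduces to (approximate) nonlinear least squares: for each candidate fundamental frequency, project every channel onto a harmonic basis and sum the explained energy across channels. The sketch below follows that generic recipe with a synthetic two-channel signal; it is an illustration of the principle, not the estimator published in the paper.

```python
import numpy as np

def multichannel_pitch(channels, fs, f0_grid, n_harmonics=5):
    """Grid search for the fundamental frequency shared across channels.

    For each candidate f0, the harmonic basis Z(f0) is formed and the cost
    sum_c ||Z^H x_c||^2 (harmonic energy summed over channels) is maximised.
    """
    N = len(channels[0])
    t = np.arange(N) / fs
    best_f0, best_score = None, -np.inf
    for f0 in f0_grid:
        Z = np.exp(2j * np.pi * f0 * np.outer(t, np.arange(1, n_harmonics + 1)))
        score = sum(np.sum(np.abs(Z.conj().T @ x) ** 2) for x in channels)
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

fs = 8000
t = np.arange(2048) / fs
x1 = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
x2 = 0.3 * np.sin(2 * np.pi * 220 * t + 1.0) + 0.05 * np.random.randn(t.size)
print(multichannel_pitch([x1, x2], fs, np.arange(150.0, 400.0, 1.0)))  # at or near 220 Hz
```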

  20. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Science.gov (United States)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide swath altimetry mission will provide a global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (like river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge on the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered as a computationally cost-saving alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  1. Modelling the adoption of automatic milking systems in Noord-Holland

    Directory of Open Access Journals (Sweden)

    Matteo Floridi

    2013-05-01

    Full Text Available Innovation and new technology adoption represent two central elements for the business and industry development process in agriculture. One of the most relevant innovations in dairy farms is the robotisation of the milking process through the adoption of Automatic Milking Systems (AMS). The purpose of this paper is to assess the impact of selected Common Agricultural Policy measures on the adoption of AMS in dairy farms. The model developed is a dynamic farm-household model that is able to simulate the adoption of AMS taking into account the allocation of productive factors between on-farm and off-farm activities. The model simulates the decision to replace a traditional milking system with AMS using a Real Options approach that allows farmers to choose the optimal timing of investments. Results show that the adoption of AMS, and the timing of such a decision, is strongly affected by policy uncertainty and market conditions. The effect of this uncertainty is to postpone the decision to adopt the new technology until farmers have gathered enough information to reduce the negative effects of the technological lock-in. AMS adoption results in an increase in farm size and herd size due to the reduction in the labour required for milking operations.

  2. Automatic Sex Determination of Skulls Based on a Statistical Shape Model

    Directory of Open Access Journals (Sweden)

    Li Luo

    2013-01-01

    Full Text Available Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments have relied on subjective visual analysis by anthropologists or on metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed, which projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metrical and morphological methods. It is easy to use without professional qualification or tedious manual measurement. With a group of Chinese skulls including 127 males and 81 females, we chose 92 males and 58 females to establish the discriminant model and validated the model with the remaining skulls. The correct rate is 95.7% and 91.4% for females and males, respectively. A leave-one-out test also shows that the method has a high accuracy.
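
    The projection-plus-discrimination pipeline described above corresponds to a standard PCA followed by Fisher linear discriminant analysis. The scikit-learn sketch below illustrates that combination on synthetic stand-in data, assuming each skull has already been encoded as a fixed-length vector of corresponding shape coordinates; it is not the authors' statistical shape model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for flattened skull shape vectors (n_skulls, n_coordinates).
X = np.vstack([rng.normal(0.0, 1.0, (92, 300)),        # "male" training group
               rng.normal(0.4, 1.0, (58, 300))])       # "female" training group
y = np.array([0] * 92 + [1] * 58)

# Project into a low-dimensional shape space, then classify with Fisher LDA.
model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {accuracy:.3f}")
```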

  3. EXPERIMENTS WITH UAS IMAGERY FOR AUTOMATIC MODELING OF POWER LINE 3D GEOMETRY

    Directory of Open Access Journals (Sweden)

    G. Jóźków

    2015-08-01

    Full Text Available The ideal mapping technology for transmission line inspection is airborne LiDAR executed from helicopter platforms. It allows for full 3D geometry extraction in a highly automated manner. Large scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still expensive. UAS technology has the potential to reduce these costs, especially if using inexpensive platforms with consumer grade cameras. This study investigates the potential of using high resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of this experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points are also created on the wires. This allowed the 3D geometry of transmission lines to be modeled similarly to LiDAR acquired point clouds. Results showed that transmission line modeling is possible with a high internal accuracy for both horizontal and vertical directions, even when wires were represented by a partial (sparse) point cloud.

  4. Calibration of the Hydrological Simulation Program Fortran (HSPF) model using automatic calibration and geographical information systems

    Science.gov (United States)

    Al-Abed, N. A.; Whiteley, H. R.

    2002-11-01

    Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran model, is a major challenge. This paper describes calibration procedures for water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km2. Calibration efforts were directed to those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged from 4 to 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m3 s-1 and the average measured discharge at Brantford was 70 m3 s-1. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.

  5. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    Science.gov (United States)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Thus, machine learning techniques can be used in automatic processing for classifying the huge database of seismic recordings and providing more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. It is expected to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquake, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support
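
    A generic version of the classification stage can be sketched with scikit-learn: extract a feature vector per detected event and train a kernel SVM to separate earthquakes from quarry blasts. The code below uses synthetic features and hypothetical class centres; it is only an illustration of the approach, not the IMS processing pipeline described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Hypothetical per-event feature vectors (e.g. spectral ratios, peak amplitudes).
X_quake = rng.normal(loc=[2.0, 1.0, 0.5], scale=0.6, size=(120, 3))
X_blast = rng.normal(loc=[1.0, 2.0, 1.5], scale=0.6, size=(80, 3))
X = np.vstack([X_quake, X_blast])
y = np.array([0] * 120 + [1] * 80)          # 0 = earthquake, 1 = quarry blast

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```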

  6. EMPIRICAL LIKELIHOOD RATIO CONFIDENCE INTERVALS FOR VARIOUS DIFFERENCES OF TWO POPULATIONS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Recently the empirical likelihood has been shown to be very useful in nonparametric models. Qin combined the empirical likelihood approach and the parametric likelihood method to construct confidence intervals for the difference of two population means in a semiparametric model. In this paper, we use the empirical likelihood approach to construct confidence intervals for some differences of two populations in a nonparametric model. A version of Wilks' theorem is developed.

  7. CRYPTOGRAPHIC SECURE CLOUD STORAGE MODEL WITH ANONYMOUS AUTHENTICATION AND AUTOMATIC FILE RECOVERY

    Directory of Open Access Journals (Sweden)

    Sowmiya Murthy

    2014-10-01

    Full Text Available We propose a secure cloud storage model that addresses security and storage issues for cloud computing environments. Security is achieved by anonymous authentication, which ensures that cloud users remain anonymous while getting duly authenticated. For achieving this goal, we propose a digital signature based authentication scheme with a decentralized architecture for distributed key management with multiple Key Distribution Centers. A homomorphic encryption scheme using the Paillier public key cryptosystem is used for encrypting the data that is stored in the cloud. We incorporate a query driven approach for validating the access policies defined by an individual user for his/her data, i.e. access is granted to a requester only if his credentials match the hidden access policy. Further, since data is vulnerable to losses or damages due to the vagaries of the network, we propose an automatic retrieval mechanism where lost data is recovered by data replication and file replacement with a string matching algorithm. We describe a prototype implementation of our proposed model.
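
    The additive homomorphism of the Paillier cryptosystem mentioned above can be demonstrated with a toy implementation. The sketch below uses deliberately tiny primes, no padding and no key management, so it only illustrates the arithmetic; it is not a secure implementation and not the scheme proposed in the paper.

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def paillier_keygen(p=10007, q=10009):
    """Toy key generation (real deployments use ~2048-bit primes)."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                   # standard simple choice of generator
    mu = pow(lam, -1, n)        # modular inverse (Python 3.8+); valid because g = n + 1
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
# Additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
print(decrypt(priv, (c1 * c2) % (pub[0] ** 2)))   # -> 42
```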

  8. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    Science.gov (United States)

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.

    2014-03-01

    Pectus excavatum is the most common deformity of the thorax, and its pre-operative diagnosis usually involves a Computed Tomography (CT) examination. Aiming at eliminating the high CT radiation exposure, this work presents a new methodology for the replacement of CT by a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT involves the determination of the ribs' external outline, at the maximum sternum depression point for prosthesis placement, based on chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One-Step Secant gradient learning algorithms were used. The training procedure was performed using the soft tissue thicknesses, determined using image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the ribs outline in laser scanner data from 20 patients. Tests revealed that rib positions can be estimated with an average error of about 6.82±5.7 mm for the left and right side of the patient. Such an error range is well below that of current manual prosthesis modeling (11.7±4.01 mm), even without CT imaging, indicating a considerable step forward towards CT replacement by a 3D scanner for prosthesis personalization.

  9. Maximum-likelihood absorption tomography

    International Nuclear Information System (INIS)

    Maximum-likelihood methods are applied to the problem of absorption tomography. The reconstruction is done with the help of an iterative algorithm. We show how the statistics of the illuminating beam can be incorporated into the reconstruction. The proposed reconstruction method can be considered as a useful alternative in the extreme cases where the standard ill-posed direct-inversion methods fail. (authors)
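
    For a Poisson transmission model y_i ~ Poisson(b_i * exp(-(A @ mu)_i)), the log-likelihood and its gradient have simple closed forms, so a basic maximum-likelihood reconstruction can be obtained with a bound-constrained optimiser. The sketch below is a generic illustration of this idea on a synthetic toy system, not the authors' iterative algorithm or their treatment of beam statistics.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_and_grad(mu, A, y, b):
    """Negative Poisson transmission log-likelihood (constants dropped) and gradient."""
    line_integrals = A @ mu
    expected = b * np.exp(-line_integrals)
    nll = np.sum(expected + y * line_integrals)
    grad = A.T @ (y - expected)
    return nll, grad

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (60, 16))              # toy ray/pixel system matrix
mu_true = rng.uniform(0.0, 0.5, 16)              # true attenuation values
b = np.full(60, 1e4)                             # blank-scan (illuminating beam) intensities
y = rng.poisson(b * np.exp(-A @ mu_true)).astype(float)

res = minimize(neg_loglik_and_grad, x0=np.zeros(16), args=(A, y, b),
               jac=True, bounds=[(0.0, None)] * 16, method="L-BFGS-B")
print(np.round(res.x, 2))     # maximum-likelihood reconstruction ...
print(np.round(mu_true, 2))   # ... should be close to the true attenuation values
```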

  10. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.

  11. Automatically inferred Markov network models for classification of chromosomal band pattern structures.

    Science.gov (United States)

    Granum, E; Thomason, M G

    1990-01-01

    A structural pattern recognition approach to the analysis and classification of metaphase chromosome band patterns is presented. An operational method of representing band pattern profiles as sharp edged idealized profiles is outlined. These profiles are nonlinearly scaled to a few, but fixed number of "density" levels. Previous experience has shown that profiles of six levels are appropriate and that the differences between successive bands in these profiles are suitable for classification. String representations, which focus on the sequences of transitions between local band pattern levels, are derived from such "difference profiles." A method of syntactic analysis of the band transition sequences by dynamic programming for optimal (maximal probability) string-to-network alignments is described. It develops automatic data-driven inference of band pattern models (Markov networks) per class, and uses these models for classification. The method does not use centromere information, but assumes the p-q-orientation of the band pattern profiles to be known a priori. It is experimentally established that the method can build Markov network models which, when used for classification, show a recognition rate of about 92% on test data. The experiments used 200 samples (chromosome profiles) for each of the 22 autosome chromosome types and were designed to also investigate various classifier design problems. It is found that the use of a priori knowledge of Denver Group assignment only improved classification by 1 or 2%. A scheme for typewise normalization of the class relationship measures proves useful, partly through improvements on average results and partly through a more evenly distributed error pattern. The choice of reference for the p-q-orientation of the band patterns is found to be unimportant, and timing results for the analysis show that recent and efficient implementations can process one cell in less than 1 min on current standard

  12. Synthesizing regression results: a factored likelihood method.

    Science.gov (United States)

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-06-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported in the regression studies to calculate synthesized standardized slopes. It uses available correlations to estimate missing ones through a series of regressions, allowing us to synthesize correlations among variables as if each included study contained all the same variables. Great accuracy and stability of this method under fixed-effects models were found through Monte Carlo simulation. An example was provided to demonstrate the steps for calculating the synthesized slopes through sweep operators. By rearranging the predictors in the included regression models or omitting a relatively small number of correlations from those models, we can easily apply the factored likelihood method to many situations involving synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd. PMID:26053653
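
    The core computation, standardized slopes recovered from a synthesized correlation matrix, can be sketched directly: solving the normal equations R_xx beta = r_xy gives the same coefficients that sweeping the predictor rows of the correlation matrix would produce. The Python illustration below uses made-up correlation values and bypasses the sweep operator itself.

```python
import numpy as np

def standardized_slopes(R, predictors, outcome):
    """Standardized regression slopes of `outcome` on `predictors` from a
    correlation matrix R, via the normal equations R_xx @ beta = r_xy."""
    R_xx = R[np.ix_(predictors, predictors)]
    r_xy = R[predictors, outcome]
    return np.linalg.solve(R_xx, r_xy)

# Hypothetical synthesized correlations among predictors x1, x2 and outcome y.
R = np.array([[1.00, 0.30, 0.50],
              [0.30, 1.00, 0.40],
              [0.50, 0.40, 1.00]])
print(standardized_slopes(R, predictors=[0, 1], outcome=2))
```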

  13. Modified likelihood ratio test for homogeneity in bivariate normal mixtures with presence of a structural parameter

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper investigates the asymptotic properties of the modified likelihood ratio statistic for testing homogeneity in bivariate normal mixture models with an unknown structural parameter. It is shown that the modified likelihood ratio statistic has a χ²₂ null limiting distribution.

  14. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    Science.gov (United States)

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  15. Multi-objective automatic calibration of hydrodynamic models - development of the concept and an application in the Mekong Delta

    OpenAIRE

    Nguyen, Viet-Dung

    2011-01-01

    Automatic and multi-objective calibration of hydrodynamic models is still underdeveloped, in particular in comparison with other fields such as hydrological modeling. This is for several reasons: the lack of appropriate data, the high computational time demanded, and the lack of a suitable framework. These aspects are aggravated in large-scale applications. There are recent developments, however, that improve both the data and the computing constraints. Remote sensing, especially radar-based techn...

  16. A Computer Model of the Evaporator for the Development of an Automatic Control System

    Science.gov (United States)

    Kozin, K. A.; Efremov, E. V.; Kabrysheva, O. P.; Grachev, M. I.

    2016-08-01

    For the implementation of a closed nuclear fuel cycle it is necessary to carry out a series of experimental studies to justify the choice of technology. In addition, the operation of a radiochemical plant is impossible without high-quality automatic control systems. In spent nuclear fuel reprocessing technologies, the method of continuous evaporation is often used for solution conditioning. Therefore, an effective continuous technological process will depend on the operation of the evaporation equipment. Its essential difference from similar devices is its small size. In this paper the method of mathematical simulation is applied to investigate a single-effect evaporator with an external heating chamber. Detailed modelling is quite difficult because the phase equilibrium dynamics of the evaporation process is not described. Moreover, there is a relationship with the other process units. The results proved that the study subject is a MIMO plant, nonlinear over separate control channels and not self-balancing. Adequacy was tested using the experimental data obtained at the laboratory evaporation unit.

  17. Automatic 3D object recognition and reconstruction based on neuro-fuzzy modelling

    Science.gov (United States)

    Samadzadegan, Farhad; Azizi, Ali; Hahn, Michael; Lucas, Curo

    Three-dimensional object recognition and reconstruction (ORR) is a research area of major interest in computer vision and photogrammetry. Virtual cities, for example, are one of the exciting application fields of ORR which became very popular during the last decade. Natural and man-made objects of cities, such as trees and buildings, are complex structures, and the automatic recognition and reconstruction of these objects from digital aerial images, as well as from other data sources, is a big challenge. In this paper a novel approach for object recognition is presented based on neuro-fuzzy modelling. Structural, textural and spectral information is extracted and integrated in a fuzzy reasoning process. The learning capability of neural networks is introduced into the fuzzy recognition process by taking adaptable parameter sets into account, which leads to the neuro-fuzzy approach. Object reconstruction follows recognition seamlessly by using the recognition output and the descriptors which have been extracted for recognition. A first successful application of this new ORR approach is demonstrated for the three object classes 'buildings', 'cars' and 'trees' using aerial colour images of an urban area of the town of Engen in Germany.

  18. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and they may look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
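
    A concrete example of a sequence produced by a finite automaton is the Thue-Morse sequence: a two-state automaton reads the binary digits of n and flips its state on every 1, and the final state is the n-th term. The short Python sketch below is a standard illustration, not an excerpt from the book.

```python
def thue_morse(n_terms):
    """Thue-Morse sequence: t(n) = parity of the number of 1s in binary n."""
    seq = []
    for n in range(n_terms):
        state = 0
        for bit in bin(n)[2:]:
            if bit == "1":
                state ^= 1       # transition of the two-state automaton
        seq.append(state)
    return seq

print(thue_morse(16))   # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```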

  19. Risk assessment models in genetics clinic for array comparative genomic hybridization: Clinical information can be used to predict the likelihood of an abnormal result in patients.

    Science.gov (United States)

    Marano, Rachel M; Mercurio, Laura; Kanter, Rebecca; Doyle, Richard; Abuelo, Dianne; Morrow, Eric M; Shur, Natasha

    2013-03-01

    Array comparative genomic hybridization (aCGH) testing can diagnose chromosomal microdeletions and duplications too small to be detected by conventional cytogenetic techniques. We need to consider which patients are more likely to receive a diagnosis from aCGH testing versus patients that have a lower likelihood and may benefit from broader genome-wide scanning. We retrospectively reviewed the charts of a population of 200 patients, 117 boys and 83 girls, who underwent aCGH testing in the Genetics Clinic at Rhode Island Hospital between 1 January 2008 and 31 December 2010. Data collected included sex, age at initial clinical presentation, aCGH result, history of seizures, autism, dysmorphic features, global developmental delay/intellectual disability, hypotonia and failure to thrive. aCGH analysis revealed abnormal results in 34 (17%) and variants of unknown significance in 24 (12%). Patients with three or more clinical diagnoses had a 25.0% incidence of abnormal aCGH findings, while patients with two or fewer clinical diagnoses had a 12.5% incidence of abnormal aCGH findings. Currently, we provide families with a 10-30% likelihood of a diagnosis from aCGH testing. With increased clinical complexity, patients have an increased probability of having an abnormal aCGH result. With this, we can provide individualized risk estimates for each patient.

  1. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Science.gov (United States)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.
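
    Detection schemes of this kind typically train one hidden Markov model per class on features of labelled segments and then compare log-likelihoods of new windows of continuous data. The sketch below shows that generic pattern with the hmmlearn package and synthetic feature vectors; it is not the classifier configuration or feature set used in the study.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-frame feature vectors (e.g. log band energies).
event_features = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(400, 2))
noise_features = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(400, 2))

# One HMM per class, trained on labelled training segments.
event_model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
noise_model = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
event_model.fit(event_features)
noise_model.fit(noise_features)

# Classify a new window by comparing log-likelihoods under the two models.
window = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(60, 2))
is_event = event_model.score(window) > noise_model.score(window)
print("rockslide candidate" if is_event else "noise")
```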

  2. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    Science.gov (United States)

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra

    2016-09-01

    In this paper we propose an automatic volcano event detection system based on a Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learnt from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study, Long Period (LP) and Volcano-Tectonic (VT). Experiments show that standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can lead to reductions in the false positive rate in event detection as high as 31%, with a true positive accuracy equal to 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events based on the signal-to-noise ratio criteria recommended by a volcano expert.

  3. Monte Carlo maximum likelihood estimation for discretely observed diffusion processes

    OpenAIRE

    Beskos, Alexandros; Papaspiliopoulos, Omiros; Roberts, Gareth

    2009-01-01

    This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models and its performance in numerical examples is computationally efficient. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte C...

  4. Sieve likelihood ratio inference on general parameter space

    Institute of Scientific and Technical Information of China (English)

    SHI; Jian; SHEN; Xiaotong

    2005-01-01

    In this paper, a theory on sieve likelihood ratio inference on general parameter spaces (including infinite dimensional ones) is studied. Under fairly general regularity conditions, the sieve log-likelihood ratio statistic is proved to be asymptotically χ² distributed, which can be viewed as a generalization of the well-known Wilks' theorem. As an example, a semiparametric partial linear model is investigated.

  5. Smoothed log-concave maximum likelihood estimation with applications

    CERN Document Server

    Chen, Yining

    2011-01-01

    We study the smoothed log-concave maximum likelihood estimator of a probability distribution on $\mathbb{R}^d$. This is a fully automatic nonparametric density estimator, obtained as a canonical smoothing of the log-concave maximum likelihood estimator. We demonstrate its attractive features both through an analysis of its theoretical properties and a simulation study. Moreover, we show how the estimator can be used as an intermediate stage of more involved procedures, such as constructing a classifier or estimating a functional of the density. Here again, the use of the estimator can be justified both on theoretical grounds and through its finite sample performance, and we illustrate its use in a breast cancer diagnosis (classification) problem.

  6. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    Science.gov (United States)

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into an increasingly important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery, solid modelling of screws and drill holes represents a limitation of their use for individual cases and an increase of computational costs. To cope with these requirements, different methods for numerical screw modelling have therefore been investigated to improve its application diversity. Exemplarily, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. without screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility to implement automatically generated screws with variable geometry on arbitrary FE models. Structural screws were parametrically generated by a Python script for automatic generation in the FE software Abaqus/CAE on both a tetrahedral and a hexahedral meshed femur. Accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling, the computational time could be reduced by 85% by using hexahedral elements instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with screws in the adjacent

  8. Model of automatic fuel management for the Atucha II nuclear central with the PUMA IV code

    International Nuclear Information System (INIS)

    The Atucha II plant is a heavy-water, natural-uranium power station. For this reason, and because of the low excess reactivity of this type of reactor, continuous fuel management must be carried out with the plant at power (in the case of Atucha II, approximately every 0.7 days). To keep such plants in operation and to achieve good fuel economy, different fuel-management schemes are tested, defined by the zones and paths along which the fuel elements are moved inside the core; most of these schemes must be simulated over long periods in order to verify the behaviour of the plant and the discharge burnup of the fuel elements. For this work it is very helpful to have a program that applies the criteria to be followed at each refuelling, using the paths and zones of the management scheme under test, and thereby provides as results the fulfilment of the regulations over time and the average discharge burnup of the fuel elements, the latter being essential information for the company operating the plant. Without such a tool, a physicist with experience in fuel management must test each of the possible schemes, even those that can quickly be discarded because they do not comply with the regulatory standards or because their average discharge burnup is too low. It is therefore of fundamental help that an automatic model tests the different schemes, so that the physicist finally analyses only the most important cases. The model in question not only allows different types of paths and zones for fuel management to be programmed, but also provides the possibility of disabling some of the criteria. (Author)
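
    A minimal Python sketch of the screening idea described above, assuming a placeholder evaluate_scheme function in place of a full core-follow simulation such as the one performed with PUMA IV: candidate refuelling schemes (zones and paths) are evaluated, those violating the regulatory criterion or falling below a minimum average discharge burnup are discarded, and the surviving schemes are left for the physicist to analyse in detail. All names and numbers are illustrative.

      # Illustrative screening loop for candidate fuel-management schemes.
      def evaluate_scheme(zones, path):
          """Placeholder for a long core-follow simulation of one refuelling scheme."""
          meets_regulations = len(zones) >= 2            # dummy regulatory criterion
          avg_discharge_burnup = 6.0 + 0.5 * len(path)   # dummy burnup figure (MWd/kgU)
          return meets_regulations, avg_discharge_burnup

      def screen_schemes(candidates, min_burnup):
          """Keep only schemes that satisfy the rules and exceed a burnup threshold."""
          accepted = []
          for name, (zones, path) in candidates.items():
              ok, burnup = evaluate_scheme(zones, path)
              if ok and burnup >= min_burnup:
                  accepted.append((name, burnup))
          # The physicist then analyses the surviving schemes in detail.
          return sorted(accepted, key=lambda item: item[1], reverse=True)

      candidates = {
          "scheme_A": (["inner", "outer"], ["ch_1", "ch_5", "ch_9"]),
          "scheme_B": (["outer"], ["ch_2", "ch_6"]),
      }
      print(screen_schemes(candidates, min_burnup=7.0))   # [('scheme_A', 7.5)]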

  9. Assessing Compatibility of Direct Detection Data: Halo-Independent Global Likelihood Analyses

    OpenAIRE

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-01-01

    We present two different halo-independent methods utilizing a global maximum likelihood that can assess the compatibility of dark matter direct detection data given a particular dark matter model. The global likelihood we use comprises at least one extended likelihood and an arbitrary number of Poisson or Gaussian likelihoods. In the first method we find the global best-fit halo function and construct a two-sided pointwise confidence band, which can then be compared with those derived f...
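
    As a simple illustration of combining heterogeneous terms into one global log-likelihood, the sketch below sums a Poisson counting term and a Gaussian constraint term and profiles them over a grid of signal strengths. It is a stand-in for, not a reproduction of, the extended-likelihood construction and halo-function optimization described in the paper; the data values and parameterization are invented.

      import numpy as np
      from scipy.stats import norm, poisson

      def global_log_likelihood(mu_signal, observed_counts, expected_background,
                                measured_rate, rate_uncertainty):
          """Poisson counting term plus a Gaussian constraint term (illustrative)."""
          log_l = poisson.logpmf(observed_counts, expected_background + mu_signal)
          log_l += norm.logpdf(measured_rate, loc=mu_signal, scale=rate_uncertainty)
          return log_l

      grid = np.linspace(0.0, 10.0, 101)
      log_ls = [global_log_likelihood(mu, observed_counts=5, expected_background=2.0,
                                      measured_rate=3.5, rate_uncertainty=1.2)
                for mu in grid]
      print(grid[int(np.argmax(log_ls))])   # best-fit signal strength on the grid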

  10. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Melchior, M [Terapia Radiante S.A., La Plata, Buenos Aires (Argentina); Salinas Aranda, F [Vidt Centro Medico, Ciudad Autonoma De Buenos Aires (Argentina); 21st Century Oncology, Ft. Myers, FL (United States); Sciutto, S [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Dodat, D [Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina); Larragueta, N [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina)

    2014-06-01

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MatLab integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. The experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fitting of dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for the gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS-calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. The fitting analysis of depth and profile dose distributions shows gamma values < 1. Relative errors at points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at points proposed in Appendix E of TECDOC-1583 confirm software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use
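
    The validation relies on a gamma analysis with 3% / 3 mm criteria. Below is a deliberately simplified 1D global gamma-index sketch in Python (the dose difference is normalized to 3% of the reference maximum), intended only to illustrate the pass/fail test; it is not the MatLab tool described in the abstract, and the toy depth-dose curves are invented.

      import numpy as np

      def gamma_index_1d(positions, dose_ref, dose_eval, dose_tol=0.03, dist_tol=3.0):
          """Simplified 1D global gamma index (dose_tol relative to the reference maximum)."""
          dose_norm = dose_tol * dose_ref.max()
          gammas = []
          for x_r, d_r in zip(positions, dose_ref):
              dose_term = (dose_eval - d_r) / dose_norm
              dist_term = (positions - x_r) / dist_tol
              gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
          return np.asarray(gammas)

      depth = np.linspace(0.0, 100.0, 201)                  # depth in mm
      reference = np.exp(-depth / 120.0)                    # toy depth-dose curve
      evaluated = 1.01 * np.exp(-(depth - 0.5) / 120.0)     # slightly shifted and scaled
      gamma = gamma_index_1d(depth, reference, evaluated)
      print((gamma <= 1.0).mean())                          # pass rate under 3% / 3 mm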

  11. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    International Nuclear Information System (INIS)

    Purpose: To automatically validate megavoltage beams modeled in XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MatLab integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. The experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fitting of dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. Tolerance parameters chosen for the gamma analysis are 3% and 3 mm for dose and distance, respectively. Absolute dose was measured independently at points proposed in Appendix E of TECDOC-1583 to validate software results. Results: TPS-calculated depth dose distributions agree with measured beam data under fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. The fitting analysis of depth and profile dose distributions shows gamma values < 1. Relative errors at points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at points proposed in Appendix E of TECDOC-1583 confirm software results. Conclusion: Automatic validation of megavoltage beams modeled for their use in the clinic was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use

  12. Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement

    Science.gov (United States)

    Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos

    2015-01-01

    How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…

  13. 76 FR 8917 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed

    Science.gov (United States)

    2011-02-16

    ...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... design features include a high speed protection system. These proposed special conditions contain the... Design Features The GVI is equipped with a high speed protection system that limits nose down...

  14. 76 FR 31454 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed

    Science.gov (United States)

    2011-06-01

    ... for Gulfstream GVI airplanes was published in the Federal Register on February 16, 2011 (76 FR 8917...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... high speed protection system. These special conditions contain the additional safety standards that...

  15. Modelling the adoption of automatic milking systems in Noord-Holland

    NARCIS (Netherlands)

    Floridi, M.; Bartolini, F.; Peerlings, J.H.M.; Polman, N.B.P.; Viaggi, D.

    2013-01-01

    Innovation and new technology adoption represent two central elements for the business and industry development process in agriculture. One of the most relevant innovations in dairy farms is the robotisation of the milking process through the adoption of Automatic Milking Systems (AMS). The purpose

  16. Performance Modelling of Automatic Identification System with Extended Field of View

    DEFF Research Database (Denmark)

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard;

    2010-01-01

    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV...

  17. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  18. Fusing moving average model and stationary wavelet decomposition for automatic incident detection: case study of Tokyo Expressway

    Directory of Open Access Journals (Sweden)

    Qinghua Liu

    2014-12-01

    Full Text Available Traffic congestion is a growing problem in urban areas all over the world. The transport sector has been actively studying intelligent transportation systems for automatic detection. The functionality of automatic incident detection on expressways is a primary objective of advanced traffic management systems. In order to save lives and prevent secondary incidents, accurate and prompt incident detection is necessary. This paper presents a methodology that integrates a moving average (MA) model with stationary wavelet decomposition for automatic incident detection, in which the layer coefficient parameters are extracted from the difference between the upstream and downstream occupancy. Unlike other wavelet-based methods presented before, it first smooths the raw data with the MA model. It then decomposes the signal with the stationary wavelet transform, which achieves accurate reconstruction and does not shift the transform coefficients. Thus, it can detect incidents more accurately. The threshold that triggers the incident alarm is also adjusted according to normal traffic conditions with congestion. The methodology is validated with real data from Tokyo Expressway ultrasonic sensors. Experimental results show that it is accurate and effective, and that it can differentiate traffic accidents from other conditions such as recurring traffic congestion.
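
    A minimal Python sketch of the two-stage idea, assuming the PyWavelets package is available: the upstream/downstream occupancy difference is smoothed with a moving average and decomposed with the stationary (undecimated) wavelet transform, and an alarm is raised where the detail coefficients exceed a robust threshold. The wavelet, window length and threshold factor are illustrative choices, not the paper's settings.

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def detect_incident(occ_upstream, occ_downstream, window=8, level=2, k=3.0):
          """Alarm series from MA smoothing followed by stationary wavelet decomposition."""
          diff = np.asarray(occ_upstream) - np.asarray(occ_downstream)
          kernel = np.ones(window) / window
          smoothed = np.convolve(diff, kernel, mode="same")   # moving-average smoothing
          coeffs = pywt.swt(smoothed, "db4", level=level)     # undecimated transform
          detail = coeffs[-1][1]                              # detail band at one scale
          threshold = k * np.median(np.abs(detail)) / 0.6745  # robust sigma estimate
          return np.abs(detail) > threshold

      rng = np.random.default_rng(0)
      n = 256                                                 # length divisible by 2**level
      upstream = 20 + rng.normal(0, 1, n)
      downstream = 20 + rng.normal(0, 1, n)
      upstream[180:200] += 15.0                               # simulated incident
      print(detect_incident(upstream, downstream).nonzero()[0])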

  19. Convergence rate for maximum likelihood estimation of parameters in the exponential polynomial model

    Institute of Scientific and Technical Information of China (English)

    房祥忠; 陈家鼎

    2011-01-01

    Nonhomogeneous Poisson processes with time-varying intensity functions are widely applied in many fields. For a very broad class of nonhomogeneous Poisson processes, the exponential polynomial model, the optimal convergence rate of the maximum likelihood estimate (MLE) of the parameters is obtained as the observation time tends to infinity.
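
    To make the setting concrete, the sketch below fits a degree-one exponential polynomial intensity, lambda(t) = exp(a0 + a1*t), to event times simulated by thinning, by numerically maximizing the nonhomogeneous Poisson process log-likelihood sum_i log lambda(t_i) - integral_0^T lambda(t) dt. It only illustrates the model and its likelihood, not the convergence-rate result of the paper; all numbers are invented.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      def neg_log_likelihood(theta, event_times, T):
          """NHPP negative log-likelihood for intensity exp(a0 + a1*t + ...)."""
          def intensity(t):
              return np.exp(np.polyval(theta[::-1], t))
          log_term = np.sum(np.log(intensity(event_times)))
          integral, _ = quad(intensity, 0.0, T)
          return -(log_term - integral)

      # Simulate events from lambda(t) = exp(0.5 + 0.02 t) on [0, T] by thinning.
      rng = np.random.default_rng(1)
      true_theta, T = np.array([0.5, 0.02]), 100.0
      lam_max = np.exp(true_theta[0] + true_theta[1] * T)
      candidates = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
      accept = rng.uniform(size=candidates.size) < np.exp(true_theta[0] + true_theta[1] * candidates) / lam_max
      events = candidates[accept]

      fit = minimize(neg_log_likelihood, x0=np.zeros(2), args=(events, T), method="Nelder-Mead")
      print(fit.x)   # should be close to [0.5, 0.02] for long observation windows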

  20. Usefulness and limitations of dK random graph models to predict interactions and functional homogeneity in biological networks under a pseudo-likelihood parameter estimation approach

    OpenAIRE

    Luan Yihui; Nunez-Iglesias Juan; Wang Wenhui; Sun Fengzhu

    2009-01-01

    Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results ...

  1. Automatic bladder segmentation on CBCT for multiple plan ART of bladder cancer using a patient-specific bladder model

    Science.gov (United States)

    Chai, Xiangfei; van Herk, Marcel; Betgen, Anja; Hulshof, Maarten; Bel, Arjan

    2012-06-01

    In multiple plan adaptive radiotherapy (ART) strategies of bladder cancer, a library of plans corresponding to different bladder volumes is created based on images acquired in early treatment sessions. Subsequently, the plan for the smallest PTV safely covering the bladder on cone-beam CT (CBCT) is selected as the plan of the day. The aim of this study is to develop an automatic bladder segmentation approach suitable for CBCT scans and test its ability to select the appropriate plan from the library of plans for such an ART procedure. Twenty-three bladder cancer patients with a planning CT and on average 11.6 CBCT scans were included in our study. For each patient, all CBCT scans were matched to the planning CT on bony anatomy. Bladder contours were manually delineated for each planning CT (for model building) and CBCT (for model building and validation). The automatic segmentation method consisted of two steps. A patient-specific bladder deformation model was built from the training data set of each patient (the planning CT and the first five CBCT scans). Then, the model was applied to automatically segment bladders in the validation data of the same patient (the remaining CBCT scans). Principal component analysis (PCA) was applied to the training data to model patient-specific bladder deformation patterns. The number of PCA modes for each patient was chosen such that the bladder shapes in the training set could be represented by such number of PCA modes with less than 0.1 cm mean residual error. The automatic segmentation started from the bladder shape of a reference CBCT, which was adjusted by changing the weight of each PCA mode. As a result, the segmentation contour was deformed consistently with the training set to fit the bladder in the validation image. A cost function was defined by the absolute difference between the directional gradient field of reference CBCT sampled on the corresponding bladder contour and the directional gradient field of validation
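
    A minimal numpy sketch of the patient-specific PCA shape model described above, under simplifying assumptions: each training bladder contour is flattened to one vector, the number of retained modes is chosen from a mean residual tolerance, and a new shape is produced by weighting the modes. The point-correspondence step, the gradient-based cost function and the CBCT specifics of the paper are omitted, and the training data are random placeholders.

      import numpy as np

      def build_shape_model(training_shapes, tol=0.1):
          """PCA shape model from training contours (each flattened to one row, in cm)."""
          X = np.asarray(training_shapes, dtype=float)
          mean_shape = X.mean(axis=0)
          U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
          # Keep the smallest number of modes whose mean reconstruction error is below tol.
          for n_modes in range(1, len(s) + 1):
              recon = mean_shape + (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
              if np.abs(recon - X).mean() < tol:
                  break
          return mean_shape, Vt[:n_modes]

      def deform(mean_shape, modes, weights):
          """Deform the mean shape by a weighted sum of PCA modes."""
          return mean_shape + np.asarray(weights) @ modes

      rng = np.random.default_rng(2)
      training = rng.normal(size=(6, 3 * 50))    # 6 training contours, 50 surface points each
      mean_shape, modes = build_shape_model(training)
      new_shape = deform(mean_shape, modes, weights=rng.normal(size=modes.shape[0]))
      print(modes.shape, new_shape.shape)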

  2. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    Science.gov (United States)

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.

    2013-12-01

    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47000 online individual macroseismic questionnaires since 2000, for intensities up to VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important, as these estimates are now used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected on a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs in each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time-consuming and no longer practicable considering the increasing number of testimonies at BCSF. Nevertheless, it can take into account incoherent answers. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave medium scores (between 50 and 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because the BCSF database already contains more than 47000 forms and because the questions and answers are well suited to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated thanks to
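
    As a rough illustration of the ranking-model idea, the sketch below compares a multinomial logistic regression and a support vector machine with cross-validation on placeholder questionnaire encodings (scikit-learn assumed available). The feature encoding, the target construction and the resulting scores are invented and do not reflect the BCSF database or the DISQUAL method.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Placeholder data: each row encodes one questionnaire's answers; the target is
      # the expert SQI on the EMS-98 scale (restricted here to intensities II-IV).
      rng = np.random.default_rng(3)
      X = rng.integers(0, 5, size=(500, 12)).astype(float)
      y = np.digitize(X.mean(axis=1) + rng.normal(0.0, 0.5, 500), bins=[1.7, 2.3]) + 2

      models = {
          "multinomial_logit": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
          "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5)
          print(name, scores.mean().round(3))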

  3. Improved maximum likelihood reconstruction of complex multi-generational pedigrees.

    Science.gov (United States)

    Sheehan, Nuala A; Bartlett, Mark; Cussens, James

    2014-11-01

    The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as

  4. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We...... show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations....

  5. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    Science.gov (United States)

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI.
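
    The sketch below illustrates the iterative refinement idea on synthetic 1D intensities, using scikit-learn's GaussianMixture as a stand-in for the paper's Student's t mixture and ignoring the probabilistic atlases: at each pass only the voxels assigned to the most hyperintense component are kept. Component counts, stopping rule and intensity values are invented.

      import numpy as np
      from sklearn.mixture import GaussianMixture  # stand-in for a Student's t mixture

      def iterative_hyperintense_segmentation(intensities, n_components=3, n_iters=3):
          """Iteratively keep the voxels assigned to the brightest mixture component."""
          values = np.asarray(intensities, dtype=float).reshape(-1, 1)
          mask = np.ones(len(values), dtype=bool)
          for _ in range(n_iters):
              gmm = GaussianMixture(n_components=n_components, random_state=0)
              labels = gmm.fit_predict(values[mask])
              brightest = np.argmax(gmm.means_.ravel())
              keep = np.zeros(len(values), dtype=bool)
              keep[np.flatnonzero(mask)[labels == brightest]] = True
              mask = keep
              if mask.sum() < n_components * 10:   # stop when few candidates remain
                  break
          return mask

      rng = np.random.default_rng(4)
      flair = np.concatenate([rng.normal(100, 10, 5000),   # normal-appearing tissue
                              rng.normal(140, 10, 3000),   # brighter tissue class
                              rng.normal(200, 8, 150)])    # hyperintense lesion voxels
      print(iterative_hyperintense_segmentation(flair).sum())   # roughly the lesion count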

  6. Face Prediction Model for an Automatic Age-invariant Face Recognition System

    OpenAIRE

    Yadav, Poonam

    2015-01-01

    Automated face recognition and identification softwares are becoming part of our daily life; it finds its abode not only with Facebook's auto photo tagging, Apple's iPhoto, Google's Picasa, Microsoft's Kinect, but also in Homeland Security Department's dedicated biometric face detection systems. Most of these automatic face identification systems fail where the effects of aging come into the picture. Little work exists in the literature on the subject of face prediction that accounts for agin...

  7. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    Science.gov (United States)

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul

    2016-06-01

    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma but detection at its earliest stage and subsequent treatment can aid patients to prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection but this method requires manual post-imaging modifications that are time-consuming and subjective to image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer aided approach for automatic glaucoma detection based on Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric based properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions including automatic localisation of optic disc, automatic segmentation of disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4 % and accuracy of detection of suspicion of glaucoma in SLO images is 93.9 %. PMID:27086033

  8. Towards fully automatic modelling of the fracture process in quasi-brittle and ductile materials: a unified crack growth criterion

    Institute of Scientific and Technical Information of China (English)

    Zhen-jun YANG; Guo-hua LIU

    2008-01-01

    Fully automatic finite element (FE) modelling of the fracture process in quasi-brittle materials such as concrete and rocks and ductile materials such as metals and alloys, is of great significance in assessing structural integrity and presents tremendous challenges to the engineering community. One challenge lies in the adoption of an objective and effective crack propagation criterion. This paper proposes a crack propagation criterion based on the principle of energy conservation and the cohesive zone model (CZM). The virtual crack extension technique is used to calculate the differential terms in the criterion. A fully-automatic discrete crack modelling methodology, integrating the developed criterion, the CZM to model the crack, a simple remeshing procedure to accommodate crack propagation, the J2 flow theory implemented within the incremental plasticity framework to model the ductile materials, and a local arc-length solver to the nonlinear equation system, is developed and implemented in an in-house program. Three examples, i.e., a plain concrete beam with a single shear crack, a reinforced concrete (RC) beam with multiple cracks and a compact-tension steel specimen, are simulated. Good agreement between numerical predictions and experimental data is found, which demonstrates the applicability of the criterion to both quasi-brittle and ductile materials.

  9. Comparison of the frequentist properties of Bayes and the maximum likelihood estimators in an age-structured fish stock assessment model

    DEFF Research Database (Denmark)

    Nielsen, Anders; Lewy, Peter

    2002-01-01

    A simulation study was carried out for a separable fish stock assessment model including commercial and survey catch-at-age and effort data. All catches are considered stochastic variables subject to sampling and process variations. The results showed that the Bayes estimator of spawning biomass ...

  10. Maximum likelihood estimation of fractionally cointegrated systems

    DEFF Research Database (Denmark)

    Lasak, Katarzyna

    In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the...... equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...... any influence on the long-run relationship. The rate of convergence of the estimators of the long-run relationships depends on the cointegration degree but it is optimal for the strong cointegration case considered. We also prove that misspecification of the degree of fractional cointegration does...

  11. Profile likelihood maps of a 15-dimensional MSSM

    NARCIS (Netherlands)

    C. Strege; G. Bertone; G.J. Besjes; S. Caron; R. Ruiz de Austri; A. Strubig; R. Trotta

    2014-01-01

    We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter

  12. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  13. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated ultra...

  14. Sediment characterization in intertidal zone of the Bourgneuf bay using the Automatic Modified Gaussian Model (AMGM)

    Science.gov (United States)

    Verpoorter, C.; Carrère, V.; Combe, J.-P.; Le Corre, L.

    2009-04-01

    Understanding of the uppermost layer of cohesive sediment beds provides important clues for predicting future sediment behaviours. Sediment consolidation, grain size, water content and biological slimes (EPS: extracellular polymeric substances) were found to be significant factors influencing erosion resistance. The surface spectral signatures of mudflat sediments reflect such bio-geophysical parameters. The overall shape of the spectrum, also called a continuum, is a function of grain size and moisture content. Composition translates into specific absorption features. Finally, the chlorophyll-a concentration, derived from the strength of the absorption at 675 nm, is a good proxy for biofilm biomass. The Bourgneuf Bay site, south of the Loire river estuary, France, was chosen to represent a range of physical and biological influences on sediment erodability. Field spectral measurements and samples of sediments were collected during various field campaigns. An ASD Fieldspec 3 spectroradiometer was used to produce sediment reflectance hyperspectra in the wavelength range 350-2500 nm. We have developed an automatic procedure based on the Modified Gaussian Model that uses, as the first step, Spectroscopic Derivative Analysis (SDA) to extract the bio-geophysical properties of mudflat sediments from the spectra (Verpoorter et al., 2007). This AMGM algorithm is a powerful tool for deconvolving spectra into two components: Gaussian curves for the absorption bands, and a straight line in the wavenumber range for the continuum. We are investigating the possibility of including other approaches, such as the inverse Gaussian band centred on 2800 nm initially developed by Whiting et al. (2006) to estimate water content. Additionally, soil samples were analysed to determine moisture content, grain size (laser grain size analyses), organic matter content, carbonate content (calcimetry) and clay content. X-ray diffraction analysis was performed on selected non
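
    A minimal scipy sketch of the model form behind the MGM/AMGM family: a reflectance spectrum modelled as a continuum plus Gaussian absorption bands, reduced here to a single Gaussian and a straight-line continuum in wavelength (the AMGM fits the continuum as a line in wavenumber and seeds several Gaussians from derivative analysis). The synthetic spectrum and initial guesses are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian_plus_continuum(wl, depth, center, width, c0, c1):
          """One Gaussian absorption band superimposed on a straight-line continuum."""
          return c0 + c1 * wl - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

      # Synthetic reflectance spectrum with a chlorophyll-a-like absorption near 675 nm.
      wl = np.linspace(400.0, 900.0, 251)
      rng = np.random.default_rng(5)
      spectrum = gaussian_plus_continuum(wl, 0.12, 675.0, 15.0, 0.25, 2e-4)
      spectrum = spectrum + rng.normal(0.0, 0.003, wl.size)

      p0 = [0.05, 670.0, 20.0, 0.2, 1e-4]   # rough initial guesses
      params, _ = curve_fit(gaussian_plus_continuum, wl, spectrum, p0=p0)
      print(dict(zip(["depth", "center", "width", "c0", "c1"], np.round(params, 4))))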

  15. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent.

    Science.gov (United States)

    Lohse, Konrad; Chmelik, Martin; Martin, Simon H; Barton, Nicholas H

    2016-02-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proved difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple, recursive form, the size of such likelihood calculations explodes quickly with the number of individuals and applications of this framework have so far been mainly limited to small samples (pairs and triplets) for which the GF can be written by hand. Here we investigate several strategies for exploiting the inherent symmetries of the coalescent. In particular, we show that the GF of genealogies can be decomposed into a set of equivalence classes that allows likelihood calculations from nontrivial samples. Using this strategy, we automated blockwise likelihood calculations for a general set of demographic scenarios in Mathematica. These histories may involve population size changes, continuous migration, discrete divergence, and admixture between multiple populations. To give a concrete example, we calculate the likelihood for a model of isolation with migration (IM), assuming two diploid samples without phase and outgroup information. We demonstrate the new inference scheme with an analysis of two individual butterfly genomes from the sister species Heliconius melpomene rosina and H. cydno. PMID:26715666

  16. Automatic video summarization driven by a spatio-temporal attention model

    Science.gov (United States)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.

  17. Conceptual Model for Automatic Early Warning Information System of Infectious Diseases Based on Internet Reporting Surveillance System

    Institute of Scientific and Technical Information of China (English)

    JIA-QI MA; LI-PING WANG; XUAO-PENG QI; XIAO-MING SHI; GONG-HUAN YANG

    2007-01-01

    Objective To establish a conceptual model of automatic early warning of infectious diseases based on the internet reporting surveillance system, with a view to realizing an automated warning system on a daily basis and timely identifying potential outbreaks of infectious diseases. Methods The statistical conceptual model was established using historical surveillance data with the movable percentile method. Results Based on the infectious disease surveillance information platform, the conceptual model for early warning was established. The parameter, threshold, and revised sensitivity and specificity of the early warning value were adjusted to realize dynamic alert of infectious diseases on a daily basis. Conclusion The conceptual model of dynamic alert can be used as a validating tool by institutions of infectious disease surveillance in different districts.
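
    A minimal numpy sketch of a moving-percentile warning rule of the kind referred to above: the current daily count is compared with a chosen percentile of counts from comparable historical windows. The percentile level, the window construction and the counts are placeholders, not the parameters of the surveillance platform.

      import numpy as np

      def percentile_alert(historical_counts, current_count, percentile=95.0):
          """Raise an alert when the current daily count exceeds a historical percentile."""
          threshold = np.percentile(historical_counts, percentile)
          return current_count > threshold, threshold

      rng = np.random.default_rng(6)
      history = rng.poisson(4.0, size=90)   # counts from comparable past windows
      alert, threshold = percentile_alert(history, current_count=12)
      print(alert, threshold)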

  18. Automatic application of ICRP biokinetic models in voxel phantoms for in vivo counting and internal dose assessment

    International Nuclear Information System (INIS)

    As part of the improvement of calibration techniques of in vivo counting, the Laboratory of Internal Dose Assessment of the Institute of Radiological Protection and Nuclear Safety has developed a computer tool, 'OEDIPE', to model internal contamination, to simulate in vivo counting and to calculate internal dose. The first version of this software could model sources located in a single organ. As the distribution of the contamination evolves from the time of intake according to the biokinetics of the radionuclide, a new facility has been added to the software first to allow complex heterogeneous source modelling and then to automatically integrate the distribution of the contamination in the different tissues estimated by biokinetic calculation at any time since the intake. These new developments give the opportunity to study the influence of the biokinetics on the in vivo counting, leading to a better assessment of the calibration factors and the corresponding uncertainties. (authors)

  19. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators.

    Science.gov (United States)

    Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard

    2007-12-01

    This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heart beat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T wave is validated on the QTDB database. PMID:17997186

  20. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    Science.gov (United States)

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix

    2016-04-01

    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parametrical requirements, while available data are scarce. Therefore, there is a need to search for new sources of information such as satellite data. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied. However, the use of this kind of data requires introducing the concept of spatio-temporal information. In this context, we have to deal with the lack of statistics and methodologies for incorporating spatio-temporal data during the calibration and validation processes. This research aims to contribute in that direction. The ecohydrological model used here was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques was proposed in order to calibrate the model taking into account the temporal variation and, also, the spatial pattern of the observed NDVI and the simulated LAI. The obtained results have demonstrated that: (1) satellite data are an extraordinarily useful source of information and can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated using only satellite data, is able to reproduce the vegetation dynamics (in time and in space) and also the observed discharge at the outlet point; and (3) the proposed automatic calibration methodology works satisfactorily and incorporates spatio-temporal data, in other words, it takes into account the temporal variation and the spatial pattern of the analyzed data.
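
    To illustrate how Singular Value Decomposition can be used to compare spatio-temporal fields, the sketch below builds a distance from the leading spatial and temporal singular vectors of an observed NDVI matrix and a simulated LAI matrix (arranged as space by time). It shows only the comparison idea, not the paper's calibration objective or its coupling to an optimizer; the fields are synthetic.

      import numpy as np

      def svd_pattern_distance(observed, simulated, n_modes=2):
          """Compare the leading spatio-temporal patterns of two space-by-time fields."""
          U_o, s_o, Vt_o = np.linalg.svd(observed, full_matrices=False)
          U_s, s_s, Vt_s = np.linalg.svd(simulated, full_matrices=False)
          dist = 0.0
          for k in range(n_modes):
              # Singular vectors are defined up to sign, so compare absolute correlations.
              spatial = 1.0 - abs(np.dot(U_o[:, k], U_s[:, k]))
              temporal = 1.0 - abs(np.dot(Vt_o[k], Vt_s[k]))
              dist += s_o[k] / s_o.sum() * (spatial + temporal)
          return dist

      rng = np.random.default_rng(9)
      n_cells, n_months = 200, 48
      season = np.sin(2.0 * np.pi * np.arange(n_months) / 12.0)
      pattern = rng.uniform(0.2, 0.8, n_cells)[:, None]
      ndvi_obs = pattern * (0.5 + 0.3 * season) + rng.normal(0, 0.02, (n_cells, n_months))
      lai_sim = pattern * (0.6 + 0.25 * season) + rng.normal(0, 0.02, (n_cells, n_months))
      print(round(svd_pattern_distance(ndvi_obs, lai_sim), 4))   # small value: similar patterns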

  1. IMPROVING VOICE ACTIVITY DETECTION VIA WEIGHTING LIKELIHOOD AND DIMENSION REDUCTION

    Institute of Scientific and Technical Information of China (English)

    Wang Huanliang; Han Jiqing; Li Haifeng; Zheng Tieran

    2008-01-01

    The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in lower Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighting likelihood method is proposed for noise-robust VAD. With this method, the contribution of dynamic features to the likelihood score can be increased, which consequently improves the noise robustness of VAD. A divergence-based dimension reduction method is proposed to save computation; it removes the feature dimensions with smaller divergence values at the cost of a slight degradation in performance. Experimental results on the Aurora Ⅱ database show that the detection performance in noise environments can be remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using the weighted likelihood on the dimension-reduced features obtains comparable, or even better, performance compared to the original full-dimensional features.
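
    A minimal numpy/scipy sketch of the feature-weighting idea: per-dimension Gaussian log-likelihoods under speech and non-speech models are combined with weights that emphasise the dynamic (delta) dimensions, and the weighted log-likelihood ratio is thresholded. The models, weights and test frame are invented placeholders, not the paper's trained models.

      import numpy as np
      from scipy.stats import norm

      def weighted_log_likelihood_ratio(frame, speech_model, noise_model, weights):
          """Per-dimension Gaussian log-likelihoods combined with feature weights."""
          ll_speech = norm.logpdf(frame, *speech_model)   # (mean, std) per dimension
          ll_noise = norm.logpdf(frame, *noise_model)
          return np.sum(weights * (ll_speech - ll_noise))

      dim = 12                                            # e.g. 6 static + 6 delta features
      speech_model = (np.full(dim, 1.0), np.full(dim, 1.0))
      noise_model = (np.zeros(dim), np.full(dim, 1.0))
      weights = np.concatenate([np.full(6, 0.8), np.full(6, 1.2)])   # emphasise the deltas

      rng = np.random.default_rng(7)
      frame = rng.normal(0.8, 1.0, dim)                   # a speech-like frame
      print(weighted_log_likelihood_ratio(frame, speech_model, noise_model, weights) > 0.0)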

  2. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  3. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Directory of Open Access Journals (Sweden)

    Po-Chia Yeh

    2012-08-01

    Full Text Available The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  4. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    Energy Technology Data Exchange (ETDEWEB)

    Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands); Chai, X. [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Palo Alto, California 94305 (United States)

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contour, that can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation

  5. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  6. A Rayleigh Doppler frequency estimator derived from maximum likelihood theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul

    1999-01-01

    capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield good...

  7. A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory

    DEFF Research Database (Denmark)

    Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul

    1999-01-01

    capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes' model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield...
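
    A rough FFT-based sketch related to the setting above: under Jakes' model the fading power concentrates near plus/minus the maximum Doppler frequency, so the location of the strongest spectral peak gives a simple estimate. This shows only the FFT-plus-post-processing idea, not the maximum likelihood estimator derived in the paper; the test signal is an idealised two-tone stand-in for Rayleigh fading.

      import numpy as np

      def doppler_from_spectrum(channel_samples, sample_rate):
          """Return the magnitude of the frequency carrying the strongest fading power."""
          windowed = channel_samples * np.hanning(len(channel_samples))
          power = np.abs(np.fft.fft(windowed)) ** 2
          freqs = np.fft.fftfreq(len(channel_samples), d=1.0 / sample_rate)
          return abs(freqs[np.argmax(power)])

      # Idealised test signal: Jakes-type fading concentrates power at +/- f_D, so two
      # complex tones at +/- 80 Hz plus weak noise stand in for the channel gain samples.
      rng = np.random.default_rng(8)
      fs, fd, n = 1000.0, 80.0, 4096
      t = np.arange(n) / fs
      h = (np.exp(2j * np.pi * fd * t) + np.exp(-2j * np.pi * fd * t + 1j)
           + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)))
      print(doppler_from_spectrum(h, fs))   # approximately 80 Hz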

  8. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  9. GPU Accelerated Likelihoods for Stereo-Based Articulated Tracking

    DEFF Research Database (Denmark)

    Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny

    For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...

  10. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Science.gov (United States)

    Rauterberg, M

    1993-11-01

    To support the human factors engineer in designing a good user interface, a method has been developed to analyse the empirical data of the interactive user behaviour traced in a finite discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem solution strategies for a given task and the hierarchical structure of the task-subtasks relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly larger than the complexity of task-solving processes carried out by experts.

  11. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    Science.gov (United States)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of revisits depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers wider than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference

  12. RELM (the Working Group for the Development of Regional Earthquake Likelihood Models) and the Development of new, Open-Source, Java-Based (Object Oriented) Code for Probabilistic Seismic Hazard Analysis

    Science.gov (United States)

    Field, E. H.

    2001-12-01

    Given problems with virtually all previous earthquake-forecast models for southern California, and a current lack of consensus on how such models should be constructed, a joint SCEC-USGS sponsored working group for the development of Regional Earthquake Likelihood Models (RELM) has been established (www.relm.org). The goals are as follows: 1) To develop and test a range of viable earthquake-potential models for southern California (not just one "consensus" model); 2) To examine and compare the implications of each model with respect to probabilistic seismic-hazard estimates (which will not only quantify existing hazard uncertainties, but will also indicate how future research should be focused in order to reduce the uncertainties); and 3) To design and document conclusive tests of each model with respect to existing and future geophysical observations. The variety of models under development reflects the variety of geophysical constraints available; these include geological fault information, historical seismicity, geodetic observations, stress-transfer interactions, and foreshock/aftershock statistics. One reason for developing and testing a range of models is to evaluate the extent to which any one can be exported to another region where the options are more limited. RELM is not intended to be a one-time effort. Rather, we are building an infrastructure that will facilitate an ongoing incorporation of new scientific findings into seismic-hazard models. The effort involves the development of several community models and databases, one of which is new Java-based code for probabilistic seismic hazard analysis (PSHA). Although several different PSHA codes presently exist, none are open source, well documented, and written in an object-oriented programming language (which is ideally suited for PSHA). Furthermore, we need code that is flexible enough to accommodate the wide range of models currently under development in RELM. The new code is being developed under

  13. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E; Sakurai, K; Borsato, M; Buchmueller, O; Cavanaugh, R; Chobanova, V; Citron, M; De Roeck, A; Dolan, M J; Ellis, J R; Flächer, H; Heinemeyer, S; Isidori, G; Lucio, M; Santos, D Martínez; Olive, K A; Richards, A; de Vries, K J; Weiglein, G

    2016-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  14. Engineering model of the electric drives of separation device for simulation of automatic control systems of reactive power compensation by means of serially connected capacitors

    Science.gov (United States)

    Juromskiy, V. M.

    2016-09-01

    A mathematical model of the electric drive of a high-speed separation device has been developed in the Simulink dynamic-system modeling environment (MATLAB). The model is focused on the study of automatic control systems for the power factor (cos φ) of an actuator that compensate the reactive component of the total power by switching a capacitor bank in series with the actuator. The model is based on the methodology of structural modeling of dynamic processes.

  15. Automatic Verification of Biochemical Network Using Model Checking Method

    Institute of Scientific and Technical Information of China (English)

    Jinkyung Kim; Younghee Lee; Il Moon

    2008-01-01

    This study focuses on automatic searching and verifying methods for the reachability, transition logics and hierarchical structure in all possible paths of biological processes using model checking. The automatic search and verification of alternative paths within complex and large networks in biological processes can provide a considerable number of solutions, which is difficult to handle manually. Model checking is an automatic method for verifying whether a circuit or a condition, expressed as a concurrent transition system, satisfies a set of properties expressed in a temporal logic, such as computation tree logic (CTL). This article demonstrates that model checking is feasible in biochemical network verification and shows certain advantages over simulation for querying and searching of special behavioral properties in biochemical processes.
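
    As a toy illustration of the model-checking idea (the transition system and property below are invented, and a real checker handles far richer logics), an EF-style reachability query ("some state satisfying a predicate is reachable") can be answered by a plain breadth-first search:

```python
from collections import deque

def check_EF(transitions, initial, prop):
    """Return True if a state satisfying `prop` is reachable from `initial`.

    `transitions` maps a state to the set of its successor states; this evaluates
    the CTL formula EF p at the initial state by breadth-first search.
    """
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if prop(state):
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical biochemical states and transitions
transitions = {
    "A": {"B", "C"},
    "B": {"D"},
    "C": {"D"},
    "D": set(),
}
print(check_EF(transitions, "A", lambda s: s == "D"))  # True: D is reachable
```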

  16. Likelihood Analysis for Mega Pixel Maps

    Science.gov (United States)

    Kogut, Alan J.

    1999-01-01

    The derivation of cosmological parameters from astrophysical data sets routinely involves operations counts which scale as $O(N^3)$, where $N$ is the number of data points. Currently planned missions, including MAP and Planck, will generate sky maps with $N_d = 10^6$ or more pixels. Simple "brute force" analysis, applied to such mega-pixel data, would require years of computing even on the fastest computers. We describe an algorithm which allows estimation of the likelihood function in the direct pixel basis. The algorithm uses a conjugate gradient approach to evaluate $\chi^2$ and a geometric approximation to evaluate the determinant. Monte Carlo simulations provide a correction to the determinant, yielding an unbiased estimate of the likelihood surface in an arbitrary region surrounding the likelihood peak. The algorithm requires $O(N_d^{3/2})$ operations and $O(N_d)$ storage for each likelihood evaluation, and allows for significant parallel computation.
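
    A minimal sketch of the conjugate-gradient part of such a scheme (with a small dense stand-in for the pixel covariance matrix; the determinant approximation and Monte Carlo correction are not shown) might look as follows:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

# Toy stand-in for the pixel-pixel covariance matrix C; for mega-pixel maps C is
# only ever applied implicitly (matrix-vector products), never stored or inverted.
n = 500
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)          # symmetric positive definite
d = rng.normal(size=n)               # toy data vector

# chi^2 = d^T C^{-1} d, evaluated by solving C x = d with conjugate gradients
op = LinearOperator((n, n), matvec=lambda v: C @ v)
x, info = cg(op, d)
chi2 = d @ x
print(chi2, "cg converged" if info == 0 else "cg did not converge")
```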

  17. Maximum-likelihood method in quantum estimation

    CERN Document Server

    Paris, M G A; Sacchi, M F

    2001-01-01

    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.

  18. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio for a model with h parameters in total, two of which are constrained and correlated.

  19. Modeling the relationship between rapid automatized naming and literacy skills across languages varying in orthographic consistency.

    Science.gov (United States)

    Georgiou, George K; Aro, Mikko; Liao, Chen-Huei; Parrila, Rauno

    2016-03-01

    The purpose of this study was twofold: (a) to contrast the prominent theoretical explanations of the rapid automatized naming (RAN)-reading relationship across languages varying in orthographic consistency (Chinese, English, and Finnish) and (b) to examine whether the same accounts can explain the RAN-spelling relationship. In total, 304 Grade 4 children (102 Chinese-speaking Taiwanese children, 117 English-speaking Canadian children, and 85 Finnish-speaking children) were assessed on measures of RAN, speed of processing, phonological processing, orthographic processing, reading fluency, and spelling. The results of path analysis indicated that RAN had a strong direct effect on reading fluency that was of the same size across languages and that only in English was a small proportion of its predictive variance mediated by orthographic processing. In contrast, RAN did not exert a significant direct effect on spelling, and a substantial proportion of its predictive variance was mediated by phonological processing (in Chinese and Finnish) and orthographic processing (in English). Given that RAN predicted reading fluency equally well across languages and that phonological/orthographic processing had very little to do with this relationship, we argue that the reason why RAN is related to reading fluency should be sought in domain-general factors such as serial processing and articulation. PMID:26615467

  20. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model. PMID:26160753
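
    As a small numerical illustration of the RMNE statistic discussed above (allele frequencies are invented, Hardy-Weinberg proportions are assumed, and dropout is ignored), the probability that a random person is not excluded can be computed per locus and multiplied across loci:

```python
def rmne_per_locus(included_allele_freqs):
    """Probability that a random person's two alleles both lie in the observed set
    (Hardy-Weinberg: the two alleles are drawn independently)."""
    p = sum(included_allele_freqs)
    return p * p

# Hypothetical mixture: population frequencies of the alleles seen at three loci
loci = [
    [0.10, 0.22, 0.05],
    [0.31, 0.08],
    [0.12, 0.18, 0.09],
]
per_locus = [rmne_per_locus(freqs) for freqs in loci]
combined = 1.0
for value in per_locus:
    combined *= value          # independence across loci assumed
print(per_locus, combined)
```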

  2. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...
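
    ADVI is available in several probabilistic programming systems; assuming, for example, the PyMC library and its pm.fit(method="advi") interface, a simple model could be fitted along these lines (a sketch, not the authors' implementation):

```python
import numpy as np
import pymc as pm  # assumes a recent PyMC release

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.5, size=200)     # synthetic observations

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)

    # ADVI: a factorised Gaussian approximation to the posterior is optimised
    # by stochastic gradient ascent on the evidence lower bound (ELBO).
    approx = pm.fit(n=20_000, method="advi")
    idata = approx.sample(1_000)

print(float(idata.posterior["mu"].mean()), float(idata.posterior["sigma"].mean()))
```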

  3. Towards an automatic statistical model for seasonal precipitation prediction and its application to Central and South Asian headwater catchments

    Science.gov (United States)

    Gerlitz, Lars; Gafurov, Abror; Apel, Heiko; Unger-Sayesteh, Katy; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    Statistical climate forecast applications typically utilize a small set of large scale SST or climate indices, such as ENSO, PDO or AMO, as predictor variables. If the predictive skill of these large scale modes is insufficient, specific predictor variables such as customized SST patterns are frequently included. Hence statistically based climate forecast models are either based on a fixed number of climate indices (and thus might not consider important predictor variables) or are highly site specific and barely transferable to other regions. With the aim of developing an operational seasonal forecast model, which is easily transferable to any region in the world, we present a generic data mining approach which automatically selects potential predictors from gridded SST observations and reanalysis-derived large scale atmospheric circulation patterns and generates robust statistical relationships with posterior precipitation anomalies for user selected target regions. Potential predictor variables are derived by means of a cellwise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random forest based forecast model is automatically calibrated and evaluated by means of the preliminarily generated predictor variables. As an example, the model is applied and evaluated for selected headwater catchments in Central and South Asia. Particularly for winter and spring precipitation (which is associated with westerly disturbances in the entire target domain), the model shows solid results with correlation coefficients up to 0.7, although the variability of precipitation rates is highly underestimated. Likewise, for the monsoonal precipitation amounts in the South Asian target areas a certain skill of the model could
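
    A heavily stripped-down version of this statistical chain (cell-wise correlation screening followed by a random-forest regression; the gridded data here are random placeholders and the cluster-aggregation step is omitted) might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

n_months, n_cells, lead = 240, 500, 3          # 20 years of monthly fields, toy grid
sst = rng.normal(size=(n_months, n_cells))     # placeholder gridded predictor fields
x = sst[:-lead]                                # predictors at time t
y = 0.8 * x[:, 42] + rng.normal(scale=0.5, size=n_months - lead)  # toy target at t + lead

# 1) cell-wise correlation screening at the chosen lead time
corr = np.array([np.corrcoef(x[:, j], y)[0, 1] for j in range(n_cells)])
selected = np.where(np.abs(corr) > 0.3)[0]     # crude significance proxy

# 2) random-forest forecast model trained on the selected predictor cells
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(x[:, selected], y)
print(len(selected), round(model.score(x[:, selected], y), 3))
```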

  4. Weak Consistency of Quasi-Maximum Likelihood Estimates in Generalized Linear Models

    Institute of Scientific and Technical Information of China (English)

    张戈; 吴黎军

    2013-01-01

    In this paper, we study the solution $\hat{\beta}_n$ of the quasi-maximum likelihood equation $L_n(\beta) = \sum_{i=1}^{n} X_i H(X_i'\beta)\Lambda^{-1}(X_i'\beta)(y_i - h(X_i'\beta)) = 0$ for generalized linear models. Under the assumption of a non-canonical link function and some other mild conditions, we prove the convergence rate $\hat{\beta}_n - \beta_0 \neq O_p(\lambda_n^{-1/2})$ and show that a necessary condition for weak consistency of the quasi-maximum likelihood estimate is that $S_n^{-1} \to 0$ as $n \to \infty$.

  5. An Automatic Segmentation and Classification Framework Based on PCNN Model for Single Tooth in MicroCT Images.

    Science.gov (United States)

    Wang, Liansheng; Li, Shusheng; Chen, Rongzhen; Liu, Sze-Yu; Chen, Jyh-Cheng

    2016-01-01

    Accurate segmentation and classification of different anatomical structures of teeth from medical images play an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing three dimensional (3D) information, and classify the tooth by employing unsupervised learning Pulse Coupled Neural Networks (PCNN) model. In order to evaluate the proposed method, the experiments are conducted on the different datasets of mandibular molars and the experimental results show that our method can achieve better accuracy and robustness compared to four other state-of-the-art clustering methods. PMID:27322421

  6. An Automatic Segmentation and Classification Framework Based on PCNN Model for Single Tooth in MicroCT Images.

    Directory of Open Access Journals (Sweden)

    Liansheng Wang

    Full Text Available Accurate segmentation and classification of different anatomical structures of teeth from medical images play an essential role in many clinical applications. Usually, the anatomical structures of teeth are manually labelled by experienced clinical doctors, which is time consuming. However, automatic segmentation and classification is a challenging task because the anatomical structures and surroundings of the tooth in medical images are rather complex. Therefore, in this paper, we propose an effective framework which is designed to segment the tooth with a Selective Binary and Gaussian Filtering Regularized Level Set (GFRLS) method improved by fully utilizing three dimensional (3D) information, and classify the tooth by employing unsupervised learning Pulse Coupled Neural Networks (PCNN) model. In order to evaluate the proposed method, the experiments are conducted on the different datasets of mandibular molars and the experimental results show that our method can achieve better accuracy and robustness compared to four other state-of-the-art clustering methods.

  8. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    Science.gov (United States)

    Bechtle, Philip; Heinemeyer, Sven; Stål, Oscar; Stefaniak, Tim; Weiglein, Georg

    2015-09-01

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full dataset. In addition to publishing an exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented in the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the search and the rate measurements of the SM-like Higgs in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org.

  9. Automatic Reading

    Institute of Scientific and Technical Information of China (English)

    胡迪

    2007-01-01

    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  10. Automatically updating predictive modeling workflows support decision-making in drug design.

    Science.gov (United States)

    Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O

    2016-09-01

    Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.

  11. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2015-09-29

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
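
    To make the idea of a pairwise composite likelihood concrete (using a Gaussian toy model rather than a max-stable process, purely for illustration), one can sum bivariate log-densities over all pairs of sites:

```python
import itertools

import numpy as np
from scipy.stats import multivariate_normal

def pairwise_composite_loglik(data, cov):
    """Sum of bivariate Gaussian log-likelihoods over all pairs of sites.

    data: (n_replicates, n_sites) array; cov: (n_sites, n_sites) covariance.
    """
    n_sites = data.shape[1]
    total = 0.0
    for i, j in itertools.combinations(range(n_sites), 2):
        sub_cov = cov[np.ix_([i, j], [i, j])]
        total += multivariate_normal(mean=np.zeros(2), cov=sub_cov).logpdf(
            data[:, [i, j]]
        ).sum()
    return total

rng = np.random.default_rng(2)
sites = rng.uniform(size=(6, 2))                               # 6 toy locations
dist = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
cov = np.exp(-dist)                                            # exponential covariance
data = rng.multivariate_normal(np.zeros(6), cov, size=100)
print(pairwise_composite_loglik(data, cov))
```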

  12. Automatic calibration of a global hydrological model using satellite data as a proxy for stream flow data

    Science.gov (United States)

    Revilla-Romero, B.; Beck, H.; Salamon, P.; Burek, P.; Thielen, J.; de Roo, A.

    2014-12-01

    Model calibration and validation are commonly restricted due to the limited availability of historical in situ observational data. Several studies have demonstrated that using complementary remotely sensed datasets such as soil moisture for model calibration has led to improvements. The aim of this study was to evaluate the use of the remotely sensed signal of the Global Flood Detection System (GFDS) as a proxy for stream flow data to calibrate a global hydrological model used in operational flood forecasting. This is done in different river basins located in Africa, South and North America for the time period 1998-2010 by comparing model calibration using the raw satellite signal as a proxy for river discharge with a model calibration using in situ stream flow observations. River flow is simulated using the LISFLOOD hydrological model for the flow routing in the river network and the groundwater mass balance. The model is set up with global coverage at a horizontal grid resolution of 0.1 degree and a daily time step for input/output data. Based on prior tests, a set of seven model parameters was used for calibration. The parameter space was defined by specifying lower and upper limits on each parameter. The objective functions considered were Pearson correlation (R), Nash-Sutcliffe Efficiency log (NSlog) and Kling-Gupta Efficiency (KGE'), where both single- and multi-objective functions were employed. After multiple iterations, for each catchment, the algorithm generated a Pareto-optimal front of solutions. A single parameter set was selected which had the lowest distance to R=1 for the single-objective function and to NSlog=1 and KGE'=1 for the multi-objective function. The results for the different test river basins are compared against the performance obtained using the same objective functions with in situ discharge observations. Automatic calibration strategies of the global hydrological model using satellite data as a proxy for stream flow data are outlined and discussed.
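
    As a small illustration of the objective functions mentioned above (toy discharge series; the formulas follow the usual NSlog and modified KGE' definitions, not necessarily the exact variants used in the study):

```python
import numpy as np

def nse_log(sim, obs, eps=1e-6):
    """Nash-Sutcliffe efficiency computed on log-transformed flows (NSlog)."""
    ls, lo = np.log(sim + eps), np.log(obs + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

def kge_prime(sim, obs):
    """Modified Kling-Gupta efficiency (KGE') with bias and variability ratios."""
    r = np.corrcoef(sim, obs)[0, 1]
    beta = sim.mean() / obs.mean()                               # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=50.0, size=365)        # toy daily discharge
sim = np.clip(obs * 1.1 + rng.normal(scale=10.0, size=365), 1e-3, None)
print(nse_log(sim, obs), kge_prime(sim, obs))
```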

  13. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAINING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    Directory of Open Access Journals (Sweden)

    G. T. Kulakov

    2014-01-01

    Full Text Available The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operational ability on load relief has been developed; it makes it possible to determine maximum deviations of the water level without carrying out actual tests or adjusting the plant settings for starting up technological protection systems according to the water level in the drum. The paper reveals the reasons for static regulation errors in the standard automatic control system when handling internal and external disturbances caused by changes in superheated steam consumption. The practical significance of modernizing the automatic control system for steam generator water supply is substantiated in the paper.

  14. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  15. Efficient maximum likelihood parameterization of continuous-time Markov processes

    CERN Document Server

    McGibbon, Robert T

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
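
    The basic idea, maximizing the likelihood of transition counts observed at a fixed interval under a candidate rate matrix, can be sketched for a two-state process as follows (a toy illustration, without the detailed-balance constraints or confidence intervals of the paper's estimator):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Toy setting: a 2-state continuous-time Markov process observed every tau units.
tau = 0.5
true_K = np.array([[-1.0, 1.0],
                   [0.3, -0.3]])          # generator (rows sum to zero)

rng = np.random.default_rng(4)
P_true = expm(tau * true_K)
# Simulate a discretely observed trajectory and collect transition counts
state, counts = 0, np.zeros((2, 2))
for _ in range(5000):
    nxt = rng.choice(2, p=P_true[state])
    counts[state, nxt] += 1
    state = nxt

def neg_log_lik(log_rates):
    k01, k10 = np.exp(log_rates)          # positivity via log-parameterisation
    K = np.array([[-k01, k01], [k10, -k10]])
    P = expm(tau * K)
    return -np.sum(counts * np.log(P))

res = minimize(neg_log_lik, x0=np.log([0.5, 0.5]))
print(np.exp(res.x))                       # estimated rates, close to (1.0, 0.3)
```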

  16. Identification and automatic segmentation of multiphasic cell growth using a linear hybrid model.

    Science.gov (United States)

    Hartmann, András; Neves, Ana Rute; Lemos, João M; Vinga, Susana

    2016-09-01

    This article considers a new mathematical model for the description of multiphasic cell growth. A linear hybrid model is proposed and it is shown that the two-parameter logistic model with switching parameters can be represented by a Switched affine AutoRegressive model with eXogenous inputs (SARX). The growth phases are modeled as continuous processes, while the switches between the phases are considered to be discrete events triggering a change in growth parameters. This framework provides an easily interpretable model, because the intrinsic behavior is the same along all the phases but with a different parameterization. Another advantage of the hybrid model is that it offers a simpler alternative to recent more complex nonlinear models. The growth phases and parameters from datasets of different microorganisms exhibiting multiphasic growth behavior such as Lactococcus lactis, Streptococcus pneumoniae, and Saccharomyces cerevisiae, were inferred. The segments and parameters obtained from the growth data are close to the ones determined by the experts. The fact that the model could explain the data from three different microorganisms and experiments demonstrates the strength of this modeling approach for multiphasic growth, and presumably other processes consisting of multiple phases. PMID:27424949

  17. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  18. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    Science.gov (United States)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow pattern and fatigue of the vessel wall and are prevalent causes leading to high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools in modeling the individual hemodynamics and biomechanics as well as their interaction in the human arteries. The computations allow non-invasively simulating patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models providing a physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  19. Semi-automatic liver tumor segmentation with hidden Markov measure field model and non-parametric distribution estimation.

    Science.gov (United States)

    Häme, Yrjö; Pollari, Mika

    2012-01-01

    A novel liver tumor segmentation method for CT images is presented. The aim of this work was to reduce the manual labor and time required in the treatment planning of radiofrequency ablation (RFA), by providing accurate and automated tumor segmentations reliably. The developed method is semi-automatic, requiring only minimal user interaction. The segmentation is based on non-parametric intensity distribution estimation and a hidden Markov measure field model, with application of a spherical shape prior. A post-processing operation is also presented to remove the overflow to adjacent tissue. In addition to the conventional approach of using a single image as input data, an approach using images from multiple contrast phases was developed. The accuracy of the method was validated with two sets of patient data, and artificially generated samples. The patient data included preoperative RFA images and a public data set from "3D Liver Tumor Segmentation Challenge 2008". The method achieved very high accuracy with the RFA data, and outperformed other methods evaluated with the public data set, receiving an average overlap error of 30.3% which represents an improvement of 2.3% points to the previously best performing semi-automatic method. The average volume difference was 23.5%, and the average, the RMS, and the maximum surface distance errors were 1.87, 2.43, and 8.09 mm, respectively. The method produced good results even for tumors with very low contrast and ambiguous borders, and the performance remained high with noisy image data.

  20. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    Science.gov (United States)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse to fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and aerial datasets acquired in real urban environments. Results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud, when co-existing initial registration parameters are unavailable.
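
    The ICP refinement step mentioned above can be illustrated with a minimal point-to-point implementation (nearest-neighbour correspondences plus a Kabsch/SVD rigid-transform estimate; real pipelines add outlier rejection and robust weighting):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50):
    """Minimal point-to-point ICP: returns rotation R and translation t
    aligning `source` to `target` (both (N, 3) arrays)."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # nearest-neighbour correspondences
        matched = target[idx]
        # Best rigid transform via SVD of the cross-covariance (Kabsch)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step    # accumulate the transform
    return R, t

# Toy check: approximately recover a known rotation and translation
rng = np.random.default_rng(5)
cloud = rng.normal(size=(400, 3))
angle = 0.2
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([0.3, -0.2, 0.1])
R_est, t_est = icp(cloud, moved)
print(np.round(R_est, 2), np.round(t_est, 2))
```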

  1. An approach of crater automatic recognition based on contour digital elevation model from Chang'E Missions

    Science.gov (United States)

    Zuo, W.; Li, C.; Zhang, Z.; Li, H.; Feng, J.

    2015-12-01

    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). First, we mapped the 16-bit DEM to 256 gray levels for data compression; then, for the purposes of better visualization, the grayscale is converted into an RGB image. After that, a median filter is applied twice to the DEM for data optimization, which produces smooth, continuous outlines for the subsequent construction of the contour plane. Considering the fact that the morphology of a crater on the contour plane can be approximately expressed as an ellipse or circle, we extract the outer boundaries of contour-plane regions with the same color (gray value) as targets for further identification through an 8-neighborhood counterclockwise searching method. Then, a library of training samples is constructed from the targets calculated from sample DEM data, in which real crater targets are labeled as positive samples manually, and non-crater objects are labeled as negative ones. Several morphological features are calculated for all these samples: the major axis (L), circumference (C), area inside the boundary (S), and radius of the largest inscribed circle (R). We use R/L, R/S, C/L, C/S, R/C and S/L as the key factors for identifying craters, and apply the Fisher discriminant method to the sample library to calculate the weight of each factor and determine the discrimination formula, which is then applied to DEM data for identifying lunar craters. The method has been tested and verified with DEM data from CE-1 and CE-2, showing strong recognition ability and robustness, and is applicable to the recognition of craters with various diameters and significant morphological differences, making fast and accurate automatic crater recognition possible.
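
    A sketch of the final classification step (Fisher-style linear discriminant analysis on the shape-ratio factors; the feature values and labels below are purely hypothetical):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training table: one row per candidate contour, columns are the
# shape ratios used as discriminating factors (R/L, R/S, C/L, C/S, R/C, S/L),
# and labels mark manually identified craters (1) vs. non-crater objects (0).
rng = np.random.default_rng(7)
craters = rng.normal(loc=[0.45, 0.02, 3.2, 0.11, 0.14, 28.0],
                     scale=0.05, size=(80, 6))
non_craters = rng.normal(loc=[0.30, 0.01, 4.5, 0.20, 0.07, 22.0],
                         scale=0.08, size=(80, 6))
X = np.vstack([craters, non_craters])
y = np.array([1] * 80 + [0] * 80)

lda = LinearDiscriminantAnalysis()          # Fisher-style linear discriminant
lda.fit(X, y)
print(lda.coef_)                            # weights of the shape factors
print(lda.predict(X[:3]))                   # apply the rule to new contours
```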

  2. OPENICRA: Towards A Generic Model for Automatic Deployment of Applications in the Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gadhgadhi Ridha

    2013-10-01

    Full Text Available This paper focuses on the design and the implementation of a new generic model for automated deployment of applications in the cloud to mitigate the effects of barriers to entry, reduce the complexity of application development and simplify the process of deploying cloud services. Our proposed model, called OpenICRA, implements a layered architecture that hides the implementation details, allowing for a simple deployment process. We conducted two real case studies to validate our proposed model. Our empirical results demonstrate the effectiveness of our proposed model in deploying different types of applications without any change in their source code.

  3. Automatic control of finite element models for temperature-controlled radiofrequency ablation

    Directory of Open Access Journals (Sweden)

    Haemmerich Dieter

    2005-07-01

    Full Text Available Abstract Background The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation a common control mode is temperature-controlled mode. Commercial FEM packages do not support automated temperature control. Most researchers manually control the applied power by trial and error to keep the tip temperature of the electrodes constant. Methods We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and controls the applied voltage to keep the temperature constant. We created a closed loop system consisting of a FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed loop system simulation. Results We present results of a temperature-controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled the applied voltage in the FEM model to obtain, and keep, the electrodes at the target temperature of 100°C. The closed loop system simulation output closely correlated with the FEM model, and allowed us to optimize control parameters. Discussion The closed loop control of the FEM model allowed us to implement temperature-controlled RF ablation with minimal user input.
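
    The closed-loop idea can be sketched with a crude first-order thermal surrogate in place of the FEM model (gains and plant constants below are invented for illustration):

```python
# A toy closed loop: a first-order thermal surrogate stands in for the FEM model,
# and a PI controller adjusts the applied control signal to hold the electrode
# tip temperature at the 100 °C target.
dt, setpoint, ambient = 0.1, 100.0, 37.0     # s, °C, °C
kp, ki = 2.0, 0.5                            # illustrative PI gains
temp, integral = ambient, 0.0

def plant_step(temp, u, dt):
    """Surrogate plant: heating linear in the control signal, Newtonian cooling."""
    return temp + dt * (0.5 * u - 0.05 * (temp - ambient))

for _ in range(600):                         # 60 s of simulated time
    error = setpoint - temp
    integral += error * dt
    u = max(0.0, kp * error + ki * integral) # PI law, clamped to non-negative
    temp = plant_step(temp, u, dt)

print(round(temp, 1))                        # settles close to 100.0
```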

  4. UAV Aerial Survey: Accuracy Estimation for Automatically Generated Dense Digital Surface Model and Orthophoto Plan

    Science.gov (United States)

    Altyntsev, M. A.; Arbuzov, S. A.; Popov, R. A.; Tsoi, G. V.; Gromov, M. O.

    2016-06-01

    A dense digital surface model is one of the products generated by using UAV aerial survey data. Today more and more specialized software packages are supplied with modules for generating such models. The procedure for dense digital model generation can be completely or partly automated. Due to the lack of a reliable criterion for accuracy estimation, it is rather complicated to judge the validity of such models. One such criterion can be mobile laser scanning data, used as a source for the detailed accuracy estimation of the dense digital surface model generation. These data may also be used to estimate the accuracy of digital orthophoto plans created by using UAV aerial survey data. The results of accuracy estimation for both kinds of products are presented in the paper.

  5. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation

    NARCIS (Netherlands)

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2016-01-01

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes’ inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of

  6. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    OpenAIRE

    Fujimoto, J

    2002-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model(SM) of the electroweak and strong interactions. For a given MSSM process the Feynman graphs and amplit...

  7. Computer-aided design of curved surfaces with automatic model generation

    Science.gov (United States)

    Staley, S. M.; Jerard, R. B.; White, P. R.

    1980-01-01

    The design and visualization of three-dimensional objects with curved surfaces have always been difficult. The paper given below describes a computer system which facilitates both the design and visualization of such surfaces. The system enhances the design of these surfaces by virtue of various interactive techniques coupled with the application of B-Spline theory. Visualization is facilitated by including a specially built model-making machine which produces three-dimensional foam models. Thus, the system permits the designer to produce an inexpensive model of the object which is suitable for evaluation and presentation.

  8. A quantum framework for likelihood ratios

    CERN Document Server

    Bond, Rachael L; Ormerod, Thomas C

    2015-01-01

    The ability to calculate precise likelihood ratios is fundamental to many STEM areas, such as decision-making theory, biomedical science, and engineering. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. Equally, the use of alternative statistical approaches, such as multivariate logistic regression, may be confounded by other axiomatic conditions, e.g., low levels of co-linearity. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement. In doing so, it is argued that this quantum approach demonstrates: that the likelihood ratio is a real quality of statistical systems; that the naive Bayes' classifier is a spec...

  9. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    OpenAIRE

    M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono

    2011-01-01

    This research employs a free model that uses only sentential features without paragraph context to extract topic sentences of a paragraph. For finding the optimal combination of features, corpus-based classification is used for constructing a sentence classifier as the model. The sentence classifier is trained by using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features to extract topic sentences, and the best perfor...
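
    A minimal sketch of such a corpus-based sentence classifier (here with bag-of-words TF-IDF features and a linear SVM; the paper's sentential features such as position and meta-discourse markers would be added as extra columns, and the training sentences below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented training set: 1 = topic sentence, 0 = supporting sentence.
sentences = [
    "This paper proposes a new method for topic sentence extraction.",
    "In summary, position features dominate the classification results.",
    "The corpus was collected from undergraduate essays.",
    "Each sentence was annotated by two independent raters.",
    "We argue that meta-discourse markers signal topic sentences.",
    "The annotation took three weeks to complete.",
]
labels = [1, 1, 0, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["This study proposes a classifier for extracting topic sentences."]))
```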

  10. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    OpenAIRE

    Kang, Junhua; Deng, Fei; LI, XINWEI; WAN, FANG

    2016-01-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to lead to texture fragmentation and memory inefficiency....

  11. Modelling the influence of automaticity of behaviour on physical activity motivation, intention and actual behaviour

    OpenAIRE

    Rietdijk, Yara

    2014-01-01

    In research and in practice, social-cognitive models, such as the theory of planned behaviour (TPB), are used to predict physical activity behaviour. These models mainly focus on reflective cognitive processes. As a reflective process, intention is thought to be the most proximal predictor of behaviour. Nevertheless, research suggests that the relation between intention and actual behaviour, the so-called intention-behaviour gap, is moderate. Many health-related actions in d...

  12. FRankenstein becomes a cyborg: the automatic recombination and realignment of fold recognition models in CASP6.

    Science.gov (United States)

    Kosinski, Jan; Gajda, Michal J; Cymerman, Iwona A; Kurowski, Michal A; Pawlowski, Marcin; Boniecki, Michal; Obarska, Agnieszka; Papaj, Grzegorz; Sroczynska-Obuchowicz, Paulina; Tkaczuk, Karolina L; Sniezynska, Paulina; Sasin, Joanna M; Augustyn, Anna; Bujnicki, Janusz M; Feder, Marcin

    2005-01-01

    In the course of CASP6, we generated models for all targets using a new version of the "FRankenstein's monster approach." Previously (in CASP5) we were able to build many very accurate full-atom models by selection and recombination of well-folded fragments obtained from crude fold recognition (FR) results, followed by optimization of the sequence-structure fit and assessment of alternative alignments on the structural level. This procedure was however very arduous, as most of the steps required extensive visual and manual input from the human modeler. Now, we have automated the most tedious steps, such as superposition of alternative models, extraction of best-scoring fragments, and construction of a hybrid "monster" structure, as well as generation of alternative alignments in the regions that remain poorly scored in the refined hybrid model. We have also included the ROSETTA method to construct those parts of the target for which no reasonable structures were generated by FR methods (such as long insertions and terminal extensions). The analysis of successes and failures of the current version of the FRankenstein approach in modeling of CASP6 targets reveals that the considerably streamlined and automated method performs almost as well as the initial, mostly manual version, which suggests that it may be a useful tool for accurate protein structure prediction even in the hands of nonexperts.

  13. Towards Automatic and Topologically Consistent 3D Regional Geological Modeling from Boundaries and Attitudes

    Directory of Open Access Journals (Sweden)

    Jiateng Guo

    2016-02-01

    Full Text Available Three-dimensional (3D) geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D) geological elements remains difficult and is not necessarily robust. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF) to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata) were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data. This method could also be used in other fields of study, including mining geology and urban geotechnical investigations.
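
    A simplified stand-in for the interface-interpolation step (plain RBF interpolation of an implicit function from boundary points plus points offset along the attitude normals; a true HRBF additionally interpolates the gradient data directly):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# An implicit surface f(x, y, z) = 0 is interpolated from boundary points
# (value 0) plus points offset along the measured attitude directions (+1 / -1).
rng = np.random.default_rng(6)
theta = rng.uniform(0, 2 * np.pi, 40)
boundary = np.c_[np.cos(theta), np.sin(theta), np.zeros_like(theta)]  # toy contact
normals = np.tile([0.0, 0.0, 1.0], (40, 1))                           # toy attitudes

points = np.vstack([boundary, boundary + 0.1 * normals, boundary - 0.1 * normals])
values = np.concatenate([np.zeros(40), np.ones(40), -np.ones(40)])

f = RBFInterpolator(points, values, kernel="thin_plate_spline")
# Evaluate the implicit function at a probe point above the modelled interface
print(f(np.array([[0.5, 0.5, 0.2]])))   # positive: above the interface
```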

  14. Maximum mutual information vector quantization of log-likelihood ratios for memory efficient HARQ implementations

    OpenAIRE

    Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl; Christensen, Lars P.B.; Skovgaard Christensen, Søren

    2010-01-01

    Modern mobile telecommunication systems, such as 3GPP LTE, make use of Hybrid Automatic Repeat reQuest (HARQ) for efficient and reliable communication between base stations and mobile terminals. To this purpose, marginal posterior probabilities of the received bits are stored in the form of log-likelihood ratios (LLR) in order to combine information sent across different transmissions due to requests. To mitigate the effects of ever-increasing data rates that call for larger HARQ memory, vect...

  15. EIS-based maximum likelihood estimation of stochastic volatility model with leverage effect

    Institute of Scientific and Technical Information of China (English)

    吴鑫育; 周海林; 汪寿阳; 马超群

    2013-01-01

    The stochastic volatility model with a leverage effect (SV-L) has received a great deal of attention in the financial econometrics literature. However, estimation of the SV-L model poses difficulties. In this paper, we develop a method for maximum likelihood (ML) estimation of the SV-L model based on the efficient importance sampling (EIS) technique. Monte Carlo (MC) simulations are presented to examine the accuracy and small sample properties of our proposed method. The experimental results show that the EIS-ML method performs very well. Finally, the EIS-ML method is illustrated with real data. We apply the EIS-ML method of the SV-L model to the daily log returns of the SSE and SZSE Component Index. Empirical results show that a high persistence of volatility and a significant leverage effect exist in the Chinese stock market.

  16. Automatic estimation of midline shift in patients with cerebral glioma based on enhanced voigt model and local symmetry.

    Science.gov (United States)

    Chen, Mingyang; Elazab, Ahmed; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Li, Xiaodong; Hu, Qingmao

    2015-12-01

    Cerebral glioma is one of the most aggressive space-occupying diseases, which will exhibit midline shift (MLS) due to mass effect. MLS has been used as an important feature for evaluating the pathological severity and patients' survival possibility. Automatic quantification of MLS is challenging due to deformation, complex shape and complex grayscale distribution. An automatic method is proposed and validated to estimate MLS in patients with gliomas diagnosed using magnetic resonance imaging (MRI). The deformed midline is approximated by combining a mechanical model and local symmetry. An enhanced Voigt model which takes into account the size and spatial information of the lesion is devised to predict the deformed midline. A composite local symmetry combining local intensity symmetry and local intensity gradient symmetry is proposed to refine the predicted midline within a local window whose size is determined according to the pinhole camera model. To enhance the MLS accuracy, the axial slice with maximum MLS from each volumetric data has been interpolated from a spatial resolution of 1 mm to 0.33 mm. The proposed method has been validated on 30 publicly available clinical head MRI scans presenting with MLS. It delineates the deformed midline with maximum MLS and yields a mean difference of 0.61 ± 0.27 mm, and average maximum difference of 1.89 ± 1.18 mm from the ground truth. Experiments show that the proposed method will yield better accuracy with the geometric center of pathology being the geometric center of tumor and the pathological region being the whole lesion. It has also been shown that the proposed composite local symmetry achieves significantly higher accuracy than the traditional local intensity symmetry and the local intensity gradient symmetry. To the best of our knowledge, regarding delineation of the deformed midline, this is the first report both on quantification in gliomas and on the use of MRI, which hopefully will provide valuable information for diagnosis

  17. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large-scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large-scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272

  18. Structural modeling of G-protein coupled receptors: An overview on automatic web-servers.

    Science.gov (United States)

    Busato, Mirko; Giorgetti, Alejandro

    2016-08-01

    Despite the significant efforts and discoveries during the last few years in G protein-coupled receptor (GPCR) expression and crystallization, the receptors with known structures to date are limited only to a small fraction of human GPCRs. The lack of experimental three-dimensional structures of the receptors represents a strong limitation that hampers a deep understanding of their function. Computational techniques are thus a valid alternative strategy to model three-dimensional structures. Indeed, recent advances in the field, together with extraordinary developments in crystallography, in particular due to its ability to capture GPCRs in different activation states, have led to encouraging results in the generation of accurate models. This prompted the community of modelers to render their methods publicly available through dedicated databases and web-servers. Here, we present an extensive overview on these services, focusing on their advantages, drawbacks and their role in successful applications. Future challenges in the field of GPCR modeling, such as the predictions of long loop regions and the modeling of receptor activation states, are presented as well. PMID:27102413

  19. Automatically multi-paradigm requirements modeling and analyzing: An ontology-based approach

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    There are several purposes for modeling and analyzing the problem domain before starting the software requirements analysis. First, it focuses on the problem domain, so that the domain users can be involved easily. Secondly, a comprehensive description of the problem domain helps in obtaining a comprehensive software requirements model. This paper proposes an ontology-based approach for modeling the problem domain. It interacts with the domain users by using terminology that they can understand and guides them to provide the relevant information. A multiple-paradigm analysis approach, on the basis of the description of the problem domain, has also been presented. Three criteria, i.e. the rationality of the organization structure, the achievability of the organization goals, and the feasibility of the organization process, have been proposed. The results of the analysis can be used as feedback for guiding the domain users to provide further information on the problem domain. The models of the problem domain can also serve as documentation for the pre-requirements analysis phase, and they will be the basis for further software requirements modeling.

  20. Structural modeling of G-protein coupled receptors: An overview on automatic web-servers.

    Science.gov (United States)

    Busato, Mirko; Giorgetti, Alejandro

    2016-08-01

    Despite the significant efforts and discoveries during the last few years in G protein-coupled receptor (GPCR) expression and crystallization, the receptors with known structures to date are limited only to a small fraction of human GPCRs. The lack of experimental three-dimensional structures of the receptors represents a strong limitation that hampers a deep understanding of their function. Computational techniques are thus a valid alternative strategy to model three-dimensional structures. Indeed, recent advances in the field, together with extraordinary developments in crystallography, in particular due to its ability to capture GPCRs in different activation states, have led to encouraging results in the generation of accurate models. This prompted the community of modelers to render their methods publicly available through dedicated databases and web-servers. Here, we present an extensive overview on these services, focusing on their advantages, drawbacks and their role in successful applications. Future challenges in the field of GPCR modeling, such as the predictions of long loop regions and the modeling of receptor activation states, are presented as well.

  1. A composite likelihood approach for spatially correlated survival data.

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
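
    As a rough illustration of the idea (not the authors' estimator), the sketch below builds a pairwise composite log-likelihood from the FGM copula density c(u, v; theta) = 1 + theta*(1 - 2u)*(1 - 2v), with exponential margins and a hypothetical distance-decaying dependence parameter theta(d) = a*exp(-d/b), and maximizes it on toy data. All names and values are assumptions.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize
from scipy.stats import expon

rng = np.random.default_rng(1)

def fgm_log_density(u, v, theta):
    """Log-density of the FGM copula, c(u, v) = 1 + theta*(1-2u)*(1-2v)."""
    return np.log1p(theta * (1 - 2 * u) * (1 - 2 * v))

def pairwise_composite_loglik(params, times, dist, rate=1.0):
    """Sum of log pairwise joint densities with exponential margins and a
    distance-decaying FGM dependence parameter theta(d) = a*exp(-d/b)."""
    a, b = params
    if not (-1 < a < 1) or b <= 0:
        return -1e12  # large penalty keeps the optimizer inside valid bounds
    u = expon.cdf(times, scale=1 / rate)
    log_f = expon.logpdf(times, scale=1 / rate)
    total = 0.0
    for i, j in combinations(range(len(times)), 2):
        theta = a * np.exp(-dist[i, j] / b)
        total += log_f[i] + log_f[j] + fgm_log_density(u[i], u[j], theta)
    return total

# toy data: exponential survival times and random site locations
n = 40
times = rng.exponential(scale=1.0, size=n)
coords = rng.uniform(0, 10, size=(n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

res = minimize(lambda p: -pairwise_composite_loglik(p, times, dist),
               x0=[0.3, 2.0], method="Nelder-Mead")
print("estimated (a, b):", res.x)
```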

  2. $\\ell_0$-penalized maximum likelihood for sparse directed acyclic graphs

    CERN Document Server

    van de Geer, Sara

    2012-01-01

    We consider the problem of regularized maximum likelihood estimation for the structure and parameters of a high-dimensional, sparse directed acyclic graphical (DAG) model with Gaussian distribution, or equivalently, of a Gaussian structural equation model. We show that the $\\ell_0$-penalized maximum likelihood estimator of a DAG has about the same number of edges as the minimal-edge I-MAP (a DAG with minimal number of edges representing the distribution), and that it converges in Frobenius norm. We allow the number of nodes $p$ to be much larger than sample size $n$ but assume a sparsity condition and that any representation of the true DAG has at least a fixed proportion of its non-zero edge weights above the noise level. Our results do not rely on the restrictive strong faithfulness condition which is required for methods based on conditional independence testing such as the PC-algorithm.

  3. A composite likelihood approach for spatially correlated survival data.

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.

  4. Modelling of the automatic stabilization system of the aircraft course by a fuzzy logic method

    Science.gov (United States)

    Mamonova, T.; Syryamkin, V.; Vasilyeva, T.

    2016-04-01

    This paper concerns the development of a fuzzy model of an aircraft course stabilization system. Modelling of the aircraft course stabilization system with the application of fuzzy logic is specified, using data for an ordinary passenger plane. As a result of the study, the stabilization system models were realised in the Matlab/Simulink environment on the basis of a PID regulator and of fuzzy logic. The authors show that the use of this artificial-intelligence method reduces the regulation time to 1, which is 50 times faster than when standard techniques of control theory are used. This demonstrates a positive influence of fuzzy regulation.

  5. Maximum likelihood molecular clock comb: analytic solutions.

    Science.gov (United States)

    Chor, Benny; Khetan, Amit; Snir, Sagi

    2006-04-01

    Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)

  6. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    Directory of Open Access Journals (Sweden)

    M.L. Khodra

    2011-04-01

    This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. For finding the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best performer (80.68%) is an SVM classifier with all features.
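
    A minimal sketch of the corpus-based classification step, assuming a toy feature matrix with stand-ins for the positional and meta-discourse features; the paper's actual features, corpus and scores are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# toy feature matrix: [relative position in paragraph, meta-discourse cue flag,
# sentence length, number of noun phrases] -- stand-ins for the paper's features
n = 200
position = rng.uniform(0, 1, n)
meta_cue = rng.integers(0, 2, n)
length = rng.integers(5, 40, n)
noun_phrases = rng.integers(1, 8, n)
X = np.column_stack([position, meta_cue, length, noun_phrases])

# synthetic labels: early sentences with a meta-discourse cue tend to be topic sentences
y = ((position < 0.3) & (meta_cue == 1)).astype(int)
y ^= (rng.uniform(size=n) < 0.1)  # add some label noise

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("5-fold F1:", scores.mean())
```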

  7. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    Science.gov (United States)

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential to have artificial retina systems able to function in a way as similar as possible to the biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted for guiding the algorithm in the search for those parameters that best approximate a synthetic retinal model output with real electrophysiological recordings. Results show that this procedure exhibits a high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses.
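
    The following is a compact NSGA-II sketch using the DEAP library, with two illustrative error metrics on a toy four-parameter model standing in for the paper's four metrics and its retina simulator; the parameter ranges, population size and the `evaluate` function are assumptions, not the authors' setup.

```python
import random
import numpy as np
from deap import base, creator, tools

creator.create("FitnessMin2", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin2)

TARGET = np.array([0.3, 0.7, 0.5, 0.9])  # stand-in for recorded retinal responses

def evaluate(ind):
    p = np.asarray(ind)
    return (float(np.mean((p - TARGET) ** 2)),   # metric 1: mean squared error
            float(np.max(np.abs(p - TARGET))))   # metric 2: worst-case error

toolbox = base.Toolbox()
toolbox.register("attr", random.uniform, 0.0, 1.0)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr, 4)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxSimulatedBinaryBounded, low=0.0, up=1.0, eta=20.0)
toolbox.register("mutate", tools.mutPolynomialBounded, low=0.0, up=1.0, eta=20.0, indpb=0.25)
toolbox.register("select", tools.selNSGA2)

MU, NGEN, CXPB = 40, 50, 0.9
pop = toolbox.population(n=MU)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)
pop = toolbox.select(pop, MU)  # assigns crowding distance

for gen in range(NGEN):
    offspring = [toolbox.clone(ind) for ind in tools.selTournamentDCD(pop, MU)]
    for c1, c2 in zip(offspring[::2], offspring[1::2]):
        if random.random() <= CXPB:
            toolbox.mate(c1, c2)
        toolbox.mutate(c1)
        toolbox.mutate(c2)
        del c1.fitness.values, c2.fitness.values
    for ind in offspring:
        if not ind.fitness.valid:
            ind.fitness.values = toolbox.evaluate(ind)
    pop = toolbox.select(pop + offspring, MU)

front = tools.sortNondominated(pop, MU, first_front_only=True)[0]
print("Pareto front size:", len(front))
```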

  8. Automatic Tracing and Segmentation of Rat Mammary Fat Pads in MRI Image Sequences Based on Cartoon-Texture Model

    Institute of Scientific and Technical Information of China (English)

    TU Shengxian; ZHANG Su; CHEN Yazhu; Freedman Matthew T; WANG Bin; XUAN Jason; WANG Yue

    2009-01-01

    The growth patterns of mammary fat pads and glandular tissues inside the fat pads may be related with the risk factors of breast cancer. Quantitative measurements of this relationship are available after segmentation of mammary pads and glandular tissues. Rat fat pads may lose continuity along image sequences or adjoin similar intensity areas like epidermis and subcutaneous regions. A new approach for automatic tracing and segmentation of fat pads in magnetic resonance imaging (MRI) image sequences is presented, which does not require that the number of pads be constant or the spatial location of pads be adjacent among image slices. First, each image is decomposed into cartoon image and texture image based on cartoon-texture model. They will be used as smooth image and feature image for segmentation and for targeting pad seeds, respectively. Then, two-phase direct energy segmentation based on Chan-Vese active contour model is applied to partitioning the cartoon image into a set of regions, from which the pad boundary is traced iteratively from the pad seed. A tracing algorithm based on scanning order is proposed to accurately trace the pad boundary, which effectively removes the epidermis attached to the pad without any post processing as well as solves the problem of over-segmentation of some small holes inside the pad. The experimental results demonstrate the utility of this approach in accurate delineation of various numbers of mammary pads from several sets of MRI images.

  9. Semi-automatic segmentation of vertebral bodies in volumetric MR images using a statistical shape+pose model

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2014-03-01

    Segmentation of vertebral structures in magnetic resonance (MR) images is challenging because of poor contrast between bone surfaces and surrounding soft tissue. This paper describes a semi-automatic method for segmenting vertebral bodies in multi-slice MR images. In order to achieve a fast and reliable segmentation, the method takes advantage of the correlation between shape and pose of different vertebrae in the same patient by using a statistical multi-vertebrae anatomical shape+pose model. Given a set of MR images of the spine, we initially reduce the intensity inhomogeneity in the images by using an intensity-correction algorithm. Then a 3D anisotropic diffusion filter smooths the images. Afterwards, we extract edges from a relatively small region of the pre-processed image with a simple user interaction. Subsequently, an iterative Expectation Maximization technique is used to register the statistical multi-vertebrae anatomical model to the extracted edge points in order to achieve a fast and reliable segmentation for lumbar vertebral bodies. We evaluate our method in terms of speed and accuracy by applying it to volumetric MR images of the spine acquired from nine patients. Quantitative and visual results demonstrate that the method is promising for segmentation of vertebral bodies in volumetric MR images.

  10. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    Science.gov (United States)

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential to have artificial retina systems able to function in a way as similar as possible to the biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted for guiding the algorithm in the search for those parameters that best approximate a synthetic retinal model output with real electrophysiological recordings. Results show that this procedure exhibits a high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses. PMID:27354187

  11. On the Integration of Automatic Deployment into the ABS Modeling Language

    NARCIS (Netherlands)

    Gouw, C.P.T. de; Lienhardt, M.; Mauro, J.; Nobakht, B.; Zavattaro, G.; Dustdar, S.; Leymann, F.; Villari, M.

    2015-01-01

    In modern software systems, deployment is an integral and critical part of application development (see, e.g., the DevOps approach to software development). Nevertheless, deployment is usually overlooked at the modeling level, thus losing the possibility to perform deployment conscious decisions dur

  12. Maximum Likelihood Estimation in Panels with Incidental Trends

    OpenAIRE

    Moon, Hyungsik; Phillips, Peter C. B.

    1999-01-01

    It is shown that a local to unity parameter can be consistently estimated by maximum likelihood with panel data when the cross-section observations are independent. Consistency applies when there are no deterministic trends or when there is a homogeneous deterministic trend in the panel model. When there are heterogeneous deterministic trends, the panel MLE of the local to unity parameter is inconsistent. This outcome provides a new instance of inconsistent ML estimation in dynam...

  13. Efficient Strategies for Calculating Blockwise Likelihoods Under the Coalescent

    OpenAIRE

    Lohse, Konrad; Chmelik, Martin; Simon H Martin; Nicholas H Barton

    2015-01-01

    The inference of demographic history from genome data is hindered by a lack of efficient computational approaches. In particular, it has proven difficult to exploit the information contained in the distribution of genealogies across the genome. We have previously shown that the generating function (GF) of genealogies can be used to analytically compute likelihoods of demographic models from configurations of mutations in short sequence blocks (Lohse et al. 2011). Although the GF has a simple,...

  14. AUTOMATIC HUMAN FACE RECOGNITION USING MULTIVARIATE GAUSSIAN MODEL AND FISHER LINEAR DISCRIMINATIVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Surya Kasturi

    2014-08-01

    Face recognition plays an important role in surveillance and biometrics and is a popular application of computer vision. In this paper, color based skin segmentation is proposed to detect faces, which are then matched with faces from the dataset. The proposed color based segmentation method is tested in different color spaces to identify a suitable color space for identification of faces. Based on the sample skin distribution, a Multivariate Gaussian Model is fitted to identify skin regions, from which face regions are detected using connected components. The detected face is matched with a template and verified. The proposed Multivariate Gaussian Model - Fisher Linear Discriminative Analysis (MGM-FLDA) method is compared with the machine-learning based Viola-Jones algorithm and gives better results in terms of time.
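
    A minimal sketch of the skin-model step under stated assumptions: a multivariate Gaussian is fitted to synthetic chrominance samples and each pixel is labelled skin when its likelihood exceeds a hypothetical threshold; the paper's actual color space, training data and threshold are not reproduced.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

# toy "skin" samples in a chrominance-like 2D color space (e.g. Cb/Cr), values in [0, 255]
skin_pixels = rng.multivariate_normal(mean=[110.0, 150.0],
                                      cov=[[60.0, 20.0], [20.0, 40.0]], size=5000)

# fit the multivariate Gaussian skin model from the training samples
mu = skin_pixels.mean(axis=0)
sigma = np.cov(skin_pixels, rowvar=False)
skin_model = multivariate_normal(mean=mu, cov=sigma)

def skin_mask(chroma_image, threshold=1e-4):
    """Classify each pixel as skin if its likelihood under the fitted Gaussian
    exceeds a threshold. `chroma_image` has shape (H, W, 2) in the same color space."""
    h, w, _ = chroma_image.shape
    lik = skin_model.pdf(chroma_image.reshape(-1, 2)).reshape(h, w)
    return lik > threshold

# usage on a synthetic 4x4 "image"
test = rng.uniform(0, 255, size=(4, 4, 2))
print(skin_mask(test))
```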

  15. Generalized Linear Models of home activity for automatic detection of mild cognitive impairment in older adults.

    Science.gov (United States)

    Akl, Ahmad; Snoek, Jasper; Mihailidis, Alex

    2014-01-01

    With a globally aging population, the burden of care of cognitively impaired older adults is becoming increasingly concerning. Instances of Alzheimer's disease and other forms of dementia are becoming ever more frequent. Earlier detection of cognitive impairment offers significant benefits, but remains difficult to do in practice. In this paper, we develop statistical models of the behavior of older adults within their homes using sensor data in order to detect the early onset of cognitive decline. Specifically, we use inhomogeneous Poisson processes to model the presence of subjects within different rooms throughout the day in the home using unobtrusive sensing technologies. We compare the distributions learned from cognitively intact and impaired subjects using information theoretic tools and observe statistical differences between the two populations which we believe can be used to help detect the onset of cognitive decline.
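
    As a rough sketch of the modelling idea (not the authors' pipeline), the code below fits a piecewise-constant (hourly) inhomogeneous Poisson rate to simulated room-presence events for two hypothetical subjects and compares the profiles with a symmetrized KL divergence; all rates and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_hourly_rates(event_hours, n_days):
    """MLE of a piecewise-constant (hourly) inhomogeneous Poisson rate:
    lambda_h = (# events in hour-of-day h) / (# observed days)."""
    counts = np.bincount(event_hours, minlength=24).astype(float)
    return counts / n_days

def symmetric_kl(p, q, eps=1e-9):
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# simulate kitchen-presence events for two illustrative subjects over 30 days
days = 30
profile_a = np.array([0.1]*6 + [2.0, 3.0, 1.0] + [0.5]*9 + [2.5, 2.0] + [0.3]*4)  # regular meals
profile_b = np.roll(profile_a, 3) * 0.7                                            # shifted, damped

def simulate(profile):
    hours = []
    for _ in range(days):
        for h in range(24):
            hours += [h] * int(rng.poisson(profile[h]))
    return np.array(hours)

rates_a = fit_hourly_rates(simulate(profile_a), days)
rates_b = fit_hourly_rates(simulate(profile_b), days)
print("symmetric KL between hourly presence profiles:", symmetric_kl(rates_a, rates_b))
```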

  16. AUTOMATIC HUMAN FACE RECOGNITION USING MULTIVARIATE GAUSSIAN MODEL AND FISHER LINEAR DISCRIMINATIVE ANALYSIS

    OpenAIRE

    Surya Kasturi; Ashoka Vanjare; S.N. Omkar

    2014-01-01

    Face recognition plays an important role in surveillance, biometrics and is a popular application of computer vision. In this paper, color based skin segmentation is proposed to detect faces and is matched with faces from the dataset. The proposed color based segmentation method is tested in different color spaces to identify suitable color space for identification of faces. Based on the sample skin distribution a Multivariate Gaussian Model is fitted to identify skin regions from which face ...

  17. Toward the Automatic Generation of a Semantic VRML Model from Unorganized 3D Point Clouds

    OpenAIRE

    Ben Hmida, Helmi; Cruz, Christophe; Nicolle, Christophe; Boochs, Frank

    2011-01-01

    This paper presents our experience regarding the creation of a 3D semantic facility model out of unorganized 3D point clouds. Thus, a knowledge-based detection approach of objects using the OWL ontology language is presented. This knowledge is used to define SWRL detection rules. In addition, the combination of 3D processing built-ins and topological built-ins in SWRL rules aims at combining geometrical analysis of 3D point clouds and specialist's knowledge. This combi...

  18. Genetic algorithms used for PWRs refuel management automatic optimization: a new modelling

    International Nuclear Information System (INIS)

    A Genetic Algorithms-based system, linking the computer codes GENESIS 5.0 and ANC through the interface ALGER, has been developed, aimed at PWR fuel management optimization. An innovative codification, the Lists Model, has been incorporated into the genetic system, which avoids the use of variants of the standard crossover operator and generates only valid loading patterns in the core. The GENESIS/ALGER/ANC system has been successfully tested in an optimization study for the second cycle of Angra-1. (author)

  19. Analysis of DGNB-DK criteria for BIM-based Model Checking automatization

    DEFF Research Database (Denmark)

    Gade, Peter; Svidt, Kjeld; Jensen, Rasmus Lund

    This report includes the results of an analysis of the automation potential of the Danish edition of the building sustainability assessment method Deutsche Gesellschaft für Nachhaltiges Bauen (DGNB) for office buildings, version 2014 1.1. The analysis investigates the criteria of DGNB-DK and whether they would be suited for automation through the technological concept BIM-based Model Checking (BMC).

  20. Automatic Extraction of Three Dimensional Prismatic Machining Features from CAD Model

    Directory of Open Access Journals (Sweden)

    B.V. Sudheer Kumar

    2011-12-01

    Machining features recognition provides the necessary platform for computer aided process planning (CAPP) and plays a key role in the integration of computer aided design (CAD) and computer aided manufacturing (CAM). This paper presents a new methodology for extracting features from the geometrical data of the CAD model present in the form of Virtual Reality Modeling Language (VRML) files. First, the point cloud is separated into the available number of horizontal cross sections. Each cross section consists of a 2D point cloud. Then, a collection of points represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for feature extraction. These extracted manufacturing features give the necessary information regarding the manufacturing activities needed to manufacture the part. Software in the Microsoft Visual C++ environment is developed to recognize the features, where geometric information of the part is extracted from the CAD model. By using these data, an output file, i.e., a text file, is generated, which gives all the machinable features present in the part. This process has been tested on various parts and successfully extracted all the features.

  1. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...
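
    The symmetry-preservation claim can be illustrated with a toy simulated likelihood: with antithetic standard-normal draws, flipping the sign of the mixed parameter's standard deviation leaves the simulated value unchanged, whereas plain draws break the symmetry. The model and numbers below are illustrative assumptions, not the paper's application.

```python
import numpy as np

rng = np.random.default_rng(5)

def sim_loglik(sigma, draws, y=1.2, beta=0.5):
    """Simulated log-likelihood of a one-observation toy mixed model
    y = beta + sigma*z + noise, with z integrated out by simulation."""
    dens = np.exp(-0.5 * (y - beta - sigma * draws) ** 2) / np.sqrt(2 * np.pi)
    return np.log(dens.mean())

R = 50
plain = rng.standard_normal(R)
antithetic = np.concatenate([plain[: R // 2], -plain[: R // 2]])  # pair each draw with its mirror

for s in (0.8, -0.8):
    print(f"sigma={s:+.1f}  plain: {sim_loglik(s, plain):+.6f}  "
          f"antithetic: {sim_loglik(s, antithetic):+.6f}")
# With antithetic draws the two signs of sigma give identical values (exact symmetry);
# with plain draws they differ because of simulation noise.
```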

  2. Maximum likelihood estimation of search costs

    NARCIS (Netherlands)

    Moraga González, José; Wildenbeest, Matthijs R.

    2008-01-01

    In a recent paper Hong and Shum [2006. Using price distributions to estimate search costs. Rand Journal of Economics 37, 257-275] present a structural method to estimate search cost distributions. We extend their approach to the case of oligopoly and present a new maximum likelihood method to estima

  3. Maximum Likelihood Estimation of Search Costs

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)

    2006-01-01

    textabstractIn a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p

  4. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    Science.gov (United States)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology results in a growing number of applications ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007). In comparison with the other two stages, the acquisition and the registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud, which is a collection of points sampled from the surface of a 3D object. Such a point cloud can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points and thus less detail than a point cloud obtained in situ. The goal of this study was to facilitate the modeling process of a building starting from 3D laser scanner data. In order to do this, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box is able to rotate 360° around an axis in a corner of the wall in search of the points of other walls. In this way we can eliminate noise points. These are unwanted or irrelevant points. If there is an angled roof, the box can also turn around the edge of the wall and the roof. With the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building. By interior outline we mean the outline of the openings like windows or doors. This script is based on the distances between the points and vector characteristics. Two consecutive points with a relatively large distance will form the outline of an opening. Once those points are found, the interior outline

  5. Automatic Black-Box Model Order Reduction using Radial Basis Functions

    Energy Technology Data Exchange (ETDEWEB)

    Stephanson, M B; Lee, J F; White, D A

    2011-07-15

    Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted only to spectral responses. We also discuss automated methods for generating a reduced order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction (MOR) may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight about what order is actually needed to reach the desired accuracy. Finally, it is not clear how to efficiently extend most
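
    As a generic illustration of a black-box surrogate built from radial basis functions (not the paper's specific MOR algorithm), the sketch below interpolates a scalar response of a hypothetical `expensive_solver` over a two-dimensional parameter space and checks the error on held-out samples. It assumes SciPy's RBFInterpolator (SciPy 1.7 or later); the solver, parameter ranges and sample counts are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

def expensive_solver(params):
    """Stand-in for a full-order field solve: returns a scalar response
    (e.g. |S11|) as a function of (frequency, permittivity)."""
    f, eps_r = params
    return np.abs(np.sinc((f - 2.0 * np.sqrt(eps_r)) * 3.0)) + 0.05 * eps_r

# sample the parameter space and build the black-box reduced-order surrogate
train_pts = rng.uniform([1.0, 1.0], [3.0, 5.0], size=(80, 2))   # (freq GHz, eps_r)
train_vals = np.array([expensive_solver(p) for p in train_pts])
surrogate = RBFInterpolator(train_pts, train_vals, kernel="thin_plate_spline")

# error check: evaluate held-out points and report the worst case
test_pts = rng.uniform([1.0, 1.0], [3.0, 5.0], size=(200, 2))
true_vals = np.array([expensive_solver(p) for p in test_pts])
pred_vals = surrogate(test_pts)
print("max abs error on held-out samples:", np.max(np.abs(pred_vals - true_vals)))
```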

  6. Automatic TCAD model calibration for multi-cellular Trench-IGBTs

    Science.gov (United States)

    Maresca, Luca; Breglio, Giovanni; Irace, Andrea

    2014-01-01

    TCAD simulators are a consolidated tool in the field of semiconductor research because of their predictive capability. However, an accurate calibration of the models is needed in order to get quantitatively accurate results. In this work a calibration procedure of the TCAD elementary cell, specific for Trench IGBTs with a blocking voltage of 600 V, is presented. It is based on the error minimization between the experimental and the simulated terminal curves of the device at two temperatures. The procedure is applied to a PT-IGBT and a good predictive capability is shown in the simulation of both the short-circuit and turn-off tests.

  7. Model Based Automatic Segmentation Of Tree Stems From Single Scan Data

    Science.gov (United States)

    Boesch, R.

    2013-10-01

    Forest inventories collect feature data manually on terrestrial field plots. Measuring large amounts of breast height diameters and tree positions is time-consuming. Terrestrial laser scanning could be an additional instrument to collect precise and full inventory data in the 3D space. As a preliminary assumption, single-scan data are used to evaluate a minimal data acquisition scheme. To extract features like trees and diameter from the scanned point cloud, a simple geometric model world is defined in 3D. Trees are cylinder shapes vertically located on a plane. Using a RANSAC-based segmentation approach, cylinders are fitted iteratively in the point cloud. Several threshold parameters increase the robustness of the segmentation model and extract point clouds of single trees, which still contain branches and the tree crown. Fitting circles along the stem using point cloud slices allows refining the effective diameter at customized heights. The cross section of a single tree point cloud covers only the semicircle facing the scan location, but is still contiguous enough to estimate diameters by using a robust circle fitting method.
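
    A minimal sketch of the circle-fitting step under stated assumptions: an algebraic (Kasa) circle fit wrapped in a simple RANSAC loop estimates the stem diameter from a noisy semicircular slice. The thresholds and the synthetic data are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, r)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def ransac_circle(xy, n_iter=500, tol=0.01):
    """RANSAC wrapper around the circle fit to reject branch/noise points."""
    best_inliers = np.zeros(len(xy), dtype=bool)
    for _ in range(n_iter):
        cx, cy, r = fit_circle(xy[rng.choice(len(xy), 3, replace=False)])
        resid = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_circle(xy[best_inliers]), best_inliers

# synthetic stem slice: a semicircle facing the scanner (r = 0.15 m) plus clutter points
theta = rng.uniform(-np.pi / 2, np.pi / 2, 150)
stem = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)]) + rng.normal(0, 0.003, (150, 2))
clutter = rng.uniform(-0.4, 0.4, (40, 2))
points = np.vstack([stem, clutter])

(cx, cy, r), inliers = ransac_circle(points)
print(f"estimated diameter: {2 * r:.3f} m using {inliers.sum()} inlier points")
```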

  8. Optimization of Automatic Digital Phase Correction Model of Ka-band Radar

    Institute of Scientific and Technical Information of China (English)

    高山; 刘桂生; 李天宝; 何嘉靖; 贵宇

    2014-01-01

    An inaccurate auto-tracking zero point in practical operation may cause the automatic digital phase correction check of Ka-band equipment to fail. Aiming at this problem, this paper first analyzes the automatic digital phase correction model theoretically and computes the resulting resolution error. Then, combining the performance characteristics of the equipment, it proposes optimizations of the data processing algorithm that solve the problem and further enhance the applicability of the automatic digital phase correction model.

  9. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate a non-uniform CFD mesh for image-based human airway models. The sizes of the generated tetrahedral elements vary in both radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.

  10. A Sensitive and Automatic White Matter Fiber Tracts Model for Longitudinal Analysis of Diffusion Tensor Images in Multiple Sclerosis.

    Directory of Open Access Journals (Sweden)

    Claudio Stamile

    Diffusion tensor imaging (DTI) is a sensitive tool for the assessment of microstructural alterations in brain white matter (WM). We propose a new processing technique to detect local and global longitudinal changes of diffusivity metrics in homologous regions along WM fiber-bundles. To this end, a reliable and automatic processing pipeline was developed in three steps: (1) co-registration and diffusion metrics computation, (2) tractography, bundle extraction and processing, and (3) longitudinal fiber-bundle analysis. The last step was based on an original Gaussian mixture model providing a fine analysis of fiber-bundle cross-sections, and allowing a sensitive detection of longitudinal changes along fibers. This method was tested on simulated and clinical data. High levels of F-Measure were obtained on simulated data. Experiments on the cortico-spinal tract and inferior fronto-occipital fasciculi of five patients with Multiple Sclerosis (MS) included in a weekly follow-up protocol highlighted the greater sensitivity of this fiber-scale approach to detect small longitudinal alterations.
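
    A rough sketch of the cross-section analysis idea (not the authors' model): a two-component Gaussian mixture is fitted to toy FA values from one fiber-bundle cross-section at two time points, and the weight of the low-FA component is tracked as a longitudinal-change indicator. All values are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# toy cross-section of a fiber bundle: FA values sampled at points of one slice,
# a "healthy" component plus a small "altered" component with lower FA
fa_baseline = np.vstack([rng.normal(0.55, 0.04, (180, 1)),
                         rng.normal(0.38, 0.05, (20, 1))])

# follow-up scan of the same cross-section: the altered component grows
fa_followup = np.vstack([rng.normal(0.55, 0.04, (140, 1)),
                         rng.normal(0.36, 0.05, (60, 1))])

def fit_two_component(fa):
    gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(fa)
    order = np.argsort(gmm.means_.ravel())          # component 0 = lower-FA (altered)
    return gmm.means_.ravel()[order], gmm.weights_[order]

for label, fa in [("baseline", fa_baseline), ("follow-up", fa_followup)]:
    means, weights = fit_two_component(fa)
    print(f"{label}: altered-component mean FA = {means[0]:.2f}, weight = {weights[0]:.2f}")
# A growing weight of the low-FA component across time points flags a longitudinal change.
```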

  11. A Weighted Likelihood Ratio of Two Related Negative Hypergeomeric Distributions

    Institute of Scientific and Technical Information of China (English)

    Titi Obilade

    2004-01-01

    In this paper we consider some related negative hypergeometric distributions arising from the problem of sampling without replacement from an urn containing balls of different colours and in different proportions, but stopping only after some specific number of balls of different colours have been obtained. With the aid of some simple recurrence relations and identities we obtain, in the case of two colours, the moments for the maximum negative hypergeometric distribution, the minimum negative hypergeometric distribution, the likelihood ratio negative hypergeometric distribution and consequently the likelihood proportional negative hypergeometric distribution. To the extent that the sampling scheme is applicable to modelling data, as illustrated with a biological example, and in fact to many situations of estimating Bernoulli parameters for binary traits within a finite population, these are important first-step results.

  12. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum D mean, V 65, and V 75, and D mean of the anus and the bladder. For the rectum, the prediction errors (predicted-achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D mean, -1.0 ± 1.6% for V 65, and -0.4 ± 1.1% for V 75. For D mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.

  13. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum D mean, V 65, and V 75, and D mean of the anus and the bladder. For the rectum, the prediction errors (predicted-achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for D mean, -1.0 ± 1.6% for V 65, and -0.4 ± 1.1% for V 75. For D mean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.

  14. Automatic segmentation of lymph vessel wall using optimal surface graph cut and hidden Markov Models.

    Science.gov (United States)

    Jones, Jonathan-Lee; Essa, Ehab; Xie, Xianghua

    2015-08-01

    We present a novel method to segment the lymph vessel wall in confocal microscopy images using Optimal Surface Segmentation (OSS) and hidden Markov Models (HMM). OSS is used to perform a pre-segmentation on the images, to act as the initial state for the HMM. We utilize a steerable filter to determine edge based filters for both of these segmentations, and use these features to build Gaussian probability distributions for both the vessel walls and the background. From this we infer the emission probability for the HMM, and the transmission probability is learned using a Baum-Welch algorithm. We transform the segmentation problem into one of cost minimization, with each node in the graph corresponding to one state, and the weight for each node being defined using its emission probability. We define the inter-relations between neighboring nodes using the transmission probability. Having constructed the problem, it is solved using the Viterbi algorithm, allowing the vessel to be reconstructed. The optimal solution can be found in polynomial time. We present qualitative and quantitative analysis to show the performance of the proposed method. PMID:26736778
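
    The decoding step relies on the standard Viterbi algorithm; the sketch below implements it for a two-state (background/vessel-wall) HMM with toy edge-filter responses as emissions. The transition and emission values are illustrative assumptions, not the ones learned in the paper.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state sequence for a discrete HMM.
    log_emit:  (T, S) log emission score of each observation under each state
    log_trans: (S, S) log transition matrix, log_init: (S,) log initial distribution."""
    T, S = log_emit.shape
    delta = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_init + log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans          # element [i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# two states (0 = background, 1 = vessel wall) with a sticky transition matrix,
# and toy per-column edge-filter responses converted to log emission scores
edge_response = np.array([0.1, 0.2, 0.1, 0.8, 0.9, 0.85, 0.2, 0.1])
log_emit = np.log(np.column_stack([1 - edge_response, edge_response]))
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_init = np.log(np.array([0.5, 0.5]))
print(viterbi(log_emit, log_trans, log_init))   # expected: 0 0 0 1 1 1 0 0
```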

  15. Evaluation of health literacy information by elaboration likelihood model of persuasion among college students

    Institute of Scientific and Technical Information of China (English)

    郭利娜; 余小鸣; 张芯; 郭帅军; 孙玉颖; 安维维; 王嘉

    2012-01-01

    Objective: To evaluate health education information about health literacy among college students, based on the elaboration likelihood model, before implementation, in order to provide evidence for improving the efficacy of health education. Methods: Phased purposive sampling combined with convenience sampling was used to select 1 357 college students from four non-medical universities of different types in each of 7 cities across the country (Beijing, Kunming, Guangzhou, Wuhan, Nanjing, Harbin and Xi'an). All the students were asked to evaluate the health education core information with a message evaluation rating scale. Results: Different pieces of information scored differently. Among the 55 core messages, the highest score was 72.24 and the lowest 62.79. Factors influencing the evaluation of the core messages included gender and the individuals' cognition of health knowledge and health-related skills. Conclusion: Evaluation of health literacy information differs among college students. In college health education, the influence of gender should be considered, and the content of the health education core information should be set according to the students' endorsement of the messages.

  16. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip [Bonn Univ. (Germany). Physikalisches Inst.; Heinemeyer, Sven [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Staal, Oscar [Stockholm Univ. (Sweden). Dept. of Physics; Stefaniak, Tim [California Univ., Santa Cruz, CA (United States). Santa Cruz Inst. of Particle Physics (SCIPP); Weiglein, Georg [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2015-10-15

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as Supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org.

  17. What Determines the Likelihood of Structural Reforms?

    OpenAIRE

    Agnello, Luca; Castro, Vitor; Jalles, João Tovar; Sousa, Ricardo M.

    2014-01-01

    We use data for a panel of 60 countries over the period 1980-2005 to investigate the main drivers of the likelihood of structural reforms. We find that: (i) external debt crises are the main trigger of financial and banking reforms; (ii) inflation and banking crises are the key drivers of external capital account reforms; (iii) banking crises also hasten financial reforms; and (iv) economic recessions play an important role in promoting the necessary consensus for financial, capital, banking ...

  18. Corporate governance effect on financial distress likelihood: Evidence from Spain

    Directory of Open Access Journals (Sweden)

    Montserrat Manzaneque

    2016-01-01

    The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half of them classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with other previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and the proportion of independent directors on the likelihood of business failure is similar to that exerted in more extreme situations. The results go one step further and show a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a form of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on financial distress likelihood in the Spanish context. It is argued that large shareholders are passive as regards enhanced monitoring of management and, alternatively, that they do not have enough incentives to hold back the financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements have been carried out with respect to corporate governance, and where there is no empirical evidence in this respect.

  19. Targeting cognitive-affective risk mechanisms in stress-precipitated alcohol dependence: an integrated, biopsychosocial model of automaticity, allostasis, and addiction.

    Science.gov (United States)

    Garland, Eric L; Boettiger, Charlotte A; Howard, Matthew O

    2011-05-01

    This paper proposes a novel hypothetical model integrating formerly discrete theories of stress appraisal, neurobiological allostasis, automatic cognitive processing, and addictive behavior to elucidate how alcohol misuse and dependence are maintained and re-activated by stress. We outline a risk chain in which psychosocial stress initiates physiological arousal, perseverative cognition, and negative affect that, in turn, triggers automatized schema to compel alcohol consumption. This implicit cognitive process then leads to attentional biases toward alcohol, subjective experiences of craving, paradoxical increases in arousal and alcohol-related cognitions due to urge suppression, and palliative coping through drinking. When palliative coping relieves distress, it results in negative reinforcement conditioning that perpetuates the cycle by further sensitizing the system to future stressful encounters. This model has implications for development and implementation of innovative behavioral interventions (such as mindfulness training) that disrupt cognitive-affective mechanisms underpinning stress-precipitated dependence on alcohol. PMID:21354711

  20. An Analytic Linear Accelerator Source Model for Monte Carlo dose calculations. II. Model Utilization in a GPU-based Monte Carlo Package and Automatic Source Commissioning

    CERN Document Server

    Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-01-01

    We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space ring (PSR). This ring structure makes it effective in accounting for beam rotational symmetry, but not suitable for dose calculations due to rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in dose calculation, different types of sub-sources are sampled separately. Source sampling and particle transport are iterated, so that the particles being sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of PSRs, determined by solving a quadratic minimization ...

  1. Automatic GPR image classification using a Support Vector Machine Pre-screener with Hidden Markov Model confirmation

    Science.gov (United States)

    Williams, R. M.; Ray, L. E.

    2012-12-01

    This paper presents methods to automatically classify ground penetrating radar (GPR) images of crevasses on ice sheets for use with a completely autonomous robotic system. We use a combination of support vector machines (SVM) and hidden Markov models (HMM) with appropriate unbiased processing that is suitable for real-time analysis and detection. We tested and evaluated three processing schemes on 96 examples of Antarctic GPR imagery from 2010 and 104 examples of Greenland imagery from 2011, collected by our robot and a Pisten Bully tractor. The Antarctic and Greenland data were collected in the shear zone near McMurdo Station and between Thule Air Base and Summit Station, respectively. Using a modified cross validation technique, we correctly classified 86 of the Antarctic examples and 90 of the Greenland examples with a radial basis kernel SVM trained and evaluated on down-sampled and texture-mapped GPR images of crevasses, compared to a 60% classification rate using raw data. In order to reduce false positives, we use the SVM classification results as pre-screener flags that mark locations in the GPR files to evaluate with two Gaussian HMMs, and evaluate our results with a similar modified cross validation technique. The combined SVM pre-screen-HMM confirm method retains all the correct classifications by the SVM, and reduces the false positive rate to 4%. This method also reduces the computational burden in classifying GPR traces because the HMM is evaluated only on selected pre-screened traces. Our experiments demonstrate the promise, robustness and reliability of real-time crevasse detection and classification with robotic GPR surveys.

  2. Automatic generation of 2D micromechanical finite element model of silicon–carbide/aluminum metal matrix composites: Effects of the boundary conditions

    DEFF Research Database (Denmark)

    Qing, Hai

    2013-01-01

    for the automatic generation of 2D micromechanical FE-models with randomly distributed SiC particles. In order to simulate the damage process in aluminum alloy matrix and SiC particles, a damage parameter based on the stress triaxial indicator and the maximum principal stress criterion based elastic brittle damage...... are performed to study the influence of boundary condition, particle number and volume fraction of the representative volume element (RVE) on composite stiffness and strength properties....

  3. Score-based likelihood ratios for handwriting evidence.

    Science.gov (United States)

    Hepler, Amanda B; Saunders, Christopher P; Davis, Linda J; Buscaglia, JoAnn

    2012-06-10

    Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation, in particular how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure. The study was performed using a data set composed of cursive writing
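
    To make the pairing schemes concrete, here is a minimal sketch, under toy assumptions, of a score-based likelihood ratio computed with kernel density estimates of the score under each proposition; the score function, databases and pairing labels are placeholders, not the paper's data or method.

```python
# Toy score-based likelihood ratio: KDEs of the score under "same source"
# (numerator) and "different sources" (denominator), with the three alternative
# denominator pairing schemes discussed in the abstract.
import numpy as np
from scipy.stats import gaussian_kde


def score(a, b):
    return -abs(np.mean(a) - np.mean(b))   # toy score function (placeholder)


def slr(item_known, item_unknown, same_source_pairs, population_items, scheme=1):
    # Numerator database: scores between items known to share a source.
    num_scores = [score(x, y) for x, y in same_source_pairs]
    # Denominator database: three alternative constructions.
    if scheme == 1:      # (1) known-source item vs. random population items
        den_scores = [score(item_known, z) for z in population_items]
    elif scheme == 2:    # (2) unknown-source item vs. random population items
        den_scores = [score(item_unknown, z) for z in population_items]
    else:                # (3) two randomly drawn population items
        den_scores = [score(population_items[i], population_items[j])
                      for i in range(len(population_items))
                      for j in range(i + 1, len(population_items))]
    s = score(item_known, item_unknown)
    return gaussian_kde(num_scores)(s)[0] / gaussian_kde(den_scores)(s)[0]
```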

  4. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Ciller, Carlos, E-mail: carlos.cillerruiz@unil.ch [Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne (Switzerland); Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Centre d’Imagerie BioMédicale, University of Lausanne, Lausanne (Switzerland); De Zanet, Sandro I.; Rüegsegger, Michael B. [Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Department of Ophthalmology, Inselspital, Bern University Hospital, Bern (Switzerland); Pica, Alessia [Department of Radiation Oncology, Inselspital, Bern University Hospital, Bern (Switzerland); Sznitman, Raphael [Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Department of Ophthalmology, Inselspital, Bern University Hospital, Bern (Switzerland); Thiran, Jean-Philippe [Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne (Switzerland); Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne (Switzerland); Maeder, Philippe [Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne (Switzerland); Munier, Francis L. [Unit of Pediatric Ocular Oncology, Jules Gonin Eye Hospital, Lausanne (Switzerland); Kowal, Jens H. [Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern (Switzerland); Department of Ophthalmology, Inselspital, Bern University Hospital, Bern (Switzerland); and others

    2015-07-15

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
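
    For reference, the overlap metric reported above can be computed as follows; this is a generic illustration of the Dice similarity coefficient, not the authors' evaluation code.

```python
# Dice similarity coefficient between a manual and an automatic binary mask.
import numpy as np


def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|), in [0, 1]."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


# Example: two slightly offset spheres on a small voxel grid
z, y, x = np.ogrid[:50, :50, :50]
m1 = (x - 24) ** 2 + (y - 24) ** 2 + (z - 24) ** 2 < 15 ** 2
m2 = (x - 26) ** 2 + (y - 24) ** 2 + (z - 24) ** 2 < 15 ** 2
print(f"DSC = {dice(m1, m2):.3f}")
```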

  5. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma

    International Nuclear Information System (INIS)

    Purpose: Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a big challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning of retinoblastoma in infants, where it serves as a source of information, complementary to the fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept to automatically segment pathological eyes. Methods and Materials: Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM consists of the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC) and the mean distance error. Results: We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. Conclusion: We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.

  6. Semi-automatic Face Image Finding Method, Which Uses the 3D Model of the Head for Recognising an Unknown Face

    OpenAIRE

    Krutikova, O; Glazs, A

    2015-01-01

    In this paper, a semi-automatic facial recognition algorithm is proposed in case of an insufficient training set (profile, front, half-turn). The recognition algorithm uses a polygonal 3D model that is created from the base images. The control points, in the proposed method, are transferred from the base images onto the 3D model, and they are also placed on the new image from the examination set. Then, the 3D model is used to determine the rotation angle of the head on th...

  7. Evaluating the use of verbal probability expressions to communicate likelihood information in IPCC reports

    Science.gov (United States)

    Harris, Adam

    2014-05-01

    The Intergovernmental Panel on Climate Change (IPCC) prescribes that risk and uncertainty information pertaining to scientific reports, model predictions, etc., be communicated using a set of 7 likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.

  8. Multi-objective Automatic Calibration of a Semi-distributed Watershed Model Using Pareto Ordering Optimization and Genetic Algorithm

    Science.gov (United States)

    This study explored the application of a multi-objective evolutionary algorithm (MOEA) and Pareto ordering in the multiple-objective automatic calibration of the Soil and Water Assessment Tool (SWAT). SWAT was calibrated in the Calapooia watershed, Oregon, USA, with two different pairs of objective ...

  9. Exploiting Syntactic Structure for Natural Language Modeling

    OpenAIRE

    Chelba, Ciprian

    2000-01-01

    The thesis presents an attempt at using the syntactic structure in natural language for improved language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood reestimation procedure belonging to the class of expectation-maximization algorithms is employed for training the model. Experiments on the Wall Street Journal, Switchboard ...

  10. Efficient computations with the likelihood ratio distribution.

    Science.gov (United States)

    Kruijver, Maarten

    2015-01-01

    What is the probability that the likelihood ratio exceeds a threshold t, if a specified hypothesis is true? This question is asked, for instance, when performing power calculations for kinship testing, when computing true and false positive rates for familial searching and when computing the power of discrimination of a complex mixture. Answering this question is not straightforward, since there is a huge number of possible genotypic combinations to consider. Different solutions are found in the literature. Several authors estimate the threshold exceedance probability using simulation. Corradi and Ricciardi [1] propose a discrete approximation to the likelihood ratio distribution which yields a lower and upper bound on the probability. Nothnagel et al. [2] use the normal distribution as an approximation to the likelihood ratio distribution. Dørum et al. [3] introduce an algorithm that can be used for exact computation, but this algorithm is computationally intensive, unless the threshold t is very large. We present three new approaches to the problem. Firstly, we show how importance sampling can be used to make the simulation approach significantly more efficient. Importance sampling is a statistical technique that turns out to work well in the current context. Secondly, we present a novel algorithm for computing exceedance probabilities. The algorithm is exact, fast and can handle relatively large problems. Thirdly, we introduce an approach that combines the novel algorithm with the discrete approximation of Corradi and Ricciardi. This last approach can be applied to very large problems and yields a lower and upper bound on the exceedance probability. The use of the different approaches is illustrated with examples from forensic genetics, such as kinship testing, familial searching and mixture interpretation. The algorithms are implemented in an R-package called DNAprofiles, which is freely available from CRAN.
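
    As a hedged toy example of the importance-sampling idea (not the forensic computation itself): the identity P_Hd(LR > t) = E_Hp[ 1{LR > t} / LR ] lets one simulate under the numerator hypothesis and reweight, which is far more efficient when the exceedance event is rare under the denominator hypothesis. The Gaussian hypotheses below are invented for illustration.

```python
# Toy illustration: probability under Hd that the likelihood ratio exceeds t,
# estimated naively under Hd and by importance sampling from Hp.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
t = 50.0
lr = lambda x: norm.pdf(x, loc=1) / norm.pdf(x, loc=0)   # LR for Hp: N(1,1) vs Hd: N(0,1)

# Naive Monte Carlo under Hd: very few exceedances, high relative error.
x_d = rng.normal(0, 1, 100_000)
naive = np.mean(lr(x_d) > t)

# Importance sampling: simulate under Hp and weight each exceedance by 1/LR.
x_p = rng.normal(1, 1, 100_000)
is_est = np.mean((lr(x_p) > t) / lr(x_p))

print(f"naive: {naive:.2e}   importance sampling: {is_est:.2e}")
```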

  11. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.

  12. Likelihood Functions for Galaxy Cluster Surveys

    CERN Document Server

    Holder, G

    2006-01-01

    Galaxy cluster surveys offer great promise for measuring cosmological parameters, but survey analysis methods have not been widely studied. Using methods developed decades ago for galaxy clustering studies, it is shown that nearly exact likelihood functions can be written down for galaxy cluster surveys. The sparse sampling of the density field by galaxy clusters allows simplifications that are not possible for galaxy surveys. An application to counts in cells is explicitly tested using cluster catalogs from numerical simulations and it is found that the calculated probability distributions are very accurate at masses above several times 10^{14}h^{-1} solar masses at z=0 and lower masses at higher redshift.

  13. Likelihood free inference for Markov processes: a comparison.

    Science.gov (United States)

    Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S

    2015-04-01

    Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space. PMID:25720092
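
    As a rough illustration of the likelihood-free setting discussed above, the sketch below runs plain ABC rejection against a Gillespie simulation of the stochastic Lotka-Volterra model; the reaction rates c1 (prey birth), c2 (predation) and c3 (predator death), the uniform prior, the mean/standard-deviation summary statistics and the tolerance are all illustrative choices, not those of the paper.

```python
# Minimal ABC rejection for a stochastic Lotka-Volterra model.
import numpy as np

rng = np.random.default_rng(2)


def gillespie_lv(c, x0=50, y0=100, t_end=20.0, n_obs=20):
    """Simulate prey/predator counts on a regular time grid."""
    x, y, t = x0, y0, 0.0
    grid = np.linspace(0, t_end, n_obs)
    obs, k = np.zeros((n_obs, 2)), 0
    while k < n_obs:
        rates = np.array([c[0] * x, c[1] * x * y, c[2] * y])
        total = rates.sum()
        t_next = t + rng.exponential(1 / total) if total > 0 else np.inf
        while k < n_obs and grid[k] < t_next:   # record state before next reaction
            obs[k] = (x, y)
            k += 1
        if total == 0 or x > 5000:              # extinction / explosion guard
            obs[k:] = (x, y)
            break
        r = rng.choice(3, p=rates / total)
        x += (r == 0) - (r == 1)                # prey birth / predation
        y += (r == 1) - (r == 2)                # predation / predator death
        t = t_next
    return obs


def summaries(traj):
    return np.concatenate([traj.mean(axis=0), traj.std(axis=0)])


true_c = np.array([1.0, 0.005, 0.6])
data_stats = summaries(gillespie_lv(true_c))

accepted = []
for _ in range(500):                            # ABC rejection loop
    c = rng.uniform([0.5, 0.001, 0.3], [1.5, 0.01, 1.0])   # uniform prior draw
    if np.linalg.norm(summaries(gillespie_lv(c)) - data_stats) < 60.0:
        accepted.append(c)

print(len(accepted), "accepted; posterior mean:",
      np.mean(accepted, axis=0) if accepted else "none")
```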

  14. Maximum Likelihood Analysis of Low Energy CDMS II Germanium Data

    CERN Document Server

    Agnese, R; Balakishiyeva, D; Thakur, R Basu; Bauer, D A; Billard, J; Borgland, A; Bowles, M A; Brandt, D; Brink, P L; Bunker, R; Cabrera, B; Caldwell, D O; Cerdeno, D G; Chagani, H; Chen, Y; Cooley, J; Cornell, B; Crewdson, C H; Cushman, P; Daal, M; Di Stefano, P C F; Doughty, T; Esteban, L; Fallows, S; Figueroa-Feliciano, E; Fritts, M; Godfrey, G L; Golwala, S R; Graham, M; Hall, J; Harris, H R; Hertel, S A; Hofer, T; Holmgren, D; Hsu, L; Huber, M E; Jastram, A; Kamaev, O; Kara, B; Kelsey, M H; Kennedy, A; Kiveni, M; Koch, K; Leder, A; Loer, B; Asamar, E Lopez; Mahapatra, R; Mandic, V; Martinez, C; McCarthy, K A; Mirabolfathi, N; Moffatt, R A; Moore, D C; Nelson, R H; Oser, S M; Page, K; Page, W A; Partridge, R; Pepin, M; Phipps, A; Prasad, K; Pyle, M; Qiu, H; Rau, W; Redl, P; Reisetter, A; Ricci, Y; Rogers, H E; Saab, T; Sadoulet, B; Sander, J; Schneck, K; Schnee, R W; Scorza, S; Serfass, B; Shank, B; Speller, D; Upadhyayula, S; Villano, A N; Welliver, B; Wright, D H; Yellin, S; Yen, J J; Young, B A; Zhang, J

    2014-01-01

    We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from ^{210}Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.

  15. How to Improve the Likelihood of CDM Approval?

    DEFF Research Database (Denmark)

    Brandt, Urs Steiner; Svendsen, Gert Tinggaard

    2014-01-01

    How can the likelihood of Clean Development Mechanism (CDM) approval be improved in the face of institutional shortcomings? To answer this question, we focus on the three institutional shortcomings of income sharing, risk sharing and corruption prevention concerning afforestation/reforestation (A/R)...... Furthermore, three main stakeholders are identified, namely investors, governments and agents in a principal-agent model regarding monitoring and enforcement capacity. Developing regions such as West Africa have, despite huge potential, not yet been integrated into A/R CDM projects. Remote sensing, however...

  16. A note on the asymptotic distribution of likelihood ratio tests to test variance components.

    Science.gov (United States)

    Visscher, Peter M

    2006-08-01

    When using maximum likelihood methods to estimate genetic and environmental components of (co)variance, it is common to test hypotheses using likelihood ratio tests, since such tests have desirable asymptotic properties. In particular, the standard likelihood ratio test statistic is assumed asymptotically to follow a chi2 distribution with degrees of freedom equal to the number of parameters tested. Using the relationship between least squares and maximum likelihood estimators for balanced designs, it is shown why the asymptotic distribution of the likelihood ratio test for variance components does not follow a chi2 distribution with degrees of freedom equal to the number of parameters tested when the null hypothesis is true. Instead, the distribution of the likelihood ratio test is a mixture of chi2 distributions with different degrees of freedom. Implications for testing variance components in twin designs and for quantitative trait loci mapping are discussed. The appropriate distribution of the likelihood ratio test statistic should be used in hypothesis testing and model selection. PMID:16899155
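
    As a worked example of the simplest case: when a single variance component is tested against its boundary value of zero, the asymptotic null distribution is the equal mixture shown below (a standard result, stated here only for illustration).

```latex
% For H0: sigma_u^2 = 0, a parameter on the boundary of the parameter space,
% the likelihood ratio statistic is asymptotically a 50:50 mixture of a point
% mass at zero and a chi-square distribution with one degree of freedom:
\[
  \Lambda \;\xrightarrow{d}\; \tfrac{1}{2}\,\chi^2_{0} \;+\; \tfrac{1}{2}\,\chi^2_{1},
\]
% so the correct p-value is half the naive chi^2_1 tail probability:
\[
  p \;=\; \tfrac{1}{2}\,\Pr\!\bigl(\chi^2_{1} \ge \Lambda_{\mathrm{obs}}\bigr).
\]
```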

  17. Modeling and simulation of the generation automatic control of electric power systems; Modelado y simulacion del control automatico de generacion de sistemas electricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Caballero Ortiz, Ezequiel

    2002-12-01

    This work is devoted to the analysis of the automatic generation control of electric power systems, based on the information generated by the load-frequency control loop and the automatic voltage regulator loop. The analysis applies classical control theory and feedback control system concepts, as well as concepts from modern control theory. The studies are carried out on a digital computer using the MATLAB program and the simulation capabilities of the SIMULINK tool. This thesis establishes the theoretical and physical concepts of automatic generation control, dividing it into the load-frequency control and automatic voltage regulator loops. The mathematical models of the two control loops are established. The element models are then interconnected in order to integrate the load-frequency control loop, and the digital simulation of the system is carried out. First, the function of the primary control is analyzed for single-area single-machine, single-area multi-machine, and multi-area multi-machine power systems. Then, the automatic generation control of single-area and multi-area power systems is studied. The economic dispatch concept is established and, with this plan, the multi-area power system is simulated; thereafter, the energy exchange among areas in steady state is studied. The mathematical models of the elements of the automatic voltage regulator control loop are interconnected. Data according to the nature of each component are generated and their behavior is simulated to analyze the system response. The two control loops are interconnected and a simulation is carried out with the previously generated data, examining the performance of the automatic generation control and the interaction between the two control loops. Finally, the pole placement and optimal control techniques of modern control theory are applied to the automatic generation control of an area
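
    As a hedged illustration of the load-frequency control loop described above (not the thesis model), the following sketch simulates a single-area system with first-order governor and turbine blocks and primary droop control; all parameter values are assumed, typical per-unit figures.

```python
# Single-area load-frequency response to a step load increase, integrated with
# forward Euler: governor -> turbine -> swing equation with load damping.
import numpy as np

# Assumed per-unit parameters (illustrative only)
Tg, Tt = 0.2, 0.5          # governor and turbine time constants [s]
H, D, R = 5.0, 0.8, 0.05   # inertia constant, load damping, droop
dPL = 0.01                 # 1% step load disturbance

dt, t_end = 0.01, 20.0
n = int(t_end / dt)
df = dPg = dPm = 0.0       # frequency, governor and mechanical power deviations
trace = np.zeros(n)

for k in range(n):
    dPg += dt / Tg * (-dPg - df / R)               # governor (primary control)
    dPm += dt / Tt * (dPg - dPm)                   # turbine
    df  += dt / (2 * H) * (dPm - dPL - D * df)     # swing equation
    trace[k] = df

print(f"steady-state frequency deviation ~ {trace[-1]:.4f} pu "
      f"(theory: {-dPL / (D + 1 / R):.4f} pu)")
```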

  18. Maximum Likelihood Analysis in the PEN Experiment

    Science.gov (United States)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^{-3} to 5 × 10^{-4} using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
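
    As a schematic illustration only (not the PEN collaboration's analysis code), the sketch below performs an unbinned maximum likelihood fit of process fractions, here reduced to two toy one-dimensional PDFs standing in for the five processes; the observable, PDFs and fractions are all invented for the example.

```python
# Toy unbinned maximum likelihood fit of a signal fraction: each event's
# likelihood is a fraction-weighted sum of per-process PDFs, and the negative
# log-likelihood is minimized over the fraction.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
# Toy observable with two processes only (stand-ins for the five listed above)
data = np.concatenate([rng.normal(70, 3, 200), rng.normal(30, 10, 1800)])
pdf_sig = lambda x: norm.pdf(x, 70, 3)     # signal-like process
pdf_bkg = lambda x: norm.pdf(x, 30, 10)    # background-like process


def nll(params):
    f = params[0]                                       # signal fraction
    mix = f * pdf_sig(data) + (1 - f) * pdf_bkg(data)
    return -np.sum(np.log(mix))


res = minimize(nll, x0=[0.5], bounds=[(1e-6, 1 - 1e-6)])
print("fitted signal fraction:", res.x[0])              # ~0.10 for this toy sample
```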

  19. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  20. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAIN-ING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    OpenAIRE

    G. T. Kulakov; A. T. Kulakov; A. N. Kukharenko

    2014-01-01

    The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operational ability on load relief has been developed in the paper; this model makes it possible to determine maximum deviations of the water level without executing actual tests and without any corrections to the plant settings for starting up technological protection systems according to the water level in the drum. The paper reveals rea...