Maximum Likelihood Fusion Model
2014-08-09
… a data fusion approach for combining Gaussian metric models of an environment constructed by multiple agents that operate outside of a global … Keywords: data fusion, hypothesis testing, maximum likelihood estimation, mobile robot navigation.
In all likelihood statistical modelling and inference using likelihood
Pawitan, Yudi
2001-01-01
Based on a course in the theory of statistics, this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates to complex studies that require generalised linear or semiparametric models.
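The opening example the abstract mentions, a simple comparison of two accident rates, can be sketched as a Poisson likelihood-ratio computation. The counts and exposures below are made up for illustration:

```python
import math

def poisson_loglik(lam, count, exposure):
    # log-likelihood of a Poisson rate lam given `count` events over `exposure` units
    mu = lam * exposure
    return count * math.log(mu) - mu - math.lgamma(count + 1)

# hypothetical data: 12 accidents in 10 years vs 5 accidents in 10 years
x1, t1 = 12, 10.0
x2, t2 = 5, 10.0

# MLEs under the alternative (separate rates) and the null (common rate)
lam1_hat, lam2_hat = x1 / t1, x2 / t2
lam0_hat = (x1 + x2) / (t1 + t2)

# likelihood-ratio statistic for H0: lam1 == lam2
lr = 2 * (poisson_loglik(lam1_hat, x1, t1) + poisson_loglik(lam2_hat, x2, t2)
          - poisson_loglik(lam0_hat, x1, t1) - poisson_loglik(lam0_hat, x2, t2))
print(round(lr, 3))  # compare to chi-square(1) quantiles
```

With these numbers the statistic is about 2.97, just short of the 5% chi-square(1) critical value of 3.84.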
Model fit after pairwise maximum likelihood
Directory of Open Access Journals (Sweden)
M. T. Barendse
2016-04-01
Full Text Available Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations.
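As a generic illustration of the pairwise idea (a Gaussian analogue, not the discrete-data polychoric setting the abstract studies), one can estimate a common correlation by maximizing the sum of bivariate normal log-likelihoods over all variable pairs. The dimension, sample size, and grid search below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: 5-variate standard normal with equicorrelation rho = 0.4
d, n, rho_true = 5, 2000, 0.4
cov = np.full((d, d), rho_true) + (1 - rho_true) * np.eye(d)
x = rng.multivariate_normal(np.zeros(d), cov, size=n)

def bivariate_normal_loglik(u, v, rho):
    # standard bivariate normal log-density with correlation rho, summed over samples
    q = (u**2 - 2 * rho * u * v + v**2) / (1 - rho**2)
    return np.sum(-0.5 * q - 0.5 * np.log(1 - rho**2) - np.log(2 * np.pi))

def pairwise_loglik(x, rho):
    # PML objective: sum of bivariate log-likelihoods over all variable pairs
    d = x.shape[1]
    return sum(bivariate_normal_loglik(x[:, i], x[:, j], rho)
               for i in range(d) for j in range(i + 1, d))

grid = np.linspace(0.05, 0.75, 141)
rho_hat = grid[np.argmax([pairwise_loglik(x, r) for r in grid])]
print(rho_hat)  # should land near 0.4
```

The pairwise objective trades a d-dimensional integral for d(d-1)/2 cheap bivariate terms, which is exactly the computational motivation described above.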
Kinnear, John; Jackson, Ruth
2017-07-01
Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect the accuracy of judgements is representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians, respectively, made judgements of statistical probability. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ² (1, n=254)=54.45), consistent with clinicians substituting subjective for statistical probability. One of the causes of this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited.
Likelihood analysis of the minimal AMSB model.
Bagnaschi, E; Borsato, M; Sakurai, K; Buchmueller, O; Cavanaugh, R; Chobanova, V; Citron, M; Costa, J C; De Roeck, A; Dolan, M J; Ellis, J R; Flächer, H; Heinemeyer, S; Isidori, G; Lucio, M; Luo, F; Santos, D Martínez; Olive, K A; Richards, A; Weiglein, G
2017-01-01
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, [Formula: see text], may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces [Formula: see text] after the inclusion of Sommerfeld enhancement in its annihilations. If most of the cold DM density is provided by the [Formula: see text], the measured value of the Higgs mass favours a limited range of [Formula: see text] (and also for [Formula: see text] if [Formula: see text]) but the scalar mass [Formula: see text] is poorly constrained. In the wino-LSP case, [Formula: see text] is constrained to about [Formula: see text] and [Formula: see text] to [Formula: see text], whereas in the Higgsino-LSP case [Formula: see text] has just a lower limit [Formula: see text] ([Formula: see text]) and [Formula: see text] is constrained to [Formula: see text] in the [Formula: see text] ([Formula: see text]) scenario. In neither case can the anomalous magnetic moment of the muon, [Formula: see text], be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the [Formula: see text] contributes only a fraction of the cold DM density, future LHC [Formula: see text]-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable [Formula: see text] to agree with the data better than in the SM in the case of wino-like DM with [Formula: see text].
Likelihood Analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E. [DESY Hamburg (Germany); Borsato, M. [Santiago de Compostela Univ. (Spain); Sakurai, K. [Durham Univ. (United Kingdom). Dept. of Physics; Warsaw Univ. (Poland). Inst. of Theoretical Physics; and others
2017-01-15
We perform a likelihood analysis of the minimal Anomaly-Mediated Supersymmetry Breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that a wino-like or a Higgsino-like neutralino LSP, χ⁰₁, may provide the cold dark matter (DM) with similar likelihood. The upper limit on the DM density from Planck and other experiments enforces m(χ⁰₁) …
Likelihood analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ⁰₁, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m(χ⁰₁) …
Groeneboom, P.; Jongbloed, G.; Witte, B.I.
2010-01-01
We consider the problem of estimating the distribution function, the density and the hazard rate of the (unobservable) event time in the current status model. A well-studied and natural nonparametric estimator for the distribution function in this model is the nonparametric maximum likelihood estimator …
Likelihood Inference for a Fractionally Cointegrated Vector Autoregressive Model
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X(t) to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β'X(t) is fractional of order d-b. … We find the asymptotic distribution of the likelihood ratio test for cointegration rank, which is a functional of fractional Brownian motion of type II.
Likelihood inference for a fractionally cointegrated vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model based on the conditional Gaussian likelihood. The model allows the process X_{t} to be fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β'X_{t} is fractional of order d-b. … We also find the asymptotic distribution of the likelihood ratio test for cointegration rank, which is a functional of fractional Brownian motion of type II.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
Tapered composite likelihood for spatial max-stable models
Sang, Huiyan
2014-05-01
Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
Owen, Art B
2001-01-01
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
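The simplest case mentioned above, a confidence region for a univariate mean under IID sampling, can be sketched by profiling weights on the observed points subject to a mean constraint. The damped Newton solver and simulated data below are illustrative assumptions, a minimal sketch rather than a full implementation:

```python
import numpy as np

def el_log_ratio(x, mu, iters=40):
    # Empirical log-likelihood ratio log R(mu) for the mean: a minimal sketch.
    # Solves the dual problem for the Lagrange multiplier lam by damped Newton;
    # the implied weights are w_i = 1 / (n * (1 + lam * (x_i - mu))).
    z = x - mu
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z
        grad = np.sum(z / denom)           # stationarity condition to drive to zero
        hess = -np.sum(z**2 / denom**2)
        step = grad / hess
        while np.any(1.0 + (lam - step) * z <= 0):
            step /= 2.0                    # keep all weights positive
        lam -= step
    return -np.sum(np.log(1.0 + lam * z))  # sum_i log(n * w_i)

rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=200)

# -2 log R(mu) is asymptotically chi-square(1); it vanishes at the sample mean
stat_at_mean = -2 * el_log_ratio(x, x.mean())
stat_at_zero = -2 * el_log_ratio(x, 0.0)
print(stat_at_mean, stat_at_zero)
```

A confidence region is then the set of mu values whose statistic falls below the chosen chi-square(1) quantile, which is how the data themselves shape the region.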
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages … of algorithms and convergence analysis, commonly required by simulation-based methods. © 2016 John Wiley & Sons, Ltd.
Generalized Self-Consistency: Multinomial logit model and Poisson likelihood.
Tsodikov, Alex; Chefo, Solomon
2008-01-01
A generalized self-consistency approach to maximum likelihood estimation (MLE) and model building was developed in (Tsodikov, 2003) and applied to a survival analysis problem. We extend the framework to obtain second-order results such as information matrix and properties of the variance. Multinomial model motivates the paper and is used throughout as an example. Computational challenges with the multinomial likelihood motivated Baker (1994) to develop the Multinomial-Poisson (MP) transformation for a large variety of regression models with multinomial likelihood kernel. Multinomial regression is transformed into a Poisson regression at the cost of augmenting model parameters and restricting the problem to discrete covariates. Imposing normalization restrictions by means of Lagrange multipliers (Lang, 1996) justifies the approach. Using the self-consistency framework we develop an alternative solution to multinomial model fitting that does not require augmenting parameters while allowing for a Poisson likelihood and arbitrary covariate structures. Normalization restrictions are imposed by averaging over artificial "missing data" (fake mixture). Lack of probabilistic interpretation at the "complete-data" level makes the use of the generalized self-consistency machinery essential.
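The Multinomial-Poisson transformation referenced above can be checked numerically in its simplest no-covariate form: setting each Poisson mean to λⱼ = N·pⱼ makes the Poisson log-likelihood kernel differ from the multinomial one only by a constant, so both are maximized at the same p. The cell counts are made up:

```python
import math

counts = [30, 50, 20]   # hypothetical multinomial cell counts
N = sum(counts)

def multinomial_loglik(p, counts):
    # multinomial log-likelihood kernel (the log N!/prod n_j! constant is dropped)
    return sum(n * math.log(pj) for n, pj in zip(counts, p))

def poisson_loglik(lams, counts):
    # independent-Poisson log-likelihood kernel (log n_j! constants dropped)
    return sum(n * math.log(l) - l for n, l in zip(counts, lams))

# Under lam_j = N * p_j, the two kernels differ only by the constant
# N - N*log(N), whatever probability vector p is plugged in:
diffs = []
for p in ([0.3, 0.5, 0.2], [0.25, 0.5, 0.25], [0.4, 0.4, 0.2]):
    lams = [N * pj for pj in p]
    diffs.append(multinomial_loglik(p, counts) - poisson_loglik(lams, counts))
print(diffs)  # three identical values: 100 - 100*log(100)
```

Because the difference does not involve p, maximizing the Poisson version recovers the multinomial MLE, which is the computational point of the transformation in the regression settings the abstract discusses.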
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
LENUS (Irish Health Repository)
Weisse, Andrea Y
2010-10-28
Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
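The idea of extending the ODE by one dimension carrying the density value can be sketched for a linear ODE, where the exact pushforward density is known for comparison. The decay rate, starting point, and Euler scheme below are arbitrary illustrative choices:

```python
import math

a = 0.8              # hypothetical linear ODE: dx/dt = -a * x
def f(x):    return -a * x
def dfdx(x): return -a

def propagate(x0, rho0, t_end, steps=20000):
    # Integrate the ODE extended by one dimension for the density value:
    #   dx/dt   = f(x)
    #   drho/dt = -rho * f'(x)   (Liouville equation along the characteristic)
    dt = t_end / steps
    x, rho = x0, rho0
    for _ in range(steps):
        # explicit Euler; both updates use the pre-step x
        x, rho = x + dt * f(x), rho - dt * rho * dfdx(x)
    return x, rho

# start from a standard normal density value at x0 = 0.5
x0 = 0.5
rho0 = math.exp(-x0**2 / 2) / math.sqrt(2 * math.pi)
x1, rho1 = propagate(x0, rho0, t_end=1.0)

# exact pushforward: X_t = X_0 * exp(-a t) has density phi(x * e^{a t}) * e^{a t}
exact = math.exp(-(x1 * math.exp(a))**2 / 2) / math.sqrt(2 * math.pi) * math.exp(a)
print(x1, rho1, exact)  # rho1 agrees with the exact value to ~1e-5 relative error
```

A single trajectory thus yields the density value at its endpoint directly, with no sampling, which is why the approach is attractive in low-probability regions where Monte Carlo is wasteful.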
Likelihood inference for a fractionally cointegrated vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2012-01-01
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0… process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0>1/2, we prove that the limit distribution of (β…
Fitting stratified proportional odds models by amalgamating conditional likelihoods.
Mukherjee, Bhramar; Ahn, Jaeil; Liu, Ivy; Rathouz, Paul J; Sánchez, Brisa N
2008-10-30
Classical methods for fitting a varying intercept logistic regression model to stratified data are based on the conditional likelihood principle to eliminate the stratum-specific nuisance parameters. When the outcome variable has multiple ordered categories, a natural choice for the outcome model is a stratified proportional odds or cumulative logit model. However, classical conditioning techniques do not apply to the general K-category cumulative logit model (K>2) with varying stratum-specific intercepts, as there is no reduction due to sufficiency; the nuisance parameters remain in the conditional likelihood. We propose a methodology to fit a stratified proportional odds model by amalgamating conditional likelihoods obtained from all possible binary collapsings of the ordinal scale. The method allows for categorical and continuous covariates in a general regression framework. We provide a robust sandwich estimate of the variance of the proposed estimator. For binary exposures, we show equivalence of our approach to the estimators already proposed in the literature. The proposed recipe can be implemented very easily in standard software. We illustrate the methods via three real data examples related to biomedical research. Simulation results comparing the proposed method with a random effects model on the stratification parameters are also furnished. Copyright 2008 John Wiley & Sons, Ltd.
Robustness of the Approximate Likelihood of the Protracted Speciation Model.
Simonet, Camille Anna; Scherrer, Raphaël; Rego-Costa, Artur; Etienne, Rampal S
2017-12-22
The protracted speciation model presents a realistic and parsimonious explanation for the observed slowdown in lineage accumulation through time, by accounting for the fact that speciation takes time. A method to compute the likelihood for this model given a phylogeny is available and allows estimation of its parameters (rate of initiation of speciation, rate of completion of speciation, and extinction rate) and statistical comparison of this model to other proposed models of diversification. However, to be mathematically tractable, this likelihood computation method makes an approximation of the protracted speciation model: it sometimes counts fewer species than one would do from a biological perspective. This approximation may have large consequences for likelihood-based inferences: it may render any conclusions based on this method completely irrelevant. Here we study to what extent this approximation affects parameter estimations. We simulated phylogenies from which we reconstructed the tree of extant species according to the original, biologically meaningful protracted speciation model and according to the approximation. We then compared the resulting parameter estimates. We found that the differences were larger for high values of extinction rates and small values of speciation-completion rates. Indeed, a long speciation-completion time and a high extinction rate promote the appearance of cases to which the approximation applies. However, surprisingly, the deviation introduced is largely negligible over the parameter space explored, suggesting that this approximate likelihood can be applied reliably in practice to estimate biologically relevant parameters under the original protracted speciation model. This article is protected by copyright. All rights reserved.
Directory of Open Access Journals (Sweden)
Nico Nagelkerke
Full Text Available The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
Penalized likelihood estimation of a trivariate additive probit model.
Filippou, Panagiota; Marra, Giampiero; Radice, Rosalba
2017-07-01
This article proposes a penalized likelihood method to estimate a trivariate probit model, which accounts for several types of covariate effects (such as linear, nonlinear, random, and spatial effects), as well as error correlations. The proposed approach also addresses the difficulty in estimating accurately the correlation coefficients, which characterize the dependence of binary responses conditional on covariates. The parameters of the model are estimated within a penalized likelihood framework based on a carefully structured trust region algorithm with integrated automatic multiple smoothing parameter selection. The relevant numerical computation can be easily carried out using the SemiParTRIV() function in a freely available R package. The proposed method is illustrated through a case study whose aim is to model jointly adverse birth binary outcomes in North Carolina. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Identification and estimation of heterogeneous agent models: A likelihood approach
DEFF Research Database (Denmark)
Parra-Alvarez, Juan Carlos; Posch, Olaf; Wang, Mu-Chun
In this paper, we study the statistical properties of heterogeneous agent models with incomplete markets. Using a Bewley-Huggett-Aiyagari model we compute the equilibrium density function of wealth and show how it can be used for likelihood inference. We investigate the identifiability of the model parameters based on data representing a large cross-section of individual wealth. We also study the finite sample properties of the maximum likelihood estimator using Monte Carlo experiments. Our results suggest that while the parameters related to the household's preferences can be correctly identified and accurately estimated, the parameters associated with the supply side of the economy cannot be separately identified, leading to inferential problems that persist even in large samples. In the presence of partial identification problems, we show that an empirical strategy based on fixing the value of one…
First Results of the Regional Earthquake Likelihood Models Experiment
Schorlemmer, Danijel; Zechar, J. Douglas; Maximilian J. Werner; Field, Edward H.; Jackson, David D; Jordan, Thomas H.
2010-01-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize e...
Survey data on entrepreneurs' subjective plan and perceptions of the likelihood of success.
Vuong, Quan Hoang
2016-03-01
Entrepreneurship is an important economic process in both the developed and developing worlds. Nonetheless, many of its concepts appear to be difficult to 'operationalize' due to a lack of empirical data, and this is particularly true in emerging economies. The data set described in this paper is available in Mendeley Data's "Vietnamese entrepreneurs' decisiveness and perceptions of the likelihood of success/continuity, Vuong (2015) [1]" http://dx.doi.org/10.17632/kbrtrf6hh4.2; and can enable modeling with useful discrete data models such as BCL.
Survey data on entrepreneurs' subjective plan and perceptions of the likelihood of success
Directory of Open Access Journals (Sweden)
Quan Hoang Vuong
2016-03-01
Full Text Available Entrepreneurship is an important economic process in both the developed and developing worlds. Nonetheless, many of its concepts appear to be difficult to 'operationalize' due to a lack of empirical data, and this is particularly true in emerging economies. The data set described in this paper is available in Mendeley Data's "Vietnamese entrepreneurs' decisiveness and perceptions of the likelihood of success/continuity, Vuong (2015) [1]" http://dx.doi.org/10.17632/kbrtrf6hh4.2; and can enable modeling with useful discrete data models such as BCL.
Global Partial Likelihood for Nonparametric Proportional Hazards Models.
Chen, Kani; Guo, Shaojun; Sun, Liuquan; Wang, Jane-Ling
2010-01-01
As an alternative to the local partial likelihood method of Tibshirani and Hastie and of Fan, Gijbels, and King, a global partial likelihood method is proposed to estimate the covariate effect in a nonparametric proportional hazards model, λ(t|x) = exp{ψ(x)}λ₀(t). The estimator, ψ̂(x), reduces to the Cox partial likelihood estimator if the covariate is discrete. The estimator is shown to be consistent and semiparametrically efficient for linear functionals of ψ(x). Moreover, Breslow-type estimation of the cumulative baseline hazard function, using the proposed estimator ψ̂(x), is proved to be efficient. The asymptotic bias and variance are derived under regularity conditions. Computation of the estimator involves an iterative but simple algorithm. Extensive simulation studies provide evidence supporting the theory. The method is illustrated with the Stanford heart transplant data set. The proposed global approach is also extended to a partially linear proportional hazards model and found to provide efficient estimation of the slope parameter. Supplementary materials for this article are available online.
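The Cox partial likelihood that ψ̂ reduces to in the discrete-covariate case can be sketched with a toy computation. The six observations below are invented, and this is the plain Cox partial likelihood, not the paper's global estimator:

```python
import math

# hypothetical right-censored data: (time, event indicator, binary covariate)
data = [(2, 1, 1), (3, 1, 0), (4, 0, 1), (5, 1, 1), (6, 0, 0), (7, 1, 0)]

def cox_partial_loglik(beta, data):
    # sum over events of beta*x_i - log(sum of exp(beta*x_j) over the risk set)
    ll = 0.0
    for t_i, d_i, x_i in data:
        if d_i:
            risk = [x for t, _, x in data if t >= t_i]  # subjects still at risk at t_i
            ll += beta * x_i - math.log(sum(math.exp(beta * x) for x in risk))
    return ll

# crude maximization over a grid (a Newton step would do the same in practice)
grid = [i / 100 for i in range(-300, 301)]
beta_hat = max(grid, key=lambda b: cox_partial_loglik(b, data))
print(beta_hat)
```

Note that the baseline hazard λ₀(t) drops out of every ratio, which is what makes the partial likelihood free of the infinite-dimensional nuisance parameter.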
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties… and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study…
Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika
Directory of Open Access Journals (Sweden)
Yudi Perbawaningsih
2012-06-01
Full Text Available Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series used to persuade students in choosing their concentration of study, based on how they process information. Using a survey method, the study finds that, in terms of persuasion effectiveness, it is not quite relevant to separate the message from its source: the two are unified, in that the quality of the source is determined by the quality of the message it delivers, and vice versa. Separating the persuasion process into the two routes described in ELM theory would therefore not be relevant.
Maximum penalized likelihood estimation in a gamma-frailty model.
Rondeau, Virginie; Commenges, Daniel; Joly, Pierre
2003-06-01
Shared frailty models allow for unobserved heterogeneity or for statistical dependence between observed survival data. The most commonly used estimation procedure in frailty models is the EM algorithm, but this approach yields a discrete estimator of the distribution and consequently does not allow direct estimation of the hazard function. We show how maximum penalized likelihood estimation can be applied to nonparametric estimation of a continuous hazard function in a shared gamma-frailty model with right-censored and left-truncated data. We examine the problem of obtaining variance estimators for regression coefficients, the frailty parameter and baseline hazard functions. Some simulations for the proposed estimation procedure are presented. A prospective cohort (Paquid) with grouped survival data serves to illustrate the method, which was used to analyze the relationship between environmental factors and the risk of dementia.
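The data-generating side of the shared gamma-frailty model is easy to sketch. The simulation below (synthetic parameter values, an exponential baseline hazard assumed for brevity) shows how a cluster-shared frailty induces within-cluster dependence in right-censored survival data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Shared gamma-frailty simulation: within a cluster, all members share a
# frailty Z ~ Gamma(1/theta, theta) (mean 1, variance theta) that
# multiplies a common baseline hazard. All values are illustrative.
theta = 0.5                 # frailty variance (assumed)
lam0 = 0.1                  # constant baseline hazard (assumed)
n_clusters, cluster_size = 1000, 2

z = rng.gamma(1 / theta, theta, n_clusters)     # shared frailties, mean 1
hazards = np.repeat(z, cluster_size) * lam0     # subject-level hazards
times = rng.exponential(1 / hazards)            # event times
cens = rng.exponential(20.0, times.shape)       # independent censoring
obs = np.minimum(times, cens)                   # observed times
event = (times <= cens).astype(int)             # event indicator

# Clusters with a large frailty Z tend to have both members fail early,
# which is exactly the dependence the frailty term is meant to capture.
```

The penalized-likelihood estimation step of the paper would then recover a smooth hazard from `(obs, event)` plus the cluster structure; only the simulation is sketched here.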
Hou, Jue; Chambers, Christina D; Xu, Ronghui
2017-12-13
We consider observational studies in pregnancy where the outcome of interest is spontaneous abortion (SAB). At first sight this is a binary 'yes' or 'no' variable, although there is left truncation as well as right-censoring in the data. Women who do not experience SAB by gestational week 20 are 'cured' from SAB by definition, that is, they are no longer at risk. Our data differ from the common cure data in the literature, where the cured subjects are always right-censored and not actually observed to be cured. We consider a commonly used cure rate model, with the likelihood function tailored specifically to our data. We develop a conditional nonparametric maximum likelihood approach. To tackle the computational challenge we adopt an EM algorithm making use of "ghost copies" of the data, and a closed-form variance estimator is derived. Under suitable assumptions, we prove the consistency of the resulting estimator, which involves an unbounded cumulative baseline hazard function, as well as its asymptotic normality. Simulations are carried out to evaluate the finite sample performance. We present the analysis of the motivating SAB study to illustrate the advantages of our model in addressing both the occurrence and the timing of SAB, as compared to existing approaches in practice.
Estimation of Financial Agent-Based Models with Simulated Maximum Likelihood
Czech Academy of Sciences Publication Activity Database
Kukačka, Jiří; Baruník, Jozef
2017-01-01
Roč. 85, č. 1 (2017), s. 21-45 ISSN 0165-1889 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: heterogeneous agent model; simulated maximum likelihood; switching Subject RIV: AH - Economics Impact factor: 1.000, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kukacka-0478481.pdf
Music genre classification via likelihood fusion from multiple feature models
Shiu, Yu; Kuo, C.-C. J.
2005-01-01
Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e., content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to produce a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
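The two-stage fusion idea can be illustrated with a toy sketch. Here each feature stream gets a single Gaussian per genre as a stand-in for the GMM/HMM models of the paper, and the second stage sums the per-stream log-likelihoods before taking the argmax; the genre names, model means, and features are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: per-genre Gaussian models for two hypothetical feature streams.
means = {
    "timbre": {"rock": 0.0, "jazz": 3.0},
    "rhythm": {"rock": 1.0, "jazz": -2.0},
}
sigma = 1.0

def stream_loglik(x, stream, genre):
    # Gaussian log-likelihood of one feature sequence under one genre model.
    m = means[stream][genre]
    return np.sum(-0.5 * ((x - m) / sigma) ** 2
                  - 0.5 * np.log(2 * np.pi * sigma**2))

def classify(features):
    # Stage 2: log-domain fusion -- sum the soft scores across streams,
    # then make the hard genre decision.
    genres = ["rock", "jazz"]
    fused = {g: sum(stream_loglik(x, s, g) for s, x in features.items())
             for g in genres}
    return max(fused, key=fused.get)

# A song whose features were drawn near the "rock" stream models.
song = {"timbre": rng.normal(0.0, 1.0, 20), "rhythm": rng.normal(1.0, 1.0, 20)}
label = classify(song)
```

Summing log-likelihoods treats the streams as conditionally independent given the genre, which is the usual simplifying assumption behind this kind of late fusion.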
Lakner, Clemens; Holder, Mark T; Goldman, Nick; Naylor, Gavin J P
2011-03-01
Most phylogenetic models of protein evolution assume that sites are independent and identically distributed. Interactions between sites are ignored, and the likelihood can be conveniently calculated as the product of the individual site likelihoods. The calculation considers all possible transition paths (also called substitution histories or mappings) that are consistent with the observed states at the terminals, and the probability density of any particular reconstruction depends on the substitution model. The likelihood is the integral of the probability density of each substitution history taken over all possible histories that are consistent with the observed data. We investigated the extent to which transition paths that are incompatible with a protein's three-dimensional structure contribute to the likelihood. Several empirical amino acid models were tested for sequence pairs of different degrees of divergence. When simulating substitutional histories starting from a real sequence, the structural integrity of the simulated sequences quickly disintegrated. This result indicates that simple models are clearly unable to capture the constraints on sequence evolution. However, when we sampled transition paths between real sequences from the posterior probability distribution according to these same models, we found that the sampled histories were largely consistent with the tertiary structure. This suggests that simple empirical substitution models may be adequate for interpolating changes between observed sequences during phylogenetic inference despite the fact that the models cannot predict the effects of structural constraints from first principles. This study is significant because it provides a quantitative assessment of the biological realism of substitution models from the perspective of protein structure, and it provides insight on the prospects for improving models of protein sequence evolution.
Molenaar, P.C.M.; Nesselroade, J.R.
1998-01-01
The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM)...
Subjects, Models, Languages, Transformations
Rensink, Arend; Bézivin, J.; Heckel, R.
2005-01-01
Discussions about model-driven approaches tend to be hampered by terminological confusion. This is at least partially caused by a lack of formal precision in defining the basic concepts, including that of "model" and "thing being modelled" - which we call subject in this paper. We propose a minimal
First Results of the Regional Earthquake Likelihood Models Experiment
Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.
2010-08-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
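One of the simplest RELM-style consistency checks, the number (N-) test, compares the observed count of target earthquakes to the Poisson distribution implied by a rate forecast. The sketch below uses illustrative numbers, not values from the actual experiment:

```python
from scipy.stats import poisson, chi2

# N-test sketch: a forecast predicts `forecast_rate` target earthquakes
# over the testing period; is the observed count consistent with a
# Poisson distribution having that mean? Numbers are assumed.
forecast_rate = 8.0   # expected number of target earthquakes (assumed)
observed = 12         # observed number of target earthquakes (assumed)

# Two-sided tail check: reject the forecast at level alpha = 0.05 if the
# observed count falls in either extreme tail.
p_lower = poisson.cdf(observed, forecast_rate)     # P(N <= observed)
p_upper = poisson.sf(observed - 1, forecast_rate)  # P(N >= observed)
consistent = min(p_lower, p_upper) > 0.025
```

The actual RELM evaluation suite also includes likelihood-based (L-) and spatial tests over gridded rate forecasts; this shows only the counting component.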
Examination of a Simple Errors-in-Variables Model: A Demonstration of Marginal Maximum Likelihood
Camilli, Gregory
2006-01-01
A simple errors-in-variables regression model is given in this article for illustrating the method of marginal maximum likelihood (MML). Given suitable estimates of reliability, error variables, as nuisance variables, can be integrated out of likelihood equations. Given the closed form expression of the resulting marginal likelihood, the effects…
Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model
DEFF Research Database (Denmark)
Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard
We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally...
DEFF Research Database (Denmark)
Nielsen, Jan; Parner, Erik
2010-01-01
In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma...
Wixted, John T; Gaitan, Santino C
2002-11-01
B. F. Skinner (1977) once argued that cognitive theories are essentially surrogates for the organism's (usually unknown) reinforcement history. In this article, we argue that this notion applies rather directly to a class of likelihood ratio models of human recognition memory. The point is not that such models are fundamentally flawed or that they are not useful and should be abandoned. Instead, the point is that the role of reinforcement history in shaping memory decisions could help to explain what otherwise must be explained by assuming that subjects are inexplicably endowed with the relevant distributional information and computational abilities. To the degree that a role for an organism's reinforcement history is appreciated, the importance of animal memory research in understanding human memory comes into clearer focus. As Skinner was also fond of pointing out, it is only in the animal laboratory that an organism's history of reinforcement can be precisely controlled and its effects on behavior clearly understood.
Directory of Open Access Journals (Sweden)
Esra Saatci
2010-01-01
We propose a procedure to estimate the model parameters of the presented nonlinear Resistance-Capacitance (RC) and the widely used linear Resistance-Inductance-Capacitance (RIC) models of the respiratory system by a Maximum Likelihood Estimator (MLE). The measurement noise is assumed to be Generalized Gaussian Distributed (GGD), and the variance and the shape factor of the measurement noise are estimated by MLE and the Kurtosis method, respectively. The performance of the MLE algorithm is also demonstrated by the Cramer-Rao Lower Bound (CRLB) with artificially produced respiratory signals. Airway flow, mask pressure, and lung volume are measured from patients with Chronic Obstructive Pulmonary Disease (COPD) under noninvasive ventilation and from healthy subjects. Simulations show that respiratory signals from healthy subjects are better represented by the RIC model compared to the nonlinear RC model. On the other hand, the patient group's respiratory signals are fitted to the nonlinear RC model with lower measurement noise variance, a better-converged measurement noise shape factor, and model parameter tracks. Also, it is observed that for the patient group the shape factor of the measurement noise converges to values between 1 and 2, whereas for the control group shape factor values are estimated in the super-Gaussian area.
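The MLE step can be sketched for a simplified linear equation of motion under GGD noise. The sketch below fits resistance and elastance in P = R·V' + E·V (inertance omitted for brevity) by minimizing the GGD negative log-likelihood; the signals, parameter values, and fixed shape factor are all synthetic assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic respiratory-like signals for the sketch.
n = 500
flow = rng.normal(0, 1, n)           # airway flow V'
vol = np.cumsum(flow) * 0.01         # lung volume V (crude integration)
R_true, E_true = 3.0, 10.0
beta = 1.5                           # GGD shape factor (assumed known here)
noise = rng.normal(0, 0.2, n)        # stand-in for GGD measurement noise
pressure = R_true * flow + E_true * vol + noise

def ggd_negloglik(params):
    R, E, log_alpha = params
    alpha = np.exp(log_alpha)        # GGD scale, kept positive
    resid = pressure - R * flow - E * vol
    # GGD log-density up to its parameter-free constant:
    # log f(r) = -|r/alpha|^beta - log(alpha) + const
    return np.sum(np.abs(resid / alpha) ** beta) + n * np.log(alpha)

fit = minimize(ggd_negloglik, x0=[1.0, 1.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
R_hat, E_hat = fit.x[0], fit.x[1]
```

In the paper the shape factor itself is estimated (via the Kurtosis method); fixing it here keeps the sketch to a three-parameter optimization.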
Likelihood-free Bayesian computation for structural model calibration: a feasibility study
Jin, Seung-Seop; Jung, Hyung-Jo
2016-04-01
Finite element (FE) model updating is often used to associate FE models with corresponding existing structures for condition assessment. FE model updating is an inverse problem and prone to be ill-posed and ill-conditioned when there are many errors and uncertainties in both an FE model and its corresponding measurements. In this case, it is important to quantify these uncertainties properly. Bayesian FE model updating is one of the well-known methods to quantify parameter uncertainty by updating our prior belief on the parameters with the available measurements. In Bayesian inference, the likelihood plays a central role in summarizing the overall residuals between model predictions and corresponding measurements. Therefore, the likelihood should be carefully chosen to reflect the characteristics of the residuals. It is generally known that very little or no information is available regarding the statistical characteristics of the residuals. In most cases, the likelihood is assumed to be an independent identically distributed Gaussian distribution with zero mean and constant variance. However, this assumption may cause biased and over/underestimated parameter estimates, so that the uncertainty quantification and prediction become questionable. To alleviate the potential misuse of an inadequate likelihood, this study introduced approximate Bayesian computation (i.e., likelihood-free Bayesian inference), which relaxes the need for an explicit likelihood by analyzing the behavior similarities between model predictions and measurements. We performed FE model updating based on likelihood-free Markov chain Monte Carlo (MCMC) without using the likelihood. Based on the results of the numerical study, we observed that likelihood-free Bayesian computation can quantify the updating parameters correctly, and that its predictive capability for measurements not used in the calibration is also maintained.
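The core of likelihood-free inference can be sketched with the simplest ABC variant, rejection sampling, on a hypothetical one-parameter stiffness model rather than a full FE model. All values are synthetic, and the distance function is just the absolute difference of natural frequencies:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "model updating" task: a one-parameter model predicts a natural
# frequency f = sqrt(k/m)/(2*pi); we update a uniform prior on the
# stiffness k using only a simulator and a distance to the measurement.
m = 2.0                                   # known mass (assumed)
k_true = 800.0
f_measured = np.sqrt(k_true / m) / (2 * np.pi) + rng.normal(0, 0.05)

def simulate(k):
    # Forward model plus simulated measurement noise -- no explicit
    # likelihood is ever evaluated.
    return np.sqrt(k / m) / (2 * np.pi) + rng.normal(0, 0.05)

# ABC rejection: draw from the prior, keep draws whose simulated
# frequency lands within the tolerance of the measurement.
prior = rng.uniform(100.0, 2000.0, 20000)
tol = 0.1
accepted = np.array([k for k in prior
                     if abs(simulate(k) - f_measured) < tol])
k_posterior_mean = accepted.mean()
```

The likelihood-free MCMC used in the paper replaces this brute-force rejection with a chain whose acceptance step uses the same simulate-and-compare idea, which scales better to expensive FE simulators.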
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
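The classifier-to-likelihood-ratio idea (often called the "likelihood-ratio trick") can be sketched in one dimension, where the true ratio is known in closed form: a probabilistic classifier s(x) trained to separate samples of p1 from p0 recovers p1(x)/p0(x) as s(x)/(1-s(x)). The Gaussian densities and the plain-gradient-descent logistic fit below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Samples from "signal" p1 = N(1,1) and "background" p0 = N(-1,1);
# the true log-ratio is exactly 2x, so a logistic model is well-specified.
x1 = rng.normal(1.0, 1.0, 20000)
x0 = rng.normal(-1.0, 1.0, 20000)
x = np.concatenate([x1, x0])
y = np.concatenate([np.ones_like(x1), np.zeros_like(x0)])

# Logistic regression by plain gradient descent (no external ML library).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

def lr_from_classifier(z):
    # Likelihood-ratio trick: s/(1-s) estimates p1(z)/p0(z).
    s = 1.0 / (1.0 + np.exp(-(w * z + b)))
    return s / (1.0 - s)

# With balanced classes the fit should approach w = 2, b = 0, matching
# the true ratio exp(2z).
```

In the parameterized setting described in the talk, the classifier additionally takes the physics parameters (masses, couplings, nuisances) as inputs, so one network approximates a whole family of ratios.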
A MIXTURE LIKELIHOOD APPROACH FOR GENERALIZED LINEAR-MODELS
WEDEL, M; DESARBO, WS
1995-01-01
A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of
Statistical modelling of survival data with random effects h-likelihood approach
Ha, Il Do; Lee, Youngjo
2017-01-01
This book provides a groundbreaking introduction to the likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with an R package, “frailtyHL” in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...
Recent developments in maximum likelihood estimation of MTMM models for categorical data
Directory of Open Access Journals (Sweden)
Minjeong eJeon
2014-04-01
Maximum likelihood (ML) estimation of categorical multitrait-multimethod (MTMM) data is challenging because the likelihood involves high-dimensional integrals over the crossed method and trait factors, with no known closed-form solution. The purpose of the study is to introduce three newly developed ML methods that are eligible for estimating MTMM models with categorical responses: variational maximization-maximization, alternating imputation posterior, and Monte Carlo local likelihood. Each method is briefly described and its applicability for MTMM models with categorical data is discussed. An illustration is provided using an empirical example.
How to Maximize the Likelihood Function for a DSGE Model
DEFF Research Database (Denmark)
Andreasen, Martin Møller
This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003...
The fine-tuning cost of the likelihood in SUSY models
Ghilencea, D M
2013-01-01
In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine-tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, which can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or, equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...
Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
Directory of Open Access Journals (Sweden)
Naoya Sueishi
2013-07-01
This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
A note on the maximum likelihood estimator in the gamma regression model
Directory of Open Access Journals (Sweden)
Jerzy P. Rydlewski
2009-01-01
This paper considers a nonlinear regression model, in which the dependent variable has the gamma distribution. A model is considered in which the shape parameter of the random variable is the sum of continuous and algebraically independent functions. The paper proves that there is exactly one maximum likelihood estimator for the gamma regression model.
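A gamma-regression MLE can be sketched numerically. Note that the sketch below uses a log link on the mean with a constant shape parameter, which is a simpler and more common parameterisation than the shape-parameter model studied in the paper; data and coefficient values are synthetic:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)

# Synthetic data: y ~ Gamma(shape k, scale mu/k), so E[y] = mu with
# mu = exp(b0 + b1*x).
n = 2000
x = rng.uniform(0, 1, n)
b0_true, b1_true, shape_true = 0.5, 1.5, 4.0
mu = np.exp(b0_true + b1_true * x)
y = rng.gamma(shape_true, mu / shape_true)

def negloglik(params):
    b0, b1, log_k = params
    k = np.exp(log_k)                 # shape, kept positive
    mu = np.exp(b0 + b1 * x)
    scale = mu / k
    # Gamma log-density: (k-1)log y - y/scale - k log(scale) - log Gamma(k)
    ll = (k - 1) * np.log(y) - y / scale - k * np.log(scale) - gammaln(k)
    return -np.sum(ll)

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat = fit.x[0], fit.x[1]
```

The uniqueness result of the paper concerns its specific shape-parameter formulation; this sketch only illustrates the general likelihood machinery for gamma-distributed responses.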
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
Towards the mother-of-all-models: customised construction of the mark-recapture likelihood function
Directory of Open Access Journals (Sweden)
Barker, R. J.
2004-05-01
With a proliferation of mark–recapture models and studies collecting mark–recapture data, software and analysis methods are being continually revised. We consider the construction of the likelihood for a general model that incorporates all the features of the recently developed models: it is a multistate robust–design mark–recapture model that includes dead recoveries and resightings of marked animals and is parameterised in terms of state–specific recruitment, survival, movement, and capture probabilities, state–specific abundances, and state–specific recovery and resighting probabilities. The construction that we outline is based on a factorisation of the likelihood function with each factor corresponding to a different component of the data. Such a construction would allow the likelihood function for a mark–recapture analysis to be customized according to the components that are actually present in the dataset.
Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model
Directory of Open Access Journals (Sweden)
Yunquan Song
2013-01-01
In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to the longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key
France, Stephen L.; Batchelder, William H.
2015-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…
Polytomous IRT models and monotone likelihood ratio of the total score
Hemker, BT; Sijtsma, Klaas; Molenaar, Ivo W; Junker, BW
1996-01-01
In a broad class of item response theory (IRT) models for dichotomous items the unweighted total score has monotone likelihood ratio (MLR) in the latent trait theta. In this study, it is shown that for polytomous items MLR holds for the partial credit model and a trivial generalization of this
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
SIERO, FW; DOOSJE, BJ
1993-01-01
An experiment was conducted to examine the influence of the perceived extremity of a message and motivation to elaborate upon the process of persuasion. The first goal was to test a model of attitude change relating Social Judgment Theory to the Elaboration Likelihood Model. The second objective was
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
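A one-dimensional sketch of the two-level likelihood-ratio computation, with a normal within-source distribution and a two-component GMM for the between-source distribution of source means; all parameter values are invented, not fitted to any forensic dataset:

```python
import numpy as np
from scipy.stats import norm

# Two-level model: within-source variation is N(mu_source, sigma_w),
# and source means follow a two-component Gaussian mixture (the GMM
# replacing the kernel density of the classical approach).
sigma_w = 0.5
weights = np.array([0.6, 0.4])     # GMM weights (assumed)
means = np.array([0.0, 4.0])       # GMM component means (assumed)
sigmas_b = np.array([1.0, 1.5])    # GMM component sds (assumed)

def likelihood_ratio(y, mu_source):
    # Numerator: the trace y comes from the known (suspect) source.
    num = norm.pdf(y, mu_source, sigma_w)
    # Denominator: y comes from a random source; the marginal density
    # convolves within-source noise with the between-source GMM.
    den = np.sum(weights
                 * norm.pdf(y, means, np.sqrt(sigmas_b**2 + sigma_w**2)))
    return num / den

lr_same = likelihood_ratio(0.1, 0.0)  # trace close to the suspect source
lr_diff = likelihood_ratio(4.0, 0.0)  # trace far from the suspect source
```

An LR above 1 supports the same-source proposition and below 1 the different-source proposition; the paper's contribution is fitting the between-source GMM to real multivariate data and measuring calibration via Cllr.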
Two-Stage maximum likelihood estimation in the misspecified restricted latent class model.
Wang, Shiyu
2017-10-28
The maximum likelihood classification rule is a standard method to classify examinee attribute profiles in cognitive diagnosis models (CDMs). Its asymptotic behaviour is well understood when the model is assumed to be correct, but has not been explored in the case of misspecified latent class models. This paper investigates the asymptotic behaviour of a two-stage maximum likelihood classifier under a misspecified CDM. The analysis is conducted in a general restricted latent class model framework addressing all types of CDMs. Sufficient conditions are proposed under which a consistent classification can be obtained by using a misspecified model. Discussions are also provided on the inconsistency of classification under certain model misspecification scenarios. Simulation studies and a real data application are conducted to illustrate these results. Our findings can provide some guidelines as to when a misspecified simple model or a general model can be used to provide a good classification result. © 2017 The British Psychological Society.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Rios-Avila, Fernando; Canavire-Bacarreza, Gustavo
2017-01-01
Following Wooldridge (2014), we discuss and implement in Stata an efficient maximum likelihood approach to the estimation of corrected standard errors of two-stage optimization models. Specifically, we compare the robustness and efficiency of this estimate using different non-linear routines already implemented in Stata such as ivprobit, ivtobit, ivpoisson, heckman, and ivregress.
Statistical power of likelihood ratio and Wald tests in latent class models with covariates
Gudicha, D.W.; Schmittmann, V.D.; Vermunt, J.K.
2017-01-01
This paper discusses power and sample-size computation for likelihood ratio and Wald testing of the significance of covariate effects in latent class models. For both tests, asymptotic distributions can be used; that is, the test statistic can be assumed to follow a central chi-square distribution under the null…
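The asymptotic power computation sketched in the abstract above can be illustrated in a few lines of Python: under the alternative, the LR or Wald statistic is approximately noncentral chi-square, so power is the tail mass beyond the central chi-square critical value. The function names and the linear-in-n noncentrality below are illustrative assumptions, not code from the paper.

```python
from scipy.stats import chi2, ncx2

def lr_test_power(noncentrality, df, alpha=0.05):
    """Power of an LR/Wald test whose statistic is asymptotically
    noncentral chi-square under the alternative."""
    crit = chi2.ppf(1 - alpha, df)            # critical value under H0
    return ncx2.sf(crit, df, noncentrality)   # P(reject | H1)

def sample_size_for_power(effect_per_obs, df, target=0.8, alpha=0.05):
    """Smallest n whose noncentrality n * effect_per_obs reaches the target power
    (assumes noncentrality grows linearly in the sample size)."""
    n = 1
    while lr_test_power(n * effect_per_obs, df, alpha) < target:
        n += 1
    return n
```

For df = 1 and alpha = 0.05, a noncentrality of about 7.85 is the textbook value for 80% power, which the sketch reproduces.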
Xie, Yunlong; Zimmerman, Dale L
2013-08-30
Time index-ordered random variables are said to be antedependent (AD) of order (p1, p2, …, pn) if the kth variable, conditioned on the pk immediately preceding variables, is independent of all further preceding variables. Inferential methods associated with AD models are well developed for continuous (primarily normal) longitudinal data, but not for categorical longitudinal data. In this article, we develop likelihood-based inferential procedures for unstructured AD models for categorical longitudinal data. Specifically, we derive maximum likelihood estimators (MLEs) of model parameters; penalized likelihood criteria and likelihood ratio tests for determining the order of antedependence; and likelihood ratio tests for homogeneity across groups, time invariance of transition probabilities, and strict stationarity. We give closed-form expressions for MLEs and test statistics, which allow for the possibility of empty cells and monotone missing data, for all cases save strict stationarity. For data with an arbitrary missingness pattern, we derive an efficient restricted expectation-maximization algorithm for obtaining MLEs. We evaluate the performance of the tests by simulation. We apply the methods to longitudinal studies of toenail infection severity (measured on a binary scale) and Alzheimer's disease severity (measured on an ordinal scale). The analysis of the toenail infection severity data reveals interesting nonstationary behavior of the transition probabilities and indicates that an unstructured first-order AD model is superior to stationary and other structured first-order AD models that have previously been fit to these data. The analysis of the Alzheimer's severity data indicates that the antedependence is second order with time-invariant transition probabilities, suggesting the use of a second-order autoregressive cumulative logit model. Copyright © 2013 John Wiley & Sons, Ltd.
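For the categorical longitudinal setting above, the closed-form MLE of transition probabilities is simply the normalized transition counts. A minimal Python sketch of the first-order case follows; the function name and interface are hypothetical, and the paper additionally covers higher orders, empty cells, and missing data.

```python
import numpy as np

def transition_mle(sequences, n_states):
    """Closed-form MLE of a first-order transition matrix from categorical
    longitudinal sequences (each sequence: states coded 0..n_states-1).
    Rows with no observed transitions are left as zeros."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    totals = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, totals, out=np.zeros_like(counts),
                      where=totals > 0)
    return probs, counts
```

Likelihood ratio tests for time invariance compare such pooled estimates against per-time-point estimates of the same counts.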
Yi Li; Ross L. Prentice; Xihong Lin
2008-01-01
We consider a class of semiparametric normal transformation models for right-censored bivariate failure times. Nonparametric hazard rate models are transformed to a standard normal model and a joint normal distribution is assumed for the bivariate vector of transformed variates. A semiparametric maximum likelihood estimation procedure is developed for estimating the marginal survival distribution and the pairwise correlation parameters. This produces an efficient estimator of the correlation ...
Directory of Open Access Journals (Sweden)
Maja Olsbjerg
2015-10-01
Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when focus is on physical functioning or psychological well-being. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.
Parameter Estimation for an Electric Arc Furnace Model Using Maximum Likelihood
Directory of Open Access Journals (Sweden)
Jesser J. Marulanda-Durango
2012-12-01
In this paper, we present a methodology for estimating the parameters of a model for an electric arc furnace by using maximum likelihood estimation, one of the most widely employed methods for parameter estimation in practical settings. The model for the electric arc furnace that we consider takes into account the non-periodic and non-linear variations in the voltage-current characteristic. We use NETLAB, an open-source MATLAB® toolbox, for solving a set of non-linear algebraic equations that relate all the parameters to be estimated. Results obtained through simulation of the model in PSCAD™ are contrasted against real measurements taken during the furnace's most critical operating point. We show how the model for the electric arc furnace, with appropriate parameter tuning, captures in great detail the real voltage and current waveforms generated by the system. Results show a maximum error of 5% in the current's root-mean-square value.
Li, Yi; Prentice, Ross L.; Lin, Xihong
2008-01-01
We consider a class of semiparametric normal transformation models for right censored bivariate failure times. Nonparametric hazard rate models are transformed to a standard normal model and a joint normal distribution is assumed for the bivariate vector of transformed variates. A semiparametric maximum likelihood estimation procedure is developed for estimating the marginal survival distribution and the pairwise correlation parameters. This produces an efficient estimator of the correlation parameter of the semiparametric normal transformation model, which characterizes the bivariate dependence of bivariate survival outcomes. In addition, a simple positive-mass-redistribution algorithm can be used to implement the estimation procedures. Since the likelihood function involves infinite-dimensional parameters, the empirical process theory is utilized to study the asymptotic properties of the proposed estimators, which are shown to be consistent, asymptotically normal and semiparametric efficient. A simple estimator for the variance of the estimates is also derived. The finite sample performance is evaluated via extensive simulations.
ABC of SV: Limited Information Likelihood Inference in Stochastic Volatility Jump-Diffusion Models
DEFF Research Database (Denmark)
Creel, Michael; Kristensen, Dennis
We develop novel methods for estimation and filtering of continuous-time models with stochastic volatility and jumps using so-called Approximate Bayesian Computation which build likelihoods based on limited information. The proposed estimators and filters are computationally attractive relative to standard likelihood-based versions since they rely on low-dimensional auxiliary statistics and so avoid computation of high-dimensional integrals. Despite their computational simplicity, we find that estimators and filters perform well in practice and lead to precise estimates of model parameters… stochastic volatility model for the dynamics of the S&P 500 equity index. We find evidence of the presence of a dynamic jump rate and in favor of a structural break in parameters at the time of the recent financial crisis. We find evidence that possible measurement error in log price is small and has little…
Maximum likelihood pixel labeling using a spatially variant finite mixture model
Energy Technology Data Exchange (ETDEWEB)
Gopal, S.S. [Univ. of Michigan, Ann Arbor, MI (United States); Hebert, T.J. [Univ. of Houston, TX (United States)
1996-12-31
We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain.
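A minimal Python sketch of the idea above, assuming 1-D Gaussian pixel intensities: giving each pixel its own mixing weights and updating them with the responsibilities drives the posteriors toward hard assignments, so labels come directly out of the EM iterations with no separate Bayes classifier. This is a simplified illustration, not the authors' algorithm.

```python
import numpy as np

def sv_mixture_em(x, k, iters=100):
    """EM for a spatially variant finite mixture: each pixel carries its own
    mixing weights w[i], updated with its responsibilities, so the final
    responsibilities serve as ML label estimates. x: 1-D pixel intensities."""
    n = x.size
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # deterministic init
    var = np.full(k, x.var() + 1e-6)
    w = np.full((n, k), 1.0 / k)                    # per-pixel mixing weights
    for _ in range(iters):
        # E-step: responsibilities under current parameters
        d = (x[:, None] - mu[None, :]) ** 2
        p = w * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var) + 1e-300
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: component parameters and the spatially variant weights
        nk = r.sum(axis=0) + 1e-12
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * d).sum(axis=0) / nk + 1e-6
        w = r
    return r.argmax(axis=1), mu
```

On bimodal data the per-pixel weights sharpen quickly, which is the "unambiguous labels" property the abstract describes.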
Scheres, Sjors H. W.; Núñez-Ramírez, Rafael; Gómez-Llorente, Yacob; Martín, Carmen San; Eggermont, Paul P. B.; Carazo, José María
2007-01-01
The coexistence of multiple distinct structural states often obstructs the application of three-dimensional cryo-electron microscopy to large macromolecular complexes. Maximum likelihood approaches are emerging as robust tools for solving the image classification problems that are posed by such samples. Here, we propose a statistical data model that allows for a description of the experimental image formation within the formulation of 2D and 3D maximum likelihood refinement. The proposed approach comprises a formulation of the probability calculations in Fourier space, including a spatial frequency-dependent noise model and a description of defocus-dependent imaging effects. The Expectation-Maximization-like algorithms presented are generally applicable to the alignment and classification of structurally heterogeneous projection data. Their effectiveness is demonstrated with various examples, including 2D classification of top views of the archaeal helicase MCM, and 3D classification of 70S E. coli ribosome and Simian Virus 40 large T-antigen projections.
Adaptive hybrid likelihood model for visual tracking based on Gaussian particle filter
Wang, Yong; Tan, Yihua; Tian, Jinwen
2010-07-01
We present a new scheme based on multiple-cue integration for visual tracking within a Gaussian particle filter framework. The proposed method integrates the color, shape, and texture cues of an object to construct a hybrid likelihood model. During the measurement step, the likelihood model can be switched adaptively according to environmental changes, which improves the object representation to deal with the complex disturbances, such as appearance changes, partial occlusions, and significant clutter. Moreover, the confidence weights of the cues are adjusted online through the estimation using a particle filter, which ensures the tracking accuracy and reliability. Experiments are conducted on several real video sequences, and the results demonstrate that the proposed method can effectively track objects in complex scenarios. Compared with previous similar approaches through some quantitative and qualitative evaluations, the proposed method performs better in terms of tracking robustness and precision.
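One plausible reading of the hybrid likelihood fusion above is a confidence-weighted geometric mean of per-cue likelihoods over the particle set, with the confidence weights adapted online. The sketch below illustrates only that fusion step; the paper's exact fusion and adaptation rules may differ.

```python
import numpy as np

def fuse_cue_likelihoods(cue_liks, confidences):
    """Hybrid observation likelihood for a particle filter: a weighted
    geometric mean of per-cue likelihoods (rows: cues, columns: particles),
    normalized to particle weights. Confidence weights can be re-estimated
    online between frames."""
    w = np.asarray(confidences, float)
    w = w / w.sum()
    L = np.prod(np.asarray(cue_liks, float) ** w[:, None], axis=0)
    return L / L.sum()
```

A cue whose likelihood is flat across particles (uninformative, e.g. under occlusion) then contributes little to the fused weights, which is the adaptivity the abstract describes.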
Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials
Directory of Open Access Journals (Sweden)
Claus Vogl
2014-11-01
In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.
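In the neutral stationary case, the allelic proportions follow a beta density, so the binomial site likelihoods integrate to a beta-binomial. The Python sketch below computes an SFS log-likelihood under that stationarity assumption; the paper's orthogonal-polynomial machinery is what handles the general, time-dependent case, which this sketch does not attempt.

```python
import numpy as np
from scipy.special import betaln, gammaln

def neutral_sfs_loglik(counts, n, alpha, beta):
    """Log-likelihood of derived-allele counts at independent sites (sample
    size n per site) when allelic proportions follow a Beta(alpha, beta)
    stationary density: a beta-binomial mixture. The parameterization is
    illustrative; in the neutral mutation-drift model alpha and beta are
    scaled mutation rates."""
    counts = np.asarray(counts)
    ll = (gammaln(n + 1) - gammaln(counts + 1) - gammaln(n - counts + 1)
          + betaln(counts + alpha, n - counts + beta) - betaln(alpha, beta))
    return ll.sum()
```

With alpha = beta = 1 the mixture is uniform over counts 0..n, a handy sanity check.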
A likelihood-based two-part marginal model for longitudinal semi-continuous data
Su, Li; Tom, Brian D. M.; Farewell, Vernon T.
2016-01-01
Two-part models are an attractive approach to analyzing longitudinal semicontinuous data consisting of a mixture of true zeros and continuously distributed positive values. When interest lies in the population-averaged (marginal) covariate effects, two-part models that provide straightforward interpretation of the marginal effects are desirable. Presently, the only available approaches for fitting two-part marginal models to longitudinal semicontinuous data are computationally difficult to implement. Therefore there exists a need to develop two-part marginal models that can be easily implemented in practice. We propose a fully likelihood-based two-part marginal model that satisfies this need by using the bridge distribution for the random effect in the binary part of an underlying two-part mixed model; and its maximum likelihood estimation can be routinely implemented via standard statistical software such as the SAS NLMIXED procedure. We illustrate the usage of this new model by investigating the marginal effects of pre-specified genetic markers on physical functioning, as measured by the Health Assessment Questionnaire (HAQ), in a cohort of psoriatic arthritis (PsA) patients from the University of Toronto Psoriatic Arthritis Clinic. An added benefit of our proposed marginal model when compared to a two-part mixed model is the robustness in regression parameter estimation when departure from the true random effects structure occurs. This is demonstrated through simulation.
A likelihood-based two-part marginal model for longitudinal semicontinuous data.
Su, Li; Tom, Brian Dm; Farewell, Vernon T
2015-04-01
Two-part models are an attractive approach for analysing longitudinal semicontinuous data consisting of a mixture of true zeros and continuously distributed positive values. When the population-averaged (marginal) covariate effects are of interest, two-part models that provide straightforward interpretation of the marginal effects are desirable. Presently, the only available approaches for fitting two-part marginal models to longitudinal semicontinuous data are computationally difficult to implement. Therefore, there exists a need to develop two-part marginal models that can be easily implemented in practice. We propose a fully likelihood-based two-part marginal model that satisfies this need by using the bridge distribution for the random effect in the binary part of an underlying two-part mixed model; and its maximum likelihood estimation can be routinely implemented via standard statistical software such as the SAS NLMIXED procedure. We illustrate the usage of this new model by investigating the marginal effects of pre-specified genetic markers on physical functioning, as measured by the Health Assessment Questionnaire, in a cohort of psoriatic arthritis patients from the University of Toronto Psoriatic Arthritis Clinic. An added benefit of our proposed marginal model when compared to a two-part mixed model is the robustness in regression parameter estimation when departure from the true random effects structure occurs. This is demonstrated through simulation.
Fitting Cox Models with Doubly Censored Data Using Spline-Based Sieve Marginal Likelihood.
Li, Zhiguo; Owzar, Kouros
2016-06-01
In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual nonparametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time.
Approximate Methods for Maximum Likelihood Estimation of Multivariate Nonlinear Mixed-Effects Models
Directory of Open Access Journals (Sweden)
Wan-Lun Wang
2015-07-01
Multivariate nonlinear mixed-effects models (MNLMM) have received increasing use due to their flexibility for analyzing multi-outcome longitudinal data following possibly nonlinear profiles. This paper presents and compares five different iterative algorithms for maximum likelihood estimation of the MNLMM. These algorithmic schemes include the penalized nonlinear least squares coupled to the multivariate linear mixed-effects (PNLS-MLME) procedure, Laplacian approximation, the pseudo-data expectation conditional maximization (ECM) algorithm, the Monte Carlo EM algorithm and the importance sampling EM algorithm. When fitting the MNLMM, it is rather difficult to exactly evaluate the observed log-likelihood function in a closed-form expression, because it involves complicated multiple integrals. To address this issue, the corresponding approximations of the observed log-likelihood function under the five algorithms are presented. An expected information matrix of parameters is also provided to calculate the standard errors of model parameters. A comparison of computational performances is investigated through simulation and a real data example from an AIDS clinical study.
Likelihood based observability analysis and confidence intervals for predictions of dynamic models
Directory of Open Access Journals (Sweden)
Kreutz Clemens
2012-09-01
Background: Predicting a system's behavior based on a mathematical model is a primary task in Systems Biology. If the model parameters are estimated from experimental data, the parameter uncertainty has to be translated into confidence intervals for model predictions. For dynamic models of biochemical networks, the nonlinearity in combination with the large number of parameters hampers the calculation of prediction confidence intervals and renders classical approaches hardly feasible. Results: In this article, reliable confidence intervals are calculated based on the prediction profile likelihood. Such prediction confidence intervals of the dynamic states can be utilized for a data-based observability analysis. The method is also applicable if there are non-identifiable parameters, yielding some insufficiently specified model predictions that can be interpreted as non-observability. Moreover, a validation profile likelihood is introduced that should be applied when noisy validation experiments are to be interpreted. Conclusions: The presented methodology allows the propagation of uncertainty from experimental data to model predictions. Although presented in the context of ordinary differential equations, the concept is general and also applicable to other types of models. Matlab code which can be used as a template to implement the method is provided at http://www.fdmold.uni-freiburg.de/~ckreutz/PPL.
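For a scalar quantity, the profile-likelihood construction above reduces to thresholding the profiled negative log-likelihood at half the chi-square quantile. A generic grid-based Python sketch (the cited work profiles over predictions of ODE systems; here the profiled quantity and grid are illustrative assumptions):

```python
import numpy as np
from scipy.stats import chi2

def profile_ci(nll, mle, grid, level=0.95):
    """Profile-likelihood confidence interval: the set of grid values whose
    negative log-likelihood lies within chi2.ppf(level, 1)/2 of the optimum.
    nll: callable negative log-likelihood; mle: its minimizer."""
    thresh = chi2.ppf(level, df=1) / 2.0
    nll_min = nll(mle)
    inside = [g for g in grid if nll(g) - nll_min <= thresh]
    return min(inside), max(inside)
```

For a Gaussian mean with known unit variance and n = 100 observations, the interval recovers the familiar mean ± 1.96/√n, a useful check of the threshold.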
Directory of Open Access Journals (Sweden)
Zhang Zhang
2009-06-01
A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
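The model-selection and model-averaging step described above uses standard formulas, which can be sketched generically in Python (formulas only, not the authors' clustering code):

```python
import numpy as np

def information_criteria(loglik, k, n):
    """AIC, corrected AIC (AICc), and BIC for a fitted model with
    maximized log-likelihood `loglik`, k parameters, and n observations."""
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = k * np.log(n) - 2 * loglik
    return aic, aicc, bic

def akaike_weights(aics):
    """Akaike weights for model averaging: relative likelihoods of the
    candidate models, normalized to sum to one."""
    d = np.asarray(aics, float) - min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()
```

The weights can then average per-site clustering profiles across candidate models, as the abstract describes.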
Yi Zhou; Hongqing Zhu; Xuan Tao
2017-07-01
Finite mixture model (FMM) has been widely used for unsupervised segmentation of magnetic resonance (MR) images in recent years. However, in real applications, the distribution of the observed data usually contains an unknown fraction of outliers, which would interfere with the estimation of the parameters of the mixture model. A statistical model-based technique that provides a theoretically well-founded segmentation criterion in the presence of outliers is mixture modeling combined with a trimming approach. Therefore, in this paper, a robust estimation of the asymmetric Student's-t mixture model (ASMM) using the trimmed likelihood estimator for MR image segmentation is proposed. The proposed method discards the outliers and then estimates the parameters of the ASMM with the remaining samples. The advantages of the proposed algorithm are its robustness to outliers and its flexibility in describing various shapes of data. Finally, the expectation-maximization (EM) algorithm is adopted to maximize the log-likelihood and to obtain the estimates of the parameters. The experimental results show that the proposed method has better performance in segmenting synthetic data and real MR images.
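The trimming idea can be illustrated for a plain normal model in Python: alternately fit by maximum likelihood on the retained points and discard the fixed fraction with lowest per-point likelihood. The paper's ASMM replaces the normal with an asymmetric Student's-t mixture; this is only the trimming skeleton, with hypothetical names.

```python
import numpy as np

def trimmed_normal_mle(x, trim=0.1, iters=20):
    """Trimmed likelihood estimator for a normal model: iterate between an
    ML fit on the retained points and discarding the fraction `trim` of
    points with the lowest per-point log-likelihood."""
    keep = np.ones(x.size, dtype=bool)
    for _ in range(iters):
        mu, sd = x[keep].mean(), x[keep].std() + 1e-12
        ll = -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)  # per-point log-likelihood
        cutoff = np.quantile(ll, trim)
        keep = ll > cutoff
    return mu, sd, keep
```

Gross outliers receive very low per-point likelihoods after the first refit and stay trimmed, so the estimates settle on the clean bulk of the data.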
Stone, Clement A.
1992-01-01
Monte Carlo methods are used to evaluate marginal maximum likelihood estimation of item parameters and maximum likelihood estimates of theta in the two-parameter logistic model for varying test lengths, sample sizes, and assumed theta distributions. Results with 100 datasets demonstrate the methods' general precision and stability. Exceptions are…
Silvennoinen, Annestiina; Terasvirta, Timo
2017-01-01
A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations are deterministically time-varying. Parameters of the model are estimated jointly using maximum likelihood. Consistency and asymptotic normality of the maximum likelihood estimators are proved. Numerical aspects of the es…
Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.
2011-01-01
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earth…
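Gridded forecasts of this kind are commonly scored with a joint Poisson log-likelihood over cells (the L-test of the RELM/CSEP literature). A Python sketch of that scoring rule, offered as background rather than code from this study:

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed earthquake counts per cell
    given forecast expected rates (all rates assumed strictly positive).
    Higher values indicate a forecast more consistent with the catalog."""
    rates = np.asarray(rates, float)
    counts = np.asarray(counts, float)
    return np.sum(counts * np.log(rates) - rates - gammaln(counts + 1))
```

Competing forecasts can then be ranked by this score over the same observation period, which is the essence of a competitive evaluation.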
A maximum likelihood estimation framework for delay logistic differential equation model
Mahmoud, Ahmed Adly; Dass, Sarat Chandra; Muthuvalu, Mohana S.
2016-11-01
This paper introduces the maximum likelihood method of estimation for a delay differential equation model governed by an unknown delay and other parameters of interest, followed by a numerical solver approach. As an example we consider the delayed logistic differential equation. A grid-based estimation framework is proposed. Our methodology correctly estimates the delay parameter as well as the initial starting value of the dynamical system based on simulation data. The computations have been carried out with the help of mathematical software: MATLAB® 8.0 R2012b.
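A self-contained Python sketch of the grid-based approach: integrate the delayed logistic equation x'(t) = r x(t)(1 − x(t − τ)) by Euler's method with a constant history, then pick the (delay, initial value) grid point that maximizes a Gaussian log-likelihood of the observations. All names, the Euler discretization, and the fixed growth rate are illustrative assumptions, not the paper's MATLAB implementation.

```python
import numpy as np

def delayed_logistic(r, tau, x0, t_end, dt=0.01):
    """Euler integration of x'(t) = r x(t) (1 - x(t - tau)), constant history x0."""
    n = int(round(t_end / dt)) + 1
    lag = int(round(tau / dt))
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x_lag = x0 if i - lag < 0 else x[i - lag]
        x[i + 1] = x[i] + dt * r * x[i] * (1 - x_lag)
    return x

def grid_mle(t_obs, y_obs, taus, x0s, r=1.5, sigma=0.05, dt=0.01):
    """Grid search over (tau, x0): maximize the Gaussian log-likelihood of the
    observations against the simulated trajectory (r and sigma held fixed)."""
    best, best_ll = None, -np.inf
    idx = (np.asarray(t_obs) / dt).round().astype(int)
    for tau in taus:
        for x0 in x0s:
            x = delayed_logistic(r, tau, x0, max(t_obs), dt)
            ll = -0.5 * np.sum((y_obs - x[idx]) ** 2) / sigma ** 2
            if ll > best_ll:
                best, best_ll = (tau, x0), ll
    return best
```

On noise-free simulated data, the true grid point attains the maximal log-likelihood exactly, mirroring the paper's finding that the delay and initial value are recovered correctly.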
The early maximum likelihood estimation model of audiovisual integration in speech perception
DEFF Research Database (Denmark)
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross…
Yu, Haitao; Liu, Jing; Cai, Lihui; Wang, Jiang; Cao, Yibin; Hao, Chongqing
2017-02-01
Electroencephalogram (EEG) signal evoked by acupuncture stimulation at "Zusanli" acupoint is analyzed to investigate the modulatory effect of manual acupuncture on the functional brain activity. Power spectral density of EEG signal is first calculated based on the autoregressive Burg method. It is shown that the EEG power is significantly increased during and after acupuncture in delta and theta bands, but decreased in alpha band. Furthermore, synchronization likelihood is used to estimate the nonlinear correlation between each pairwise EEG signals. By applying a threshold to resulting synchronization matrices, functional networks for each band are reconstructed and further quantitatively analyzed to study the impact of acupuncture on network structure. Graph theoretical analysis demonstrates that the functional connectivity of the brain undergoes obvious change under different conditions: pre-acupuncture, acupuncture, and post-acupuncture. The minimum path length is largely decreased and the clustering coefficient keeps increasing during and after acupuncture in delta and theta bands. It is indicated that acupuncture can significantly modulate the functional activity of the brain, and facilitate the information transmission within different brain areas. The obtained results may facilitate our understanding of the long-lasting effect of acupuncture on the brain function.
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Estimation of Stochastic Frontier Models with Fixed Effects through Monte Carlo Maximum Likelihood
Directory of Open Access Journals (Sweden)
Grigorios Emvalomatis
2011-01-01
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are updated using information from the data and are robust to possible correlation of the group-specific constant terms with the explanatory variables. Monte Carlo experiments are performed in the specific context of stochastic frontier models to examine and compare the sampling properties of the proposed estimator with those of the random-effects and correlated random-effects estimators. The results suggest that the estimator is unbiased even in short panels. An application to a cross-country panel of EU manufacturing industries is presented as well. The proposed estimator produces a distribution of efficiency scores suggesting that these industries are highly efficient, while the other estimators suggest much poorer performance.
Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters
Aguglia, D; Martins, C.D.A.
2014-01-01
This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such kinds of transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric fields energies represented by stray capacitance in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performances. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...
Sze, N N; Wong, S C; Lee, C Y
2014-12-01
In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and to facilitate continuous road safety improvement. Studies have evaluated the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, little work has been done to evaluate whether quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors that contribute to a target's success, including vehicle ownership, fatality rate, and national income, in addition to the level of ambition and duration of the target. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that were still in progress and had lower levels of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between level of ambition and likelihood of success are also revealed. Copyright © 2014 Elsevier Ltd. All rights reserved.
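The core of the analysis above is an ordinary binary logistic regression fitted by maximum likelihood. A minimal self-contained sketch, with synthetic data standing in for the road-safety targets (the covariate, coefficients, and sample are illustrative, not from the study):

```python
import math
import random

def fit_logistic(xs, ys, iters=4000, lr=0.3):
    """Maximize the Bernoulli log-likelihood of P(y=1|x) = 1/(1+exp(-(a+b*x)))
    by plain gradient ascent (the log-likelihood is concave, so this converges)."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p            # d(logL)/da
            gb += (y - p) * x      # d(logL)/db
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Synthetic data: "more ambitious" targets (larger x) succeed less often.
random.seed(0)
xs = [random.uniform(0.0, 4.0) for _ in range(300)]
ys = [1 if random.random() < 1.0 / (1.0 + math.exp(-(2.0 - 1.5 * x))) else 0
      for x in xs]
a_hat, b_hat = fit_logistic(xs, ys)
print(round(a_hat, 1), round(b_hat, 1))  # estimates should land near (2.0, -1.5)
```

A negative fitted slope reproduces the paper's qualitative finding: a higher level of ambition lowers the estimated likelihood of success.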
Likelihood Analysis of Multivariate Probit Models Using a Parameter Expanded MCEM Algorithm.
Xu, Huiping; Craig, Bruce A
2010-08-01
Multivariate binary data arise in a variety of settings. In this paper, we propose a practical and efficient computational framework for maximum likelihood estimation of multivariate probit regression models. This approach uses the Monte Carlo EM (MCEM) algorithm, with parameter expansion to complete the M-step, to avoid the direct evaluation of the intractable multivariate normal orthant probabilities. The parameter expansion not only enables a closed-form solution in the M-step but also improves efficiency. Using simulation studies, we compare the performance of our approach with the MCEM algorithms developed by Chib and Greenberg (1998) and Song and Lee (2005), as well as the iterative approach proposed by Li and Schafer (2008). Our approach is further illustrated using a real-world example.
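The intractability the authors address arises only in the multivariate case; in one dimension the probit likelihood involves just the normal CDF, available through the error function, and can be maximized directly. A hedged univariate sketch with synthetic data (a crude grid search stands in for a proper optimizer; this is not the paper's MCEM algorithm):

```python
import math
import random

def phi_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_loglik(a, b, xs, ys):
    ll = 0.0
    for x, y in zip(xs, ys):
        p = min(max(phi_cdf(a + b * x), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if y else math.log(1.0 - p)
    return ll

# Synthetic data from a probit model with true parameters (0.5, 1.0).
random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(300)]
ys = [1 if random.random() < phi_cdf(0.5 + 1.0 * x) else 0 for x in xs]

# Crude grid search over (a, b); fine for a one-covariate illustration.
best = max(((a / 10.0, b / 10.0)
            for a in range(-20, 21) for b in range(-20, 21)),
           key=lambda ab: probit_loglik(ab[0], ab[1], xs, ys))
print(best)  # should land near the true (0.5, 1.0)
```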
Yan, Jun; Yu, Kegen; Wu, Lenan
2014-12-01
To mitigate the non-line-of-sight (NLOS) effect, a three-step positioning approach is proposed in this article for target tracking. The possibility that each distance measurement was made under line-of-sight conditions is first obtained by applying the truncated triangular probability-possibility transformation associated with fuzzy modeling. Based on the calculated possibilities, the measurements are used to obtain intermediate position estimates via maximum likelihood estimation (MLE), according to the identified measurement condition. These intermediate position estimates are then filtered using a linear Kalman filter (KF) to produce the final target position estimates. The target motion information and the statistical characteristics of the MLE results are employed in updating the KF parameters. The KF position prediction is exploited for MLE parameter initialization and distance measurement selection. Simulation results demonstrate that the proposed approach outperforms existing algorithms in the presence of unknown NLOS propagation conditions and achieves a performance close to that obtained when propagation conditions are perfectly known.
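The final filtering step can be illustrated with a one-dimensional constant-velocity Kalman filter applied to noisy position estimates, standing in for the paper's MLE outputs. All noise levels and motion parameters below are assumed for illustration:

```python
import random

def kalman_1d(zs, dt=1.0, q=0.01, r=4.0):
    """Constant-velocity Kalman filter for scalar position measurements zs.
    q is a simplified (diagonal) process-noise variance, r the measurement
    noise variance."""
    x = [zs[0], 0.0]                    # state: [position, velocity]
    P = [[r, 0.0], [0.0, 1.0]]          # state covariance
    track = []
    for z in zs[1:]:
        # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with measurement z, observation matrix H = [1, 0]
        s = P[0][0] + r
        k = [P[0][0] / s, P[1][0] / s]
        innov = z - x[0]
        x = [x[0] + k[0] * innov, x[1] + k[1] * innov]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        track.append(x[0])
    return track

random.seed(2)
truth = [0.5 * t for t in range(60)]                 # target moving at 0.5 per step
meas = [p + random.gauss(0.0, 2.0) for p in truth]   # noisy position estimates
filt = kalman_1d(meas)
raw_err = sum(abs(m - t) for m, t in zip(meas[1:], truth[1:])) / len(filt)
kf_err = sum(abs(f - t) for f, t in zip(filt, truth[1:])) / len(filt)
print(kf_err < raw_err)  # filtering reduces the mean position error
```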
Directory of Open Access Journals (Sweden)
Matthew N Benedict
2014-10-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information
Petersen, Maya; Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark
2014-06-18
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the
Semantic Likelihood Models for Bayesian Inference in Human-Robot Interaction
Sweet, Nicholas
Autonomous systems, particularly unmanned aerial systems (UAS), remain limited in autonomous capabilities largely due to a poor understanding of their environment. Current sensors simply do not match human perceptive capabilities, impeding progress towards full autonomy. Recent work has shown the value of humans as sources of information within a human-robot team; in target applications, communicating human-generated 'soft data' to autonomous systems enables higher levels of autonomy through large, efficient information gains. This requires development of a 'human sensor model' that allows soft data fusion through Bayesian inference to update the probabilistic belief representations maintained by autonomous systems. Current human sensor models that capture linguistic inputs as semantic information are limited in their ability to generalize likelihood functions for semantic statements: they may be learned from dense data; they do not exploit the contextual information embedded within groundings; and they often limit human input to restrictive and simplistic interfaces. This work provides mechanisms to synthesize human sensor models from constraints based on easily attainable a priori knowledge, develops compression techniques to capture information-dense semantics, and investigates the problem of capturing and fusing semantic information contained within unstructured natural language. A robotic experimental testbed is also developed to validate the above contributions.
Romeo, José S; Torres-Avilés, Francisco; López-Kleine, Liliana
2013-02-01
Publicly available genomic data are a great source of biological knowledge that can be extracted when appropriate data analysis is used. Predicting the biological function of genes is of interest for understanding molecular mechanisms of virulence and resistance in pathogens and hosts and is important for drug discovery and disease control. This is commonly done by searching for similar gene expression behavior. Here, we used publicly available Streptococcus pyogenes microarray data obtained during primate infection to identify genes that have a potential influence on virulence, and Phytophthora infestans-inoculated tomato microarray data to identify genes potentially implicated in resistance processes. This approach goes beyond co-expression analysis. We employed a quasi-likelihood model, separated by primate gender/inoculation condition, to model the median gene expression of known virulence/resistance factors. Based on this model, an influence analysis considering time-course measurements was performed to detect genes with atypical expression. This procedure allowed for the detection of genes potentially implicated in the infection process. Finally, we discuss the biological meaning of these results, showing that influence analysis is an efficient and useful alternative for functional gene prediction.
Maximum Likelihood Implementation of an Isolation-with-Migration Model for Three Species.
Dalquen, Daniel A; Zhu, Tianqi; Yang, Ziheng
2017-05-01
We develop a maximum likelihood (ML) method for estimating migration rates between species using genomic sequence data. A species tree is used to accommodate the phylogenetic relationships among three species, allowing for migration between the two sister species, while the third species is used as an out-group. A Markov chain characterization of the genealogical process of coalescence and migration is used to integrate out the migration histories at each locus analytically, whereas Gaussian quadrature is used to integrate over the coalescent times on each genealogical tree numerically. This is an extension of our early implementation of the symmetrical isolation-with-migration model for three species to accommodate arbitrary loci with two or three sequences per locus and to allow asymmetrical migration rates. Our implementation can accommodate tens of thousands of loci, making it feasible to analyze genome-scale data sets to test for gene flow. We calculate the posterior probabilities of gene trees at individual loci to identify genomic regions that are likely to have been transferred between species due to gene flow. We conduct a simulation study to examine the statistical properties of the likelihood ratio test for gene flow between the two in-group species and of the ML estimates of model parameters such as the migration rate. Inclusion of data from a third out-group species is found to dramatically increase the power of the test and the precision of parameter estimation. We compiled and analyzed several genomic data sets from the Drosophila fruit flies. Our analyses suggest no migration from D. melanogaster to D. simulans, and a significant amount of gene flow from D. simulans to D. melanogaster, at the rate of ~0.02 migrant individuals per generation. We discuss the utility of the multispecies coalescent model for species tree estimation, accounting for incomplete lineage sorting and migration. © The Author(s) 2016. Published by Oxford University Press.
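The likelihood ratio test used to detect gene flow follows the standard recipe: twice the log-likelihood gain of the richer model, referred to a chi-squared distribution. A toy sketch with a Poisson model for hypothetical migrant counts (this is not the paper's coalescent likelihood, and the fixed nonzero null rate sidesteps the boundary issue that arises when testing a rate of exactly zero):

```python
import math

def poisson_loglik(lam, ys):
    """Poisson log-likelihood for counts ys at rate lam."""
    return sum(y * math.log(lam) - lam - math.lgamma(y + 1) for y in ys)

# Hypothetical migrant counts per generation at several loci.
ys = [0, 1, 0, 2, 1, 0, 0, 1]

# H0: a small fixed background rate; H1: a freely estimated rate.
lam0 = 0.2
lam_hat = sum(ys) / len(ys)               # MLE under H1 (sample mean)
lr = 2.0 * (poisson_loglik(lam_hat, ys) - poisson_loglik(lam0, ys))
# Compare to the chi-squared critical value with 1 df at the 5% level (3.84).
print(round(lr, 2), lr > 3.84)  # → 4.59 True
```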
Directory of Open Access Journals (Sweden)
Francesco Bartolucci
2017-06-01
We illustrate the R package cquad for conditional maximum likelihood estimation of the quadratic exponential (QE) model proposed by Bartolucci and Nigro (2010) for the analysis of binary panel data. The package also allows us to estimate certain modified versions of the QE model, which are based on alternative parametrizations, and it includes a function for the pseudo-conditional likelihood estimation of the dynamic logit model, as proposed by Bartolucci and Nigro (2012). We also illustrate a reduced version of this package that is available in Stata. The use of the main functions of the package is illustrated through examples based on labor market data.
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often result from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distributions using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest, the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression applied to the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to the specification of the background prevalence, which is not objectively estimated by the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities
Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F
2011-10-04
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2 El Mayor-Cucapah earthquake of April 4, 2010, in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
DEFF Research Database (Denmark)
Silvennoinen, Annestiina; Terasvirta, Timo
A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations are deterministically time-varying. Parameters of the model are estimated jointly using maximum likelihood. Consistency and asymptotic normality of the maximum likelihood estimators are proved. Numerical aspects of the estimation algorithm are discussed. A bivariate empirical example is provided.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas Harder; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. [...] insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease, where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...
Kelderman, Henk
1992-01-01
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
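The iterative proportional fitting algorithm mentioned above can be sketched in a few lines for a two-way table: rows and columns are rescaled in turn until the fitted margins match their targets, which are exactly the minimal sufficient statistics of the independence loglinear model.

```python
def ipf(table, row_targets, col_targets, iters=50):
    """Iterative proportional fitting: alternately scale rows and columns
    of a two-way table until its margins match the target sums."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, rt in enumerate(row_targets):               # match row sums
            s = sum(t[i])
            t[i] = [v * rt / s for v in t[i]]
        for j, ct in enumerate(col_targets):               # match column sums
            s = sum(t[i][j] for i in range(len(t)))
            for i in range(len(t)):
                t[i][j] *= ct / s
    return t

# Starting from a uniform table, IPF reproduces the independence fit.
fitted = ipf([[1.0, 1.0], [1.0, 1.0]], row_targets=[30, 70], col_targets=[40, 60])
print([[round(v, 1) for v in row] for row in fitted])  # → [[12.0, 18.0], [28.0, 42.0]]
```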
Kelderman, Henk
1991-01-01
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
Animal Models of Subjective Tinnitus
Directory of Open Access Journals (Sweden)
Wolfger von der Behrens
2014-01-01
Tinnitus is one of the major audiological diseases, affecting a significant portion of the ageing society. Despite its huge personal and presumed economic impact there are only limited therapeutic options available. The reason for this deficiency lies in the very nature of the disease as it is deeply connected to elementary plasticity of auditory processing in the central nervous system. Understanding these mechanisms is essential for developing a therapy that reverses the plastic changes underlying the pathogenesis of tinnitus. This requires experiments that address individual neurons and small networks, something usually not feasible in human patients. However, in animals such invasive experiments on the level of single neurons with high spatial and temporal resolution are possible. Therefore, animal models are a very critical element in the combined efforts for engineering new therapies. This review provides an overview of the most important features of animal models of tinnitus: which laboratory species are suitable, how to induce tinnitus, and how to characterize the perceived tinnitus by behavioral means. In particular, these aspects of tinnitus animal models are discussed in the light of transferability to human patients.
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Barrett, Jessica; Diggle, Peter; Henderson, Robin; Taylor-Robinson, David
2015-01-01
Random effects or shared parameter models are commonly advocated for the analysis of combined repeated measurement and event history data, including dropout from longitudinal trials. Their use in practical applications has generally been limited by computational cost and complexity, meaning that only simple special cases can be fitted by using readily available software. We propose a new approach that exploits recent distributional results for the extended skew normal family to allow exact likelihood inference for a flexible class of random-effects models. The method uses a discretization of the timescale for the time-to-event outcome, which is often unavoidable in any case when events correspond to dropout. We place no restriction on the times at which repeated measurements are made. An analysis of repeated lung function measurements in a cystic fibrosis cohort is used to illustrate the method.
Implications of the Regional Earthquake Likelihood Models test of earthquake forecasts in California
Directory of Open Access Journals (Sweden)
Michael Karl Sachs
2012-09-01
The Regional Earthquake Likelihood Models (RELM) test was the first competitive comparison of prospective earthquake forecasts. The test was carried out over 5 years, from 1 January 2006 to 31 December 2010, over a region that included all of California. The test area was divided into 7682 spatial cells of 0.1°x0.1°. Each submitted forecast gave the predicted numbers of earthquakes Nemi larger than M=4.95 in 0.1-magnitude bins m for each cell i. In this paper we present a method that separates the forecast of the number of test earthquakes from the forecast of their locations. We first obtain the number Nem of forecast earthquakes in magnitude bin m. We then determine the conditional probability λemi = Nemi/Nem that an earthquake in magnitude bin m will occur in cell i. The summation of λemi over all 7682 cells is unity. A random (no-skill) forecast gives equal values of λemi for all spatial cells and magnitude bins. The skill of a forecast, in terms of the locations of the earthquakes, is measured by its success in assigning large values of λemi to the cells in which earthquakes occur and low values of λemi to the cells where earthquakes do not occur. The thirty-one test earthquakes occurred in 27 different combinations of spatial cells i and magnitude bins m. We evaluate the performance of eleven submitted forecasts in two ways. First, we determine the number of mi cells for which a forecast's λemi was the largest; the best forecast is the one with the highest count. Second, we determine the mean value of λemi over the 27 mi cells for each forecast; the best forecast has the highest mean value. The success of a forecast during the test period depends on the allocation of the probabilities λemi among the mi cells, since the sum of λemi over all cells is unity. We illustrate the forecast distributions of λemi and discuss their differences. We conclude that the RELM test was successful in
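The location-skill measure described above reduces to normalizing each forecast's expected counts into conditional probabilities λ and averaging them over the cells in which earthquakes actually occurred. A toy sketch with invented forecasts and a single magnitude bin:

```python
# Hypothetical forecasts: expected earthquake counts per spatial cell.
forecasts = {
    "A": [5.0, 1.0, 1.0, 1.0],   # concentrates probability in cell 0
    "B": [2.0, 2.0, 2.0, 2.0],   # uniform: the no-skill reference
}
observed_cells = [0, 0, 2]        # cells in which test earthquakes occurred

def location_skill(counts, cells):
    """Mean conditional probability lambda assigned to the observed cells."""
    total = sum(counts)
    lam = [c / total for c in counts]          # lambdas sum to 1 over cells
    return sum(lam[i] for i in cells) / len(cells)

scores = {name: location_skill(c, observed_cells) for name, c in forecasts.items()}
print({k: round(v, 3) for k, v in scores.items()})  # → {'A': 0.458, 'B': 0.25}
```

Forecast A scores higher than the uniform reference because it concentrates probability where the events occurred.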
Directory of Open Access Journals (Sweden)
Jensen Just
2004-01-01
A Gaussian mixture model with a finite number of components and correlated random effects is described. The ultimate objective is to model somatic cell count information in dairy cattle and to develop criteria for genetic selection against mastitis, an important udder disease. Parameter estimation is by maximum likelihood or by an extension of restricted maximum likelihood. A Monte Carlo expectation-maximization algorithm is used for this purpose. The expectation step is carried out using Gibbs sampling, whereas the maximization step is deterministic. Ranking rules based on the conditional probability of membership in a putative group of uninfected animals, given the somatic cell information, are discussed. Several extensions of the model are suggested.
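A minimal deterministic EM fit of a two-component univariate Gaussian mixture illustrates the E- and M-steps; the paper's Monte Carlo E-step (Gibbs sampling) is needed only because of the correlated random effects, which this sketch omits. All data below are synthetic:

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """EM for a two-component univariate Gaussian mixture."""
    mu = [min(data), max(data)]          # spread-out initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            d = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) for k in (0, 1)]
            s = d[0] + d[1]
            resp.append((d[0] / s, d[1] / s))
        # M-step: weighted means, variances, and mixing proportions
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-6
            w[k] = nk / len(data)
    return mu, var, w

random.seed(3)
data = ([random.gauss(0.0, 1.0) for _ in range(300)]     # "uninfected" group
        + [random.gauss(4.0, 1.0) for _ in range(100)])  # elevated counts
mu, var, w = em_gmm_1d(data)
print(sorted(round(m, 1) for m in mu))  # component means near 0 and 4
```

The fitted responsibilities play the same role as the paper's conditional probabilities of membership in the uninfected group.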
Zhang, Ruoqiao; Thibault, Jean-Baptiste; Bouman, Charles A; Sauer, Ken D; Hsieh, Jiang
2014-01-01
Dual-energy X-ray CT (DECT) has the potential to improve contrast and reduce artifacts as compared to traditional CT. Moreover, by applying model-based iterative reconstruction (MBIR) to dual-energy data, one might also expect to reduce noise and improve resolution. However, the direct implementation of dual-energy MBIR requires the use of a nonlinear forward model, which increases both complexity and computation. Alternatively, simplified forward models have been used which treat the material-decomposed channels separately, but these approaches do not fully account for the statistical dependencies in the channels. In this paper, we present a method for joint dual-energy MBIR (JDE-MBIR), which simplifies the forward model while still accounting for the complete statistical dependency in the material-decomposed sinogram components. The JDE-MBIR approach works by using a quadratic approximation to the polychromatic log-likelihood and a simple but exact nonnegativity constraint in the image domain. We demonstrate that our method is particularly effective when the DECT system uses fast kVp switching, since in this case the model accounts for the inaccuracy of interpolated sinogram entries. Both phantom and clinical results show that the proposed model produces images that compare favorably in quality to previous decomposition-based methods, including FBP and other statistical iterative approaches.
Directory of Open Access Journals (Sweden)
Yi Zhou
2018-01-01
The finite mixture model (FMM) is increasingly used for unsupervised image segmentation. In this paper, a new finite mixture model based on a combination of generalized Gamma and Gaussian distributions using a trimmed likelihood estimator (GGMM-TLE) is proposed. GGMM-TLE combines the effectiveness of the Gaussian distribution with the asymmetric capability of the generalized Gamma distribution to provide superior flexibility for describing different shapes of observation data. Another advantage is that we consider the spatial information among neighbouring pixels by introducing a Markov random field (MRF); thus, the proposed mixture model remains sufficiently robust to different types and levels of noise. Moreover, this paper presents a new component-based confidence-level-ordering trimmed likelihood estimator, with a simple form, allowing GGMM-TLE to estimate the parameters after discarding the outliers. Thus, the proposed algorithm can effectively eliminate the disturbance of outliers. Furthermore, the paper proves the identifiability of the proposed mixture model in theory to guarantee that the parameter estimation procedures are well defined. Finally, an expectation-maximization (EM) algorithm is included to estimate the parameters of GGMM-TLE by maximizing the log-likelihood function. Experiments on multiple public datasets demonstrate that GGMM-TLE achieves superior performance compared with several existing methods in image segmentation tasks.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
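For intuition, the sensitivity and specificity of a serial two-stage screen (refer only subjects positive at both stages) combine in closed form under an assumption of conditional independence between stages; the per-stage operating points below are invented, not estimated from the study's data:

```python
# Serial two-stage screen: a subject is referred only if BOTH stages are positive.
sens1, spec1 = 0.95, 0.70     # stage 1: sensitive, cheap
sens2, spec2 = 0.90, 0.95     # stage 2: specific, applied to stage-1 positives

sens_overall = sens1 * sens2                  # both stages must detect a true case
spec_overall = spec1 + (1 - spec1) * spec2    # either stage may clear a normal
print(round(sens_overall, 3), round(spec_overall, 3))  # → 0.855 0.985
```

The paper's maximum likelihood machinery estimates the underlying per-stage parameters from rating data rather than assuming them, and relaxes the independence assumption implicit in this sketch.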
Modelling the Likelihood of Line-of-Sight for air-to-ground radio propagation in urban environments
Feng, Q; Tameh, EK; Nix, AR; McGeehan, JP
2006-01-01
The likelihood of line-of-sight (LoS) is an essential component in any radio channel model. It is particularly useful for radio network planning and urban coverage prediction. Empirical LoS models are hard to derive due to a strong dependency on local topology and the need for large measurement datasets. Since buildings are the major obstructions in a dense urban environment, we propose a new theoretical model to determine the LoS probability for air-to-ground channels based on local building...
Meyer, P. D.; Ye, M.; Neuman, S. P.; Rockhold, M. L.
2006-12-01
Applications of groundwater flow and transport models to regulatory and design problems have illustrated the potential importance of accounting for uncertainties in model conceptualization and structure as well as model parameters. One approach to this issue is to characterize model uncertainty using a discrete set of alternatives and assess the prediction uncertainty arising from the joint impact of model and parameter uncertainty. We demonstrate the application of this approach to the modeling of groundwater flow and uranium transport at the 300 Area of the Dept. of Energy Hanford Site in Washington State using the recently developed Maximum Likelihood Bayesian Model Averaging (MLBMA) method. Model uncertainty was included using alternative representations of the hydrogeologic units at the 300 Area and alternative representations of uranium adsorption. Parameter uncertainties for each model were based on the estimated parameter covariances resulting from the joint calibration of each model alternative to observations of hydraulic head and uranium concentration. The relative plausibility of each calibrated model was expressed in terms of a posterior model probability computed on the basis of Kashyap's information criterion KIC. Results of the application show that model uncertainty may dominate parameter uncertainty for the set of alternative models considered. We discuss the sensitivity of model probabilities to differences in KIC values and examine the effect of particular calibration data on model probabilities. In addition, we discuss the advantages of KIC over other model discrimination criteria for estimating model probabilities.
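In MLBMA, the posterior model probabilities are computed from the KIC values of the calibrated alternatives, with smaller KIC implying higher plausibility, in a form analogous to BIC-based model weights. A sketch with hypothetical KIC values and uniform prior model probabilities:

```python
import math

def posterior_model_probs(kic, priors=None):
    """Posterior model probabilities: p(Mk | D) proportional to
    exp(-KIC_k / 2) * p(Mk)."""
    n = len(kic)
    priors = priors or [1.0 / n] * n
    kmin = min(kic)                       # subtract the minimum for stability
    w = [math.exp(-(k - kmin) / 2.0) * p for k, p in zip(kic, priors)]
    s = sum(w)
    return [v / s for v in w]

# Hypothetical KIC values for three calibrated model alternatives.
probs = posterior_model_probs([110.0, 114.0, 120.0])
print([round(p, 3) for p in probs])  # → [0.876, 0.118, 0.006]
```

Even a KIC difference of a few units concentrates most of the posterior probability on one model, which is why the paper examines the sensitivity of model probabilities to differences in KIC.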
Sentürk, Damla; Dalrymple, Lorien S; Mu, Yi; Nguyen, Danh V
2014-11-10
We propose a new weighted hurdle regression method for modeling count data, with particular interest in modeling cardiovascular events in patients on dialysis. Cardiovascular disease remains one of the leading causes of hospitalization and death in this population. Our aim is to jointly model the relationship between covariates and (i) the probability of cardiovascular events, a binary process, and (ii) the rate of events once the realization is positive, when the 'hurdle' is crossed, using a zero-truncated Poisson distribution. When the observation period or follow-up time, from the start of dialysis, varies among individuals, the estimated probability of positive cardiovascular events during the study period will be biased. Furthermore, when the model contains covariates, the estimated relationship between the covariates and the probability of cardiovascular events will also be biased. These challenges are addressed with the proposed weighted hurdle regression method. Estimation for the weighted hurdle regression model is a weighted likelihood approach, in which standard maximum likelihood estimation can be utilized. The method is illustrated with data from the United States Renal Data System. Simulation studies show the ability of the proposed method to successfully adjust for differential follow-up times and incorporate the effects of covariates in the weighting. Copyright © 2014 John Wiley & Sons, Ltd.
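The zero-truncated Poisson piece of such a hurdle model lends itself to a compact sketch. The following is a hedged illustration in Python; the simulation and function names are mine, not the authors', and the unit weights stand in for the follow-up-time weights of the proposed method.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def ztp_negloglik(beta, X, y, w):
    """Weighted negative log-likelihood of zero-truncated Poisson
    regression (positive counts only), with log link lam = exp(X @ beta)."""
    lam = np.exp(X @ beta)
    ll = y * np.log(lam) - lam - np.log1p(-np.exp(-lam)) - gammaln(y + 1)
    return -np.sum(w * ll)

# illustrative fit on simulated counts; truncation keeps positives only
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
y = rng.poisson(np.exp(0.4 + 0.3 * X[:, 1]))
keep = y > 0
fit = minimize(ztp_negloglik, np.zeros(2), method="BFGS",
               args=(X[keep], y[keep], np.ones(keep.sum())))
```

With all weights equal to one this is ordinary maximum likelihood; the method in the abstract would plug differential-follow-up weights into `w`.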
Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm
Jansen, R.C.
A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
Pal, Suvra; Balakrishnan, N
2017-10-01
In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over-dispersion and the under-dispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when an improper competing cause distribution is selected, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
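For reference, the Conway-Maxwell Poisson pmf that provides this flexibility can be evaluated directly. A minimal sketch follows; the truncation constant `kmax` and the function name are my choices, not from the paper.

```python
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, kmax=200):
    """Conway-Maxwell Poisson pmf: P(Y = y) proportional to lam**y / (y!)**nu.
    nu < 1 gives over-dispersion, nu > 1 under-dispersion, and nu = 1
    recovers the ordinary Poisson. kmax truncates the normalizing sum."""
    k = np.arange(kmax)
    logw = k * np.log(lam) - nu * gammaln(k + 1)
    logZ = logw.max() + np.log(np.exp(logw - logw.max()).sum())
    return float(np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logZ))

p = com_poisson_pmf(3, 2.0, 1.0)   # matches the Poisson pmf at nu = 1
```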
Falk, Carl F.; Cai, Li
2015-01-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang’s (2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple-group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives. PMID:25487423
Directory of Open Access Journals (Sweden)
Mingyu Liu
2016-12-01
Nowadays, the use of freeform surfaces in various functional applications has become more widespread. Multi-sensor coordinate measuring machines (CMMs) are becoming popular and are produced by many CMM manufacturers, since their measurement ability can be significantly improved with the help of different kinds of sensors, and fusing the data from multiple sensors can further improve measurement accuracy. However, the improvement is affected by many issues in practice, especially when the measurement results have bias and there is uncertainty regarding the data modelling method. This paper proposes a generic data modelling and data fusion method for the measurement of freeform surfaces using multi-sensor CMMs and studies the factors which affect the fusion result. Building on a data modelling method for the original measurement datasets and the statistical Bayesian inference approach to data fusion, this paper presents a Gaussian process data modelling and maximum likelihood data fusion method for supporting multi-sensor CMM measurement of freeform surfaces. The datasets from different sensors are first modelled with a Gaussian process to obtain mean surfaces and covariance surfaces, which represent the underlying surfaces and the associated measurement uncertainties. The mean and covariance surfaces are then fused under the maximum likelihood principle to obtain the statistically best estimate of the underlying surface and its associated measurement uncertainty. With this fusion method, the overall measurement uncertainty after fusion is smaller than that of each single-sensor measurement. The capability of the proposed method is demonstrated through a series of simulations and real measurements of freeform surfaces on a multi-sensor CMM. The accuracy of the Gaussian process data modelling and the influence of form error and measurement noise are also discussed and demonstrated in a series of experiments.
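The maximum likelihood fusion step has a closed form for independent Gaussian estimates: precision-weighted averaging. A minimal pointwise sketch (the paper applies this idea across whole mean and covariance surfaces; the function name is mine):

```python
import numpy as np

def ml_fuse(means, variances):
    """Maximum-likelihood fusion of independent Gaussian estimates:
    inverse-variance weighted mean. The fused variance never exceeds
    the smallest input variance."""
    means = np.asarray(means, dtype=float)
    prec = 1.0 / np.asarray(variances, dtype=float)   # precisions
    var_fused = 1.0 / prec.sum()
    mean_fused = var_fused * np.sum(prec * means)
    return mean_fused, var_fused

m, v = ml_fuse([1.0, 2.0], [1.0, 1.0])   # → (1.5, 0.5)
```

The fused variance 1/Σ(1/σ_j²) being smaller than any single input variance is exactly the sense in which the abstract says the overall measurement uncertainty after fusion is reduced.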
Klein, Daniel; Zezula, Ivan
The extended growth curve model is discussed in this paper. There are two versions of the model studied in the literature, which differ in how the column spaces of the design matrices are nested. The nesting is applied either to the between-individual or to the within-individual design
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional...... time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution...... of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...
Johnson, T. J.; Harding, A. K.; Venter, C.
2012-01-01
Pulsed gamma rays have been detected with the Fermi Large Area Telescope (LAT) from more than 20 millisecond pulsars (MSPs), some of which were discovered in radio observations of bright, unassociated LAT sources. We have fit the radio and gamma-ray light curves of 19 LAT-detected MSPs in the context of geometric, outer magnetospheric emission models assuming the retarded vacuum dipole magnetic field using a Markov chain Monte Carlo maximum likelihood technique. We find that, in many cases, the models are able to reproduce the observed light curves well and provide constraints on the viewing geometries that are in agreement with those from radio polarization measurements. Additionally, for some MSPs we constrain the altitudes of both the gamma-ray and radio emission regions. The best-fit magnetic inclination angles are found to cover a broader range than those of non-recycled gamma-ray pulsars.
A Monte Carlo Study of Marginal Maximum Likelihood Parameter Estimates for the Graded Model.
Ankenmann, Robert D.; Stone, Clement A.
Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,
Inferring fixed effects in a mixed linear model from an integrated likelihood
DEFF Research Database (Denmark)
Gianola, Daniel; Sorensen, Daniel
2008-01-01
of all nuisances, viewing random effects and variance components as missing data. In a simulation of a grazing trial, the procedure was compared with four widely used estimators of fixed effects in mixed models, and found to be competitive. An analysis of body weight in freshwater crayfish was conducted...
Haned, Hinda; Benschop, Corina C G; Gill, Peter D; Sijen, Titia
2015-05-01
The interpretation of mixed DNA profiles obtained from low template DNA samples has proven to be a particularly difficult task in forensic casework. Newly developed likelihood ratio (LR) models that account for PCR-related stochastic effects, such as allelic drop-out, drop-in and stutters, have enabled the analysis of complex cases that would otherwise have been reported as inconclusive. In such samples, there are uncertainties about the number of contributors, and the correct sets of propositions to consider. Using experimental samples, where the genotypes of the donors are known, we evaluated the feasibility and the relevance of the interpretation of high order mixtures, of three, four and five donors. The relative risks of analyzing high order mixtures of three, four, and five donors, were established by comparison of a 'gold standard' LR, to the LR that would be obtained in casework. The 'gold standard' LR is the ideal LR: since the genotypes and number of contributors are known, it follows that the parameters needed to compute the LR can be determined per contributor. The 'casework LR' was calculated as used in standard practice, where unknown donors are assumed; the parameters were estimated from the available data. Both LRs were calculated using the basic standard model, also termed the drop-out/drop-in model, implemented in the LRmix module of the R package Forensim. We show how our results furthered the understanding of the relevance of analyzing high order mixtures in a forensic context. Limitations are highlighted, and it is illustrated how our study serves as a guide to implement likelihood ratio interpretation of complex DNA profiles in forensic casework. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Chase, Henry W; Kumar, Poornima; Eickhoff, Simon B; Dombrovski, Alexandre Y
2015-06-01
Reinforcement learning describes motivated behavior in terms of two abstract signals. The representation of discrepancies between expected and actual rewards/punishments, the prediction error, is thought to update the expected value of actions and predictive stimuli. Electrophysiological and lesion studies have suggested that mesostriatal prediction error signals control behavior through synaptic modification of cortico-striato-thalamic networks. Signals in the ventromedial prefrontal and orbitofrontal cortex are implicated in representing expected value. To obtain unbiased maps of these representations in the human brain, we performed a meta-analysis of functional magnetic resonance imaging studies that had employed algorithmic reinforcement learning models across a variety of experimental paradigms. We found that the ventral striatum (medial and lateral) and midbrain/thalamus represented reward prediction errors, consistent with animal studies. Prediction error signals were also seen in the frontal operculum/insula, particularly for social rewards. In Pavlovian studies, striatal prediction error signals extended into the amygdala, whereas instrumental tasks engaged the caudate. Prediction error maps were sensitive to the model-fitting procedure (fixed or individually estimated) and to the extent of spatial smoothing. A correlate of expected value was found in a posterior region of the ventromedial prefrontal cortex, caudal and medial to the orbitofrontal regions identified in animal studies. These findings highlight a reproducible motif of reinforcement learning in the cortico-striatal loops and identify methodological dimensions that may influence the reproducibility of activation patterns across studies.
Mousavi, Sayyed R; Khodadadi, Ilnaz; Falsafain, Hossein; Nadimi, Reza; Ghadiri, Nasser
2014-06-07
Human haplotypes include essential information about SNPs, which in turn provide valuable information for studies such as finding relationships between diseases and their potential genetic causes, e.g., Genome Wide Association Studies. Due to the expense of directly determining haplotypes and recent progress in high throughput sequencing, there has been increasing motivation for haplotype assembly, the problem of finding a pair of haplotypes from a set of aligned fragments. Although the problem has been extensively studied and a number of algorithms have already been proposed, more accurate methods are still beneficial because of the high importance of haplotype information. In this paper, we first develop a probabilistic model that incorporates the Minor Allele Frequency (MAF) of SNP sites, which is missing from the existing maximum likelihood models. We then show that the probabilistic model reduces to the Minimum Error Correction (MEC) model when the MAF information is omitted and some approximations are made. This result provides novel theoretical support for the MEC, despite some criticisms against it in the recent literature. Next, under the same approximations, we simplify the model to an extension of the MEC in which the MAF information is used. Finally, we extend the haplotype assembly algorithm HapSAT by developing a weighted Max-SAT formulation for the simplified model, which is evaluated empirically with positive results. Copyright © 2014 Elsevier Ltd. All rights reserved.
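The MEC objective the authors connect to their probabilistic model is simple to state in code: each fragment is charged the smaller of its disagreements with a candidate haplotype or with its complement. A toy sketch (binary alleles, '-' for uncovered sites; not the HapSAT implementation):

```python
def mec_score(fragments, hap):
    """Minimum Error Correction score: each aligned fragment is assigned
    to the haplotype (hap or its complement) it disagrees with least,
    and the score sums those disagreements. '-' marks an uncovered site."""
    comp = ''.join('1' if b == '0' else '0' for b in hap)
    score = 0
    for frag in fragments:
        d1 = sum(1 for f, h in zip(frag, hap) if f != '-' and f != h)
        d2 = sum(1 for f, h in zip(frag, comp) if f != '-' and f != h)
        score += min(d1, d2)
    return score

# two error-free fragments drawn from complementary haplotypes
mec_score(['010-', '-101'], '0101')  # → 0
```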
Directory of Open Access Journals (Sweden)
Salces Judit
2011-08-01
Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework; NormFinder, a widely used package, takes a model-based approach. The pairwise comparison procedure implemented in geNorm is simpler but among the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected among a variety of models using goodness of fit and predictive ability criteria. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. The top- and bottom-ranked genes were similar across the three approaches; however, notable differences appeared in the best pair of genes selected by each method and in the rest of the rankings. Differences among the expression values of normalized targets were also found for each statistical approach. Conclusions: The optimal statistical properties of Maximum Likelihood estimation, combined with the flexibility of mixed models, allow for more accurate estimation of gene expression stability under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
Analytic methods for cosmological likelihoods
Taylor, A. N.; Kitching, T. D.
2010-10-01
We present general, analytic methods for cosmological likelihood analysis and solve the `many parameters' problem in cosmology. Maxima are found by Newton's method, while marginalization over nuisance parameters, and parameter errors and covariances are estimated by analytic marginalization of an arbitrary likelihood function, expanding the log-likelihood to second order, with flat or Gaussian priors. We show that information about remaining parameters is preserved by marginalization. Marginalizing over all parameters, we find an analytic expression for the Bayesian evidence for model selection. We apply these methods to data described by Gaussian likelihoods with parameters in the mean and covariance. These methods can speed up conventional likelihood analysis by orders of magnitude when combined with Markov chain Monte Carlo methods, while Bayesian model selection becomes effectively instantaneous.
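The second-order expansion underlying these analytic marginals reduces, for a Gaussian-approximated likelihood, to linear algebra on the Hessian: the parameter covariance is the inverse Hessian of the negative log-likelihood at its peak, and the marginal 1-sigma error on parameter i is the square root of the i-th diagonal element. An illustrative fragment (the numbers are invented):

```python
import numpy as np

# Second-order (Laplace) treatment of a log-likelihood around its peak:
# with logL ≈ logL_max - 0.5 (θ-θ̂)ᵀ H (θ-θ̂), the parameter covariance
# is H⁻¹ and the marginal 1σ error on θ_i is sqrt((H⁻¹)_ii).
H = np.array([[4.0, 2.0],
              [2.0, 2.0]])          # Hessian of -logL at the maximum
cov = np.linalg.inv(H)
marginal_sigma = np.sqrt(np.diag(cov))
```

Note that the marginal error sqrt((H⁻¹)₀₀) ≈ 0.71 exceeds the conditional error 1/sqrt(H₀₀) = 0.5: marginalizing over a correlated nuisance parameter inflates the quoted uncertainty, which is the information the analytic marginalization is designed to preserve.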
Storm, Emma; Weniger, Christoph; Calore, Francesca
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
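Penalized Poisson likelihood template regression of this kind can be caricatured in a few lines. This is a deliberately tiny stand-in (a quadratic penalty instead of the paper's entropy-motivated regularizers, and none of the Cholesky machinery), but it shows the objective-plus-L-BFGS-B structure; all names are mine.

```python
import numpy as np
from scipy.optimize import minimize

def fit_templates(counts, templates, lam=1.0):
    """Penalized Poisson likelihood template regression (sketch):
    model mu = templates @ a and add a quadratic penalty pulling the
    template normalizations a_j toward 1, a stand-in for the
    entropy-motivated regularizers used in SkyFACT."""
    def objective(a):
        mu = templates @ a
        nll = np.sum(mu - counts * np.log(mu))    # Poisson part, up to a constant
        return nll + lam * np.sum((a - 1.0) ** 2)
    res = minimize(objective, np.ones(templates.shape[1]),
                   method="L-BFGS-B",
                   bounds=[(1e-6, None)] * templates.shape[1])  # keep mu > 0
    return res.x
```

With the penalty switched off and counts that the templates can represent exactly, the fit recovers the true normalizations.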
Directory of Open Access Journals (Sweden)
Huilin Yin
2015-01-01
Imperfect preventive maintenance (PM) activities are very common in industrial systems. For condition-based maintenance (CBM), it is necessary to model the failure likelihood of systems subject to imperfect PM activities. In this paper, models from the field of survival analysis are introduced into CBM: the generalized accelerated failure time (AFT) frailty model is investigated to model the failure likelihood of industrial systems. Further, building on traditional maximum likelihood (ML) estimation and the expectation maximization (EM) algorithm, a hybrid ML-EM algorithm is investigated for the estimation of parameters, and the hybrid iterative estimation procedure is analyzed in detail. In the evaluation experiment, data generated from a typical degradation model are verified to be appropriate for real industrial processes with imperfect PM activities. The model parameters are estimated using the training data, and the performance of the model is then analyzed through prediction of remaining useful life (RUL) on the testing data. Finally, comparison between the results of the proposed model and the existing model verifies the effectiveness of the generalized AFT frailty model.
Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J
2016-03-01
Information from various public and private data sources of extremely large sample sizes is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Directory of Open Access Journals (Sweden)
Katarzyna A Dembek
BACKGROUND: Medical management of critically ill equine neonates (foals) can be expensive and labor intensive. Predicting the odds of foal survival using clinical information could facilitate the decision-making process for owners and clinicians. Numerous prognostic indicators and mathematical models to predict outcome in foals have been published; however, a validated scoring method to predict survival in sick foals has not been reported. The goal of this study was to develop and validate a scoring system that can be used by clinicians to predict the likelihood of survival of equine neonates based on clinical data obtained on admission. METHODS AND RESULTS: Data from 339 hospitalized foals of less than four days of age admitted to three equine hospitals were included to develop the model. Thirty-seven variables including historical information, physical examination and laboratory findings were analyzed by generalized boosted regression modeling (GBM) to determine which ones would be included in the survival score. Of these, six variables were retained in the final model. The weight for each variable was calculated using a generalized linear model and the probability of survival for each total score was determined. The highest (7) and the lowest (0) scores represented 97% and 3% probability of survival, respectively. The accuracy of this survival score was validated in a prospective study on data from 283 hospitalized foals from the same three hospitals. Sensitivity, specificity, and positive and negative predictive values for the survival score in the prospective population were 96%, 71%, 91%, and 85%, respectively. CONCLUSIONS: The survival score developed in our study was validated in a large number of foals with a wide range of diseases and can be easily implemented using data available in most equine hospitals. GBM was a useful tool to develop the survival score. Further evaluation of this scoring system in field conditions is needed.
Bhutada, Nilesh S; Rollins, Brent L; Perri, Matthew
2017-04-01
A randomized, posttest-only online survey study of adult U.S. consumers determined the advertising effectiveness (attitude toward ad, brand, company, spokes-characters, attention paid to the ad, drug inquiry intention, and perceived product risk) of animated spokes-characters in print direct-to-consumer (DTC) advertising of prescription drugs and the moderating effects of consumers' involvement. Consumers' responses (n = 490) were recorded for animated versus nonanimated (human) spokes-characters in a fictitious DTC ad. Guided by the elaboration likelihood model, data were analyzed using a 2 (spokes-character type: animated/human) × 2 (involvement: high/low) factorial multivariate analysis of covariance (MANCOVA). The MANCOVA indicated significant main effects of spokes-character type and involvement on the dependent variables after controlling for covariate effects. Of the several ad effectiveness variables, consumers only differed on their attitude toward the spokes-characters between the two spokes-character types (specifically, more favorable attitudes toward the human spokes-character). Apart from perceived product risk, high-involvement consumers reacted more favorably to the remaining ad effectiveness variables compared to the low-involvement consumers, and exhibited significantly stronger drug inquiry intentions during their next doctor visit. Further, the moderating effect of consumers' involvement was not observed (nonsignificant interaction effect between spokes-character type and involvement).
Phoebe L. Zarnetske; Thomas C., Jr. Edwards; Gretchen G. Moisen
2007-01-01
Estimating species likelihood of occurrence across extensive landscapes is a powerful management tool. Unfortunately, the occurrence data available for landscape-scale modeling are often lacking and usually only in the form of observed presences. Ecologically based pseudo-absence points were generated from within habitat envelopes to accompany presence-only data in habitat...
De Rooi, J.J.; Van der Pers, N.M.; Hendrikx, R.W.A.; Delhez, R.; Bottger, A.J.; Eilers, P.H.C.
2014-01-01
X-ray diffraction scans consist of series of counts; these numbers obey Poisson distributions with varying expected values. These scans are often smoothed, and the Kα2 component is removed. This article proposes a framework in which both issues are treated. Penalized likelihood estimation is used to
Empirical likelihood method in survival analysis
Zhou, Mai
2015-01-01
Add the empirical likelihood to your nonparametric toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric
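For readers new to the method, the basic uncensored empirical likelihood for a mean fits in one function: profile out the probability weights with a Lagrange multiplier and report -2 log R, which is asymptotically chi-square. A sketch under the usual convex-hull condition (the function name is mine; censored-data versions, as in the book, are more involved):

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean: solve
    sum((x_i - mu) / (1 + t*(x_i - mu))) = 0 for the multiplier t,
    giving weights p_i = 1 / (n * (1 + t*(x_i - mu)))."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu outside the convex hull
    lo = -1.0 / z.max() + 1e-10            # keep all 1 + t*z_i > 0
    hi = -1.0 / z.min() - 1e-10
    t = brentq(lambda t: np.sum(z / (1.0 + t * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(t * z))

x = np.array([1.0, 2.0, 3.0, 4.0])
stat = el_log_ratio(x, 2.5)   # 0 at the sample mean
```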
A moving blocks empirical likelihood method for longitudinal data.
Qiu, Jin; Wu, Lang
2015-09-01
In the analysis of longitudinal or panel data, neglecting the serial correlations among the repeated measurements within subjects may lead to inefficient inference. In particular, when the number of repeated measurements is large, it may be desirable to model the serial correlations more generally. An appealing approach is to accommodate the serial correlations nonparametrically. In this article, we propose a moving blocks empirical likelihood method for general estimating equations. Asymptotic results are derived under sequential limits. Simulation studies are conducted to investigate the finite sample performances of the proposed methods and compare them with the elementwise and subject-wise empirical likelihood methods of Wang et al. (2010, Biometrika 97, 79-93) and the block empirical likelihood method of You et al. (2006, Can. J. Statist. 34, 79-96). An application to an AIDS longitudinal study is presented. © 2015, The International Biometric Society.
Directory of Open Access Journals (Sweden)
Jesús Vega Encabo
2015-11-01
In this paper, I claim that subjectivity is a way of being that is constituted through a set of practices in which the self is subject to the dangers of fictionalizing and plotting her life and self-image. I examine some ways of becoming subject through narratives and through theatrical performance before others. Through these practices, a real and active subjectivity is revealed, capable of self-knowledge and self-transformation.
Gong, Qi; Schaubel, Douglas E
2018-01-22
Mean survival time is often of inherent interest in medical and epidemiologic studies. In the presence of censoring and when covariate effects are of interest, Cox regression is the strong default, but mostly due to convenience and familiarity. When survival times are uncensored, covariate effects can be estimated as differences in mean survival through linear regression. Tobit regression can validly be performed through maximum likelihood when the censoring times are fixed (i.e., known for each subject, even in cases where the outcome is observed). However, Tobit regression is generally inapplicable when the response is subject to random right censoring. We propose Tobit regression methods based on weighted maximum likelihood which are applicable to survival times subject to both fixed and random censoring times. Under the proposed approach, known right censoring is handled naturally through the Tobit model, with inverse probability of censoring weighting used to overcome random censoring. Essentially, the re-weighted data are intended to represent what would have been observed in the absence of random censoring. We develop methods for estimating the Tobit regression parameter, then the population mean survival time. A closed form large-sample variance estimator is proposed for the regression parameter estimator, with a semiparametric bootstrap standard error estimator derived for the population mean. The proposed methods are easily implementable using standard software. Finite-sample properties are assessed through simulation. The methods are applied to a large cohort of patients wait-listed for kidney transplantation. Copyright © 2018 John Wiley & Sons, Ltd.
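The weighted Tobit likelihood described above is straightforward to sketch. Here is an illustrative Python version with fixed censoring at 3.0 and unit weights; the simulation and names are mine, and in the proposed method the inverse-probability-of-censoring weights would replace the unit weights.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def tobit_negloglik(params, X, y, delta, w):
    """Weighted Tobit negative log-likelihood with right censoring:
    delta = 1 marks observed responses (normal density term), delta = 0
    marks censored ones (upper-tail probability); w carries the weights."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keeps sigma positive
    r = (y - X @ beta) / sigma
    ll = np.where(delta == 1,
                  norm.logpdf(r) - log_sigma,  # density of an exact time
                  norm.logsf(r))               # P(T > censoring time)
    return -np.sum(w * ll)

# simulated latent times T = 1 + 2x + 0.5*noise, fixed censoring at 3.0
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
T = X @ np.array([1.0, 2.0]) + 0.5 * rng.normal(size=2000)
y, delta = np.minimum(T, 3.0), (T < 3.0).astype(int)
fit = minimize(tobit_negloglik, np.zeros(3), method="BFGS",
               args=(X, y, delta, np.ones(2000)))
```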
Thomas, J. M.; Hanagud, S.
1975-01-01
The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.
Using Topic Models to Interpret MEDLINE's Medical Subject Headings
Newman, David; Karimi, Sarvnaz; Cavedon, Lawrence
We consider the task of interpreting and understanding a taxonomy of classification terms applied to documents in a collection. In particular, we show how unsupervised topic models are useful for interpreting and understanding MeSH, the Medical Subject Headings applied to articles in MEDLINE. We introduce the resampled author model, which captures some of the advantages of both the topic model and the author-topic model. We demonstrate how topic models complement and add to the information conveyed in a traditional listing and description of a subject heading hierarchy.
2013-03-01
Proliferation Treaty; OSINT, Open Source Intelligence; SAFF, Safing, Arming, Fuzing, Firing; SIAM, Situational Influence Assessment Module; SME, Subject...expertise. One of the analysts can also be trained to tweak CAST logic as needed. In this initial build, only open-source intelligence (OSINT) will
Directory of Open Access Journals (Sweden)
Seung Oh Lee
2013-10-01
Full Text Available Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, provided useful tools for analyzing the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
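A minimal GLUE sketch, with an invented toy "hydraulic model" standing in for the 1D model and the Landsat widths; every function name, parameter range, and likelihood measure here is a hypothetical illustration, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy "hydraulic model": water-surface width as a function of
# discharge Q and a roughness parameter n (stand-in for a 1D model run).
def modeled_width(Q, n):
    return 10.0 * (n * Q) ** 0.4

true_Q, true_n = 500.0, 0.035
obs_width = modeled_width(true_Q, true_n) + rng.normal(0, 0.5, size=5)  # "remote-sensing" widths

# GLUE: sample candidate parameter sets from priors, score each with an
# informal likelihood measure, and keep the "behavioral" sets.
Q_samples = rng.uniform(100, 1000, 20000)
n_samples = rng.uniform(0.02, 0.05, 20000)
err_var = np.array([np.mean((modeled_width(Q, n) - obs_width) ** 2)
                    for Q, n in zip(Q_samples, n_samples)])
L = 1.0 / err_var                          # informal likelihood measure
behavioral = L > np.quantile(L, 0.95)      # retain the top 5% as behavioral

w = L[behavioral] / L[behavioral].sum()    # likelihood weights
order = np.argsort(Q_samples[behavioral])
cdf = np.cumsum(w[order])
Qb = Q_samples[behavioral][order]
lo, med, hi = (Qb[np.searchsorted(cdf, q)] for q in (0.05, 0.5, 0.95))
```

The resulting (lo, hi) interval is the GLUE uncertainty bound on discharge; its width reflects both observation noise and the Q-n trade-off in the model, which is why the choice of likelihood measure matters so much.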
2015-08-01
EXECUTIVE SUMMARY: Applying a generalized linear model (GLM) with a logit or probit link is a routine... The binomial log-likelihood is log L = Σ [y log θ + (1 − y) log(1 − θ)] (2). The canonical link derived for the logit from eq 2 is expressed as log(θ/(1 − θ)) (3). The logit model fits the θ for a... MAXIMUM LIKELIHOOD ESTIMATION FOR A DOSE-RESPONSE MODEL, ECBC-TN-068, Kyong H. Park and Steven J. Lagan, Research and Technology Directorate, August 2015. Approved for public release.
Obtaining reliable likelihood ratio tests from simulated likelihood functions.
Directory of Open Access Journals (Sweden)
Laura Mørch Andersen
Full Text Available MIXED MODELS: Models allowing for continuous heterogeneity by assuming that the values of one or more parameters follow a specified distribution have become increasingly popular. This is known as 'mixing' parameters, and it is standard practice by researchers--and the default option in many statistical programs--to base test statistics for mixed models on simulations using asymmetric draws (e.g., Halton draws). PROBLEM 1: INCONSISTENT LR TESTS DUE TO ASYMMETRIC DRAWS: This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters, this practice is very likely to cause misleading test results for the numbers of draws usually used today. The paper illustrates that increasing the number of draws is a very inefficient solution strategy, requiring very large numbers of draws to ensure against misleading test statistics. The main conclusion of this paper is that the problem can be solved completely by using fully antithetic draws, and that using one-dimensionally antithetic draws is not enough to solve the problem. PROBLEM 2: MAINTAINING THE CORRECT DIMENSIONS WHEN REDUCING THE MIXING DISTRIBUTION: A second point of the paper is that even when fully antithetic draws are used, models reducing the dimension of the mixing distribution must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. Again, this is not standard in research or statistical programs. The paper therefore recommends using fully antithetic draws replicating the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood, and that this should become the default option in statistical programs. JEL classification: C15; C25.
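The distinction between one-dimensionally and fully antithetic draws can be illustrated directly. In d dimensions, fully antithetic draws apply all 2^d sign-flip patterns to each base draw; the sketch below (plain pseudo-random normals standing in for Halton draws) shows that this zeroes out the odd sample moments whose nonzero values drive the misleading test statistics discussed above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
d = 2                                  # dimensions of the mixing distribution
base = rng.standard_normal((50, d))    # base draws (Halton draws in practice)

# Fully antithetic: every combination of sign flips across all d dimensions.
signs = np.array(list(itertools.product((1.0, -1.0), repeat=d)))  # 2^d patterns
fully = (base[:, None, :] * signs[None, :, :]).reshape(-1, d)

# One-dimensionally antithetic: flip only the first coordinate.
one_dim = np.concatenate([base, base * np.array([-1.0, 1.0])])

print(fully.mean(axis=0))     # ~[0, 0]: all odd moments cancel by construction
print(one_dim.mean(axis=0))   # second coordinate mean is generally nonzero
```

With simulated likelihoods, these residual nonzero moments enter the estimated standard deviations of the mixed parameters, which is exactly where the paper locates the LR-test inconsistency.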
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternative noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.
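A small Monte Carlo sketch of the underestimation effect described above: if the background noise is a pure random walk but the rate uncertainty is computed under a white-noise assumption, the nominal standard error is far too optimistic. The series length, noise scale, and replication count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 500
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])
xtx_inv = np.linalg.inv(X.T @ X)

slopes, nominal_se = [], []
for _ in range(reps):
    noise = np.cumsum(rng.normal(0, 1.0, n))   # random-walk noise, no true trend
    beta = xtx_inv @ X.T @ noise               # OLS trend fit
    resid = noise - X @ beta
    s2 = resid @ resid / (n - 2)               # white-noise variance estimate
    slopes.append(beta[1])
    nominal_se.append(np.sqrt(s2 * xtx_inv[1, 1]))

# Ratio of the true (empirical) rate scatter to the nominal white-noise SE;
# values well above 1 mean the white-noise model understates the uncertainty.
ratio = np.std(slopes) / np.mean(nominal_se)
print(round(ratio, 2))
```

The same experiment with flicker noise in place of the random walk gives a smaller but still substantial ratio, which is the practical motivation for the noise-model comparison in the abstract.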
Wang, Wenhui; Nunez-Iglesias, Juan; Luan, Yihui; Sun, Fengzhu
2009-09-03
Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
Directory of Open Access Journals (Sweden)
Luan Yihui
2009-09-01
Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
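The 2nd-order degree-correlation statistic used above amounts to tabulating, for every edge, the degrees of its two endpoints (the "dK-2" joint degree counts), extending the plain degree distribution. A minimal sketch on a made-up five-node graph:

```python
from collections import Counter

# Toy undirected graph as an edge list (hypothetical network).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")]

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# 1st-order statistic: the degree distribution (how many nodes of each degree).
degree_dist = Counter(deg.values())

# 2nd-order statistic: joint degree counts over edges (the "dK-2" summary):
# how many edges connect a degree-d1 node to a degree-d2 node.
jdd = Counter()
for u, v in edges:
    d1, d2 = sorted((deg[u], deg[v]))
    jdd[(d1, d2)] += 1

print(degree_dist)
print(jdd)
```

Third-order statistics extend this to degree triples over connected node triplets; the abstract's finding is that the extra parameters this introduces can hurt rather than help.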
Carreras, B. A.; Newman, D. E.; Dobson, Ian; Zeidenberg, Matthew
2009-12-01
A simple dynamic model of agent operation of an infrastructure system is presented. This system evolves over a long time scale by a daily increase in consumer demand that raises the overall load on the system and an engineering response to failures that involves upgrading of the components. The system is controlled by adjusting the upgrading rate of the components and the replacement time of the components. Two agents operate the system. Their behavior is characterized by their risk-averse and risk-taking attitudes while operating the system, their response to large events, and the effect of learning time on adapting to new conditions. A risk-averse operation causes a reduction in the frequency of failures and in the number of failures per unit time. However, risk aversion brings an increase in the probability of extreme events.
Gorfine, Malka; Zucker, David M; Hsu, Li
2009-01-01
In this work we deal with correlated failure time (age at onset) data arising from population-based case-control studies, where case and control probands are selected by population-based sampling and an array of risk factor measures is collected for both cases and controls and their relatives. Parameters of interest are effects of risk factors on the failure time hazard function and within-family dependencies among failure times after adjusting for the risk factors. Due to the retrospective sampling scheme, large sample theory for existing methods has not been established. We develop a novel technique for estimating the parameters of interest under a general semiparametric shared frailty model. We also present a simple, easily computed, and non-iterative nonparametric estimator for the cumulative baseline hazard function. We provide rigorous large sample theory for the proposed method. We also present simulation results and a real data example for illustrating the utility of the proposed method.
Likelihood estimators for multivariate extremes
Huser, Raphaël
2015-11-17
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
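For the bivariate logistic model with unit Fréchet margins, the joint distribution of componentwise maxima is G(x, y) = exp{-(x^(-1/α) + y^(-1/α))^α}, and the likelihood uses its mixed partial derivative. Below is a sketch of that density from the standard formulas (not code from the paper); at α = 1 it factorizes into independent Fréchet margins, which gives a quick correctness check.

```python
import numpy as np

def logistic_density(x, y, alpha):
    """Bivariate logistic max-stable density, unit Frechet margins, 0 < alpha <= 1."""
    a = alpha
    s = x ** (-1 / a) + y ** (-1 / a)
    V = s ** a                                          # exponent measure
    Vx = -s ** (a - 1) * x ** (-1 / a - 1)              # dV/dx
    Vy = -s ** (a - 1) * y ** (-1 / a - 1)              # dV/dy
    Vxy = ((a - 1) / a) * s ** (a - 2) * x ** (-1 / a - 1) * y ** (-1 / a - 1)
    return np.exp(-V) * (Vx * Vy - Vxy)

def frechet_density(z):
    return z ** (-2) * np.exp(-1 / z)

# alpha = 1 is complete independence: the density factorises into the margins.
x, y = 1.3, 0.7
print(np.isclose(logistic_density(x, y, 1.0),
                 frechet_density(x) * frechet_density(y)))
```

Summing log densities over observed componentwise maxima and maximizing over α (e.g., with scipy.optimize) then gives the block-maxima likelihood estimator the paper compares against threshold and point-process variants.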
Obtaining reliable likelihood ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
2014-01-01
Mixed models: Models allowing for continuous heterogeneity by assuming that the value of one or more parameters follow a specified distribution have become increasingly popular. This is known as 'mixing' parameters, and it is standard practice by researchers - and the default option in many statistical programs - to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). Problem 1: Inconsistent LR tests due to asymmetric draws: This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters this practice is very likely to cause misleading test results for the number of draws usually used today. The paper illustrates that increasing the number of draws is a very inefficient solution strategy requiring very large numbers of draws to ensure against misleading test statistics. The main conclusion of this paper...
Ishikawa, Sohta A; Inagaki, Yuji; Hashimoto, Tetsuo
2012-01-01
In phylogenetic analyses of nucleotide sequences, 'homogeneous' substitution models, which assume the stationarity of base composition across a tree, are widely used, although individual sequences may bear distinctive base frequencies. In the worst-case scenario, a homogeneous model-based analysis can yield an artifactual union of two distantly related sequences that achieved similar base frequencies in parallel. Such potential difficulty can be countered by two approaches, 'RY-coding' and 'non-homogeneous' models. The former approach converts the four bases into purines and pyrimidines to normalize base frequencies across a tree, while the heterogeneity in base frequency is explicitly incorporated in the latter approach. The two approaches have been applied to real-world sequence data; however, their basic properties have not been fully examined by pioneering simulation studies. Here, we assessed the performance of maximum-likelihood analyses incorporating RY-coding and a non-homogeneous model (RY-coding and non-homogeneous analyses) on simulated data with parallel convergence to similar base composition. Both RY-coding and non-homogeneous analyses showed superior performance compared with homogeneous model-based analyses. Curiously, the performance of the RY-coding analysis appeared to be significantly affected by the setting of the substitution process for sequence simulation relative to that of the non-homogeneous analysis. The performance of the non-homogeneous analysis was also validated by analyzing a real-world sequence data set with significant base heterogeneity.
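RY-coding itself is a one-line transformation: collapse A/G to R (purine) and C/T to Y (pyrimidine), which equalizes base composition across sequences before tree inference. A minimal sketch with made-up sequences:

```python
# RY-coding collapses the four bases to purine (R: A/G) and pyrimidine (Y: C/T),
# normalising base composition across sequences before tree inference.
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_encode(seq):
    return "".join(RY[b] for b in seq)

# Two hypothetical sequences with very different base frequencies...
s1, s2 = "AAGGAAGGCT", "GGGGAAAATC"
print(ry_encode(s1))   # 'RRRRRRRRYY'
print(ry_encode(s2))   # 'RRRRRRRRYY'
# ...have identical R/Y composition after recoding: the compositional signal
# that can mislead homogeneous models is removed, at the cost of discarding
# within-purine and within-pyrimidine substitution information.
```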
Models of subjective response to in-flight motion data
Rudrapatna, A. N.; Jacobson, I. D.
1973-01-01
Mathematical relationships between subjective comfort and environmental variables in an air transportation system are investigated. As a first step in model building, only the motion variables are incorporated and sensitivities are obtained using stepwise multiple regression analysis. The data for these models have been collected from commercial passenger flights. Two models are considered. In the first, subjective comfort is assumed to depend on rms values of the six-degrees-of-freedom accelerations. The second assumes a Rustenburg type human response function in obtaining frequency weighted rms accelerations, which are used in a linear model. The form of the human response function is examined and the results yield a human response weighting function for different degrees of freedom.
Pharmacokinetic modeling of glimepiride plasma concentration in healthy subjects.
Antonesi, Ioana Maria; Potur, Roxana; Potur, D M; Ghiciuc, Cristina Mihaela; Lupuşoru, Cătălina Elena
2011-01-01
To determine the pharmacokinetics of glimepiride, a sulfonylurea antidiabetic agent, after single dose administration in healthy subjects. Pharmacokinetic data for modeling were extracted from a single-center, randomized, single-dose, fasting state, two-way crossover bioequivalence study on 4 mg glimepiride in 24 healthy subjects. Plasma concentrations of glimepiride were measured using a validated LC/MS/MS method. The pharmacokinetic parameters were calculated using non-compartmental analysis. Different pharmacokinetic models were tested to evaluate the pharmacokinetics of glimepiride. The optimal model was chosen based on Akaike's Information Criterion. Compartmental analysis demonstrated that oral glimepiride tablets follow a one-compartment open model with rapid first-order absorption and a short half-life.
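A one-compartment open model with first-order absorption has the closed form C(t) = F·D·ka / (V·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)), with peak time tmax = ln(ka/ke)/(ka − ke). The sketch below uses made-up parameter values, not those estimated in the study.

```python
import numpy as np

def conc(t, dose, F, V, ka, ke):
    """One-compartment open model with first-order absorption (requires ka != ke)."""
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative (not study-derived) parameters for a 4 mg oral dose:
# bioavailability F, volume V (L), absorption and elimination rates (1/h).
dose, F, V, ka, ke = 4.0, 1.0, 20.0, 1.2, 0.14
t = np.linspace(0.0, 24.0, 2401)
C = conc(t, dose, F, V, ka, ke)

tmax = np.log(ka / ke) / (ka - ke)   # analytic time of peak concentration
print(round(tmax, 2), round(float(C.max()), 3))
```

Fitting such a model to observed concentrations (e.g., by nonlinear least squares) and comparing AIC across candidate models is the standard route the abstract describes.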
A composite likelihood approach for spatially correlated survival data.
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
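The FGM pairwise composite likelihood can be sketched with known uniform margins and a constant dependence parameter. Note the simplifications: the paper models the dependence parameter as a function of pairwise geographic and demographic distances and estimates marginal survival functions, whereas the grid search below uses a single constant θ and known margins.

```python
import numpy as np

rng = np.random.default_rng(4)

def fgm_log_density(u, v, theta):
    """FGM copula density c(u,v) = 1 + theta*(1-2u)*(1-2v), |theta| <= 1."""
    return np.log1p(theta * (1 - 2 * u) * (1 - 2 * v))

# Pairwise composite log-likelihood: sum the bivariate log-densities over pairs.
# With survival data one would plug in estimated marginal survival functions;
# here the margins are taken as known uniforms for simplicity.
u = rng.uniform(size=5000)
v = rng.uniform(size=5000)        # independent pairs, i.e. true theta = 0

grid = np.linspace(-0.99, 0.99, 199)
cl = [float(fgm_log_density(u, v, th).sum()) for th in grid]
theta_hat = float(grid[int(np.argmax(cl))])
print(round(theta_hat, 2))
```

The estimator should land near the true θ = 0 here; the consistency and asymptotic normality claimed in the abstract are what justify this kind of pairwise construction more generally.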
Yuan, Jinghe; He, Kangmin; Cheng, Ming; Yu, Jianqiang; Fang, Xiaohong
2014-08-01
The step analysis of single-molecule photobleaching data offers a new approach for studying protein stoichiometry under physiological conditions. As such, it is important to develop suitable algorithms that can accurately extract the step events from the noisy single-molecule data. Herein, we report an HMM (hidden Markov model) method that combines maximum-likelihood clustering for initializing the emission-probability distribution of the HMMs with an extended silhouette clustering criterion for estimating the state number of single molecules. In this way, the limitations of the standard HMM in processing typical single-molecule data with short sequences are overcome. By using this method, the number and time points of the step events are automatically determined, without the introduction of any subjectivity. Simulation experiments on the experimental photobleaching data indicate that our method is very effective and robust in the analysis of single-molecule fluorescence photobleaching curves if the signal-to-noise ratio is larger than 2:1. This method was employed for processing photobleaching data obtained from single-molecule fluorescence imaging of transforming growth factor type II receptors on a cell surface. This method is also expected to be applicable to the analysis of other stepwise events. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
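Setting the HMM machinery aside, the core maximum-likelihood idea for a single step event is simple: under Gaussian noise, the ML step position is the one minimizing the residual sum of squares of a one-step piecewise-constant fit. A sketch on a simulated trace at the 2:1 signal-to-noise ratio mentioned above (the trace and its parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy single-step trace (e.g. one photobleaching event), step/noise = 2.
n, step_at = 200, 120
signal = np.where(np.arange(n) < step_at, 2.0, 1.0)
trace = signal + rng.normal(0, 0.5, n)

# Under Gaussian noise, the maximum-likelihood step position minimises the
# residual sum of squares of a piecewise-constant (one-step) fit.
def rss_at(k):
    left, right = trace[:k], trace[k:]
    return ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()

k_hat = min(range(2, n - 2), key=rss_at)
print(k_hat)
```

An HMM generalizes this to an unknown number of levels and steps, which is where the initialization and state-number selection problems the abstract addresses arise.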
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters this practice is very likely to cause misleading test results for the number of draws usually used today. The paper shows that increasing the number of draws is a very inefficient solution strategy requiring very large numbers of draws to ensure against misleading test statistics. The paper shows that using one dimensionally antithetic draws does not solve the problem but that the problem can be solved completely by using fully antithetic draws. The paper also shows that even when fully antithetic draws are used, models testing away mixing dimensions must replicate the relevant dimensions...
Chen, Qingxia; Ibrahim, Joseph G
2014-07-01
Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
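The MAR result for the linear model is easy to check by simulation: when missingness of the response depends only on observed covariates, complete-case least squares remains unbiased. A minimal sketch with an illustrative (invented) missingness mechanism:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)

# Responses missing at random: missingness depends only on the observed covariate.
p_miss = 1 / (1 + np.exp(-x))          # higher x, more likely missing
observed = rng.uniform(size=n) > p_miss

# Complete-case OLS uses only the rows with observed responses.
X = np.column_stack([np.ones(int(observed.sum())), x[observed]])
beta = np.linalg.lstsq(X, y[observed], rcond=None)[0]
print(beta.round(2))
```

Despite roughly half the responses being discarded non-uniformly in x, the complete-case estimates recover the true intercept and slope, consistent with the asymptotic equivalence the abstract establishes.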
Andrew D. Richardson; David Y. Hollinger
2005-01-01
Whether the goal is to fill gaps in the flux record, or to extract physiological parameters from eddy covariance data, researchers are frequently interested in fitting simple models of ecosystem physiology to measured data. Presently, there is no consensus on the best models to use, or the ideal optimization criteria. We demonstrate that, given our estimates of the...
Mathematical problem solving, modelling, applications, and links to other subjects
Blum, Werner; Niss, Mogens
1989-01-01
The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of development...
DEFF Research Database (Denmark)
Nielsen, Anders; Lewy, Peter
2002-01-01
A simulation study was carried out for a separable fish stock assessment model including commercial and survey catch-at-age and effort data. All catches are considered stochastic variables subject to sampling and process variations. The results showed that the Bayes estimator of spawning biomass... simulations were based on the North Sea plaice (Pleuronectes platessa) stock and fishery data.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Liu, Junhui
2012-01-01
The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…
Boesten, J.J.T.I.
2000-01-01
User-dependent subjectivity in the process of testing pesticide leaching models is relevant because it may result in wrong interpretation of model tests. About 20 modellers used the same data set to test pesticide leaching models (one or two models per modeller). The data set included laboratory
Modelling of subject specific based segmental dynamics of knee joint
Nasir, N. H. M.; Ibrahim, B. S. K. K.; Huq, M. S.; Ahmad, M. K. I.
2017-09-01
This study determines segmental dynamics parameters based on a subject-specific method. Five hemiplegic patients participated in the study, two men and three women. Their ages ranged from 50 to 60 years, weights from 60 to 70 kg, and heights from 145 to 170 cm. The sample group included patients with different sides of stroke. The segmental dynamics parameters describing knee joint function were obtained via Winter's measurements, and the model was generated via Kane's equations of motion. Inertial parameters in the form of anthropometry can be identified and measured by applying standard human dimensions to subjects in a hemiplegic condition. The inertial parameters are the location of the centre of mass (COM) along the length of the limb segment, the moment of inertia around the COM, and the masses of the shank and foot, which are required to generate accurate equations of motion. This investigation also identified several advantages of employing Winter's anthropometric table together with Kane's equations of motion in movement biomechanics. A general procedure is presented to yield accurate estimates of the inertial parameters of the knee joint for subjects with a history of stroke.
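Winter's anthropometric table expresses segment length, mass, COM location, and radius of gyration as fixed fractions of body height and mass. The sketch below uses commonly tabulated shank coefficients; treat the exact values as illustrative rather than as the study's numbers.

```python
# Winter-style anthropometric proportions (commonly tabulated values; treat the
# exact coefficients as illustrative) give segment inertial parameters from
# total body mass M (kg) and height H (m).
def shank_parameters(M, H):
    L = 0.246 * H            # shank length as a fraction of height
    m = 0.0465 * M           # shank mass as a fraction of body mass
    com = 0.433 * L          # centre of mass, measured from the proximal end
    r_gyr = 0.302 * L        # radius of gyration about the COM
    I = m * r_gyr ** 2       # moment of inertia about the COM
    return m, com, I

# A 65 kg, 1.60 m subject, within the ranges reported in the study.
m, com, I = shank_parameters(65.0, 1.60)
print(round(m, 3), round(com, 3), round(I, 4))
```

These three quantities (segment mass, COM position, and moment of inertia) are exactly the inputs Kane's equations need for the knee-joint dynamics.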
McGirt, Matthew J; Sivaganesan, Ahilan; Asher, Anthony L; Devin, Clinton J
2015-12-01
OBJECT Lumbar spine surgery has been demonstrated to be efficacious for many degenerative spine conditions. However, there is wide variability in outcome after spine surgery at the individual patient level. All stakeholders in spine care will benefit from identification of the unique patient or disease subgroups that are least likely to benefit from surgery, are prone to costly complications, and have increased health care utilization. There remains a large demand for individual patient-level predictive analytics to guide decision support to optimize outcomes at the patient and population levels. METHODS One thousand eight hundred three consecutive patients undergoing spine surgery for various degenerative lumbar diagnoses were prospectively enrolled and followed for 1 year. A comprehensive patient interview and health assessment was performed at baseline and at 3 and 12 months after surgery. All predictive covariates were selected a priori. Eighty percent of the sample was randomly selected for model development, and 20% for model validation. Linear regression was performed with Bayesian model averaging to model 12-month ODI (Oswestry Disability Index). Logistic regression with Bayesian model averaging was used to model likelihood of complications, 30-day readmission, need for inpatient rehabilitation, and return to work. Goodness-of-fit was assessed via R(2) for 12-month ODI and via the c-statistic, area under the receiver operating characteristic curve (AUC), for the categorical endpoints. Discrimination (predictive performance) was assessed, using R(2) for the ODI model and the c-statistic for the categorical endpoint models. Calibration was assessed using a plot of predicted versus observed values for the ODI model and the Hosmer-Lemeshow test for the categorical endpoint models. RESULTS On average, all patient-reported outcomes (PROs) were improved after surgery (ODI baseline vs 12 month: 50.4 vs 29.5%, p work, and 449 (24.5%) experienced an unplanned outcome
Likelihood devices in spatial statistics
Zwet, E.W. van
1999-01-01
One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments
Xu, Lei; Johnson, Timothy D.; Nichols, Thomas E.; Nee, Derek E.
2010-01-01
The aim of this work is to develop a spatial model for multi-subject fMRI data. There has been extensive work on univariate modeling of each voxel for single and multi-subject data, some work on spatial modeling of single-subject data, and some recent work on spatial modeling of multi-subject data. However, there has been no work on spatial models that explicitly account for inter-subject variability in activation locations. In this work, we use the idea of activation centers and model the inter-subject variability in activation locations directly. Our model is specified in a Bayesian hierarchical framework which allows us to draw inferences at all levels: the population level, the individual level and the voxel level. We use Gaussian mixtures for the probability that an individual has a particular activation. This helps answer an important question which is not addressed by any of the previous methods: what proportion of subjects had significant activity in a given region? Our approach incorporates the unknown number of mixture components into the model as a parameter whose posterior distribution is estimated by reversible jump Markov chain Monte Carlo. We demonstrate our method with an fMRI study of resolving proactive interference and show dramatically better precision of localization with our method relative to the standard mass-univariate method. Although we are motivated by fMRI data, this model could easily be modified to handle other types of imaging data. PMID:19210732
Boyce, Jessica A; Kuijer, Roeline G
2014-04-01
Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images; whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literacy interventions.
Multi-Subject Analyses with Dynamic Causal Modeling
Kasess, Christian Herbert; Stephan, Klaas Enno; Weissenbacher, Andreas; Pezawas, Lukas; Moser, Ewald; Windischberger, Christian
2010-01-01
Currently, most studies that employ dynamic causal modeling (DCM) use random-effects (RFX) analysis to make group inferences, applying a second-level frequentist test to subjects' parameter estimates. In some instances, however, fixed-effects (FFX) analysis can be more appropriate. Such analyses can be implemented by combining the subjects' posterior densities according to Bayes' theorem either on a multivariate (Bayesian parameter averaging or BPA) or univariate basis (posterior variance weighted averaging or PVWA), or by applying DCM to time-series averaged across subjects beforehand (temporal averaging or TA). While all these FFX approaches have the advantage of allowing for Bayesian inferences on parameters, a systematic comparison of their statistical properties has been lacking so far. Based on simulated data generated from a two-region network, we examined the effects of signal-to-noise ratio (SNR) and population heterogeneity on group-level parameter estimates. Data sets were simulated assuming either a homogeneous large population (N=60) with constant connectivities across subjects or a heterogeneous population with varying parameters. TA showed advantages at lower SNR but is limited in its applicability. Because BPA and PVWA take into account posterior (co)variance structure, they can yield non-intuitive results when only considering posterior means. This problem is relevant for high-SNR data, pronounced parameter interdependencies and when FFX assumptions are violated (i.e. inhomogeneous groups). It diminishes with decreasing SNR and is absent for models with independent parameters or when FFX assumptions are appropriate. Group results obtained with these FFX approaches should therefore be interpreted carefully by considering estimates of dependencies among model parameters. PMID:19941963
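The univariate fixed-effects combination (in the spirit of PVWA) amounts to precision-weighted averaging of the subjects' Gaussian posteriors: the combined precision is the sum of the individual precisions, ignoring the correction for the repeatedly used prior. A toy sketch with invented values for a single connectivity parameter:

```python
def combine_posteriors(means, variances):
    """Combine per-subject Gaussian posteriors over one parameter by
    precision-weighted averaging (a simplified FFX combination; the
    shared-prior correction is ignored here)."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total
    return mean, 1.0 / total

# Hypothetical posterior means and variances of one connection strength
# for three subjects.
means = [0.40, 0.55, 0.48]
variances = [0.04, 0.09, 0.02]
m, v = combine_posteriors(means, variances)
print(round(m, 3), round(v, 4))  # -> 0.466 0.0116
```

Subjects with tighter posteriors dominate the group estimate, which is exactly why heterogeneous groups can produce non-intuitive FFX results.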
Roy, Surupa; Banerjee, Tathagata
2009-06-01
A multivariate probit model for correlated binary responses given the predictors of interest has been considered. Some of the responses are subject to classification errors and hence are not directly observable. Also measurements on some of the predictors are not available; instead the measurements on its surrogate are available. However, the conditional distribution of the unobservable predictors given the surrogate is completely specified. Models are proposed taking into account either or both of these sources of errors. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example.
Physical Modelling of Bucket Foundations Subjected to Axial Loading
DEFF Research Database (Denmark)
Vaitkunaite, Evelina
Compared to oil and gas structures, marine renewable energy devices are usually much lighter, operate in shallower waters and are subjected to severe cyclic loading and dynamic excitations. These factors result in different structural behaviours. Bucket foundations are a potentially cost-effective solution for various offshore structures, and not least marine renewables. The present thesis focuses on several critical design problems related to the behaviour of bucket foundations exposed to tensile loading. Among those are the soil-structure interface parameters, tensile loading under various displacement rates and tensile cyclic loading. A new laboratory testing facility is constructed allowing large scale foundation model testing under long-term cyclic loadings. Another test set-up, a pressure tank, is employed for the displacement rate analysis. The extensive testing campaign provides valuable...
Centrifuge modeling of buried continuous pipelines subjected to normal faulting
Moradi, Majid; Rojhani, Mahdi; Galandarzadeh, Abbas; Takada, Shiro
2013-03-01
Seismic ground faulting is the greatest hazard for continuous buried pipelines. Over the years, researchers have attempted to understand pipeline behavior mostly via numerical modeling such as the finite element method. The lack of well-documented field case histories of pipeline failure from seismic ground faulting and the cost and complicated facilities needed for full-scale experimental simulation mean that a centrifuge-based method to determine the behavior of pipelines subjected to faulting is best to verify numerical approaches. This paper presents results from three centrifuge tests designed to investigate continuous buried steel pipeline behavior subjected to normal faulting. The experimental setup and procedure are described and the recorded axial and bending strains induced in a pipeline are presented and compared to those obtained via analytical methods. The influence of factors such as faulting offset, burial depth and pipe diameter on the axial and bending strains of pipes and on ground soil failure and pipeline deformation patterns are also investigated. Finally, the tensile rupture of a pipeline due to normal faulting is investigated.
Likelihood Analysis of the CMSSM Parameter Space
Ellis, John; Olive, Keith A.; Santoso, Yudi; Spanos, Vassilis C.
2004-01-01
We present a likelihood analysis of the parameter space of the constrained minimal supersymmetric extension of the Standard Model (CMSSM), in which the input scalar masses m_0 and fermion masses m_{1/2} are each assumed to be universal. We include the full experimental likelihood function from the LEP Higgs search as well as the likelihood from a global precision electroweak fit. We also include the likelihoods for b to s gamma decay and (optionally) g_mu - 2. For each of these inputs, both the experimental and theoretical errors are treated. We include the systematic errors stemming from the uncertainties in m_t and m_b, which are important for delineating the allowed CMSSM parameter space as well as calculating the relic density of supersymmetric particles. We assume that these dominate the cold dark matter density, with a density in the range favoured by WMAP. We display the global likelihood function along cuts in the (m_{1/2}, m_0) planes for tan beta = 10 and both signs of mu, tan beta = 35, mu 0, whic...
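The way independent constraints enter such a global likelihood can be sketched in a few lines: under Gaussian errors each observable contributes a chi-square term, and the log-likelihoods add. All numbers below are invented placeholders, not values from the analysis:

```python
# Hedged sketch: each independent constraint contributes a Gaussian
# chi-square term; the global log-likelihood is the sum of the terms.

def chi2_term(prediction, measurement, sigma):
    return ((prediction - measurement) / sigma) ** 2

# Hypothetical predictions at one (m_1/2, m_0) point versus hypothetical
# measurements with their combined uncertainties.
constraints = [
    (3.2e-4, 3.5e-4, 0.3e-4),   # e.g. a BR(b -> s gamma)-like observable
    (2.5e-9, 2.7e-9, 0.9e-9),   # e.g. a (g_mu - 2)-like discrepancy
]
chi2 = sum(chi2_term(*c) for c in constraints)
log_like = -0.5 * chi2          # up to an additive constant
print(round(chi2, 3))  # -> 1.049
```

Scanning this global chi-square over a grid of (m_1/2, m_0) points is what produces the likelihood cuts described in the abstract.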
Extended empirical likelihood for estimating equations
Min Tsao; Fan Wu
2014-01-01
We derive an extended empirical likelihood for parameters defined by estimating equations which generalizes the original empirical likelihood to the full parameter space. Under mild conditions, the extended empirical likelihood has all the asymptotic properties of the original empirical likelihood. The first-order extended empirical likelihood is easy to use and substantially more accurate than the original empirical likelihood.
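For the simplest estimating equation, E[X - mu] = 0, the original empirical likelihood ratio at a candidate mean can be profiled by a one-dimensional search for the Lagrange multiplier. The sketch below is the classical construction, not the extended version proposed in the paper; the extension exists precisely because the ratio below is undefined outside the convex hull of the data:

```python
import math

def el_log_ratio(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for the mean mu, via bisection on
    the multiplier lam solving sum((x_i - mu) / (1 + lam*(x_i - mu))) = 0."""
    n = len(x)
    d = [xi - mu for xi in x]
    if max(d) <= 0 or min(d) >= 0:
        # mu outside the convex hull of the data: original EL is undefined
        return float('inf')
    # Bounds keeping every weight in (0, 1]: 1 + lam*d_i >= 1/n.
    lo = (1.0 / n - 1.0) / max(d)
    hi = (1.0 / n - 1.0) / min(d)

    def g(lam):
        return sum(di / (1.0 + lam * di) for di in d)

    while hi - lo > tol:       # g is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    # Clamp tiny negative values caused by floating-point error.
    return max(0.0, 2.0 * sum(math.log(1.0 + lam * di) for di in d))

x = [1.1, 2.3, 0.7, 1.9, 1.4, 2.8, 0.9, 1.6]
print(round(el_log_ratio(x, sum(x) / len(x)), 9))  # -> 0.0 at the sample mean
print(round(el_log_ratio(x, 2.0), 3))              # positive away from it
```

Asymptotically this statistic follows a chi-square(1) distribution, which is what makes EL confidence regions possible without specifying a parametric family.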
User Experience Research: Modelling and Describing the Subjective
Directory of Open Access Journals (Sweden)
Michael Glanznig
2012-10-01
User experience research in the field of human-computer interaction tries to understand how humans experience the interaction with technological artefacts. It is a young and still emerging field that exists in an area of tension. There is no consensus on how the concept of user experience should be defined or on how it should be researched. This paper focuses on two major strands of research in the field that are competing. It tries to give an overview of both and relate them to each other. Both start from the same premise: usability (focusing on performance) is not enough. It is only part of the interaction with technological artefacts. And further: user experience is not very different from experience in general. Then they develop quite different accounts of the concept. While one focuses more on uncovering the objective in the subjective, on the precise and the formal, the other one stresses the ambiguous, the human, and suggests living with the subjectivity that is inherent in the concept of (user) experience. One focuses more on evaluation than design and the other more on design than evaluation. One is a model and the other more a framework of thought. Both can be criticised. The model can be questioned in terms of validity, and the results of the other approach do not easily generalize across contexts, so their reliability can be questioned. Sometimes the need for a unified view in user experience research is emphasized. While I doubt the possibility of a unified view, I think it is possible to combine the two approaches. This combination has only rarely been attempted and has not been critically reflected upon.
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristi...
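The shared-fundamental structure described in the abstract can be illustrated with an approximate (harmonic-summation) version of such an estimator: because the channels share f0 but not amplitudes or phases, each channel's harmonic power is computed separately and the per-channel costs simply add. This is a simplified sketch under white-noise assumptions, not the paper's exact estimator, and all signal parameters below are invented:

```python
import math

def harmonic_power(x, f0, L, fs):
    """Sum over L harmonics of the signal power at h*f0 for one channel."""
    N = len(x)
    p = 0.0
    for h in range(1, L + 1):
        w = 2.0 * math.pi * h * f0 / fs
        re = sum(x[n] * math.cos(w * n) for n in range(N))
        im = sum(x[n] * math.sin(w * n) for n in range(N))
        p += (re * re + im * im) / N
    return p

def multichannel_pitch(channels, candidates, L, fs):
    """Approximate joint estimate: the channels share f0, so their harmonic
    powers add into a single cost function maximized over candidates."""
    return max(candidates,
               key=lambda f0: sum(harmonic_power(x, f0, L, fs) for x in channels))

fs = 8000.0
f_true = 200.0
N = 400
# Two channels with the same pitch but different amplitudes and phases.
ch1 = [math.sin(2 * math.pi * f_true * n / fs)
       + 0.5 * math.sin(2 * math.pi * 2 * f_true * n / fs) for n in range(N)]
ch2 = [0.7 * math.cos(2 * math.pi * f_true * n / fs + 0.3) for n in range(N)]
cands = [100.0 + 5.0 * k for k in range(61)]   # 100..400 Hz grid
f_hat = multichannel_pitch([ch1, ch2], cands, L=2, fs=fs)
print(f_hat)  # -> 200.0
```

Pooling the channels sharpens the cost function around the true fundamental and suppresses the half-pitch ambiguity that a single weak channel can exhibit.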
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
Faustini, J. M.; Jones, J. A.
2001-12-01
This study used an empirical modeling approach to explore landscape controls on spatial variations in reach-scale channel response to peak flows in a mountain watershed. We used historical cross-section surveys spanning 20 years at five sites on 2nd to 5th-order channels and stream gaging records spanning up to 50 years. We related the observed proportion of cross-sections at a site exhibiting detectable change between consecutive surveys to the recurrence interval of the largest peak flow during the corresponding period using a quasi-likelihood logistic regression model. Stream channel response was linearly related to flood size or return period through the logit function, but the shape of the response function varied according to basin size, bed material, and the presence or absence of large wood. At the watershed scale, we hypothesized that the spatial scale and frequency of channel adjustment should increase in the downstream direction as sediment supply increases relative to transport capacity, resulting in more transportable sediment in the channel and hence increased bed mobility. Consistent with this hypothesis, cross sections from the 4th and 5th-order main stem channels exhibit more frequent detectable changes than those at two steep third-order tributary sites. Peak flows able to mobilize bed material sufficiently to cause detectable changes in 50% of cross-section profiles had an estimated recurrence interval of 3 years for the 4th and 5th-order channels and 4 to 6 years for the 3rd-order sites. This difference increased for larger magnitude channel changes; peak flows with recurrence intervals of about 7 years produced changes in 90% of cross sections at the main stem sites, but flows able to produce the same level of response at tributary sites were three times less frequent. At finer scales, this trend of increasing bed mobility in the downstream direction is modified by variations in the degree of channel confinement by bedrock and landforms, the
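The quasi-likelihood logistic model described above links the proportion of changed cross-sections to flood recurrence interval through the logit function. A toy sketch with invented coefficients, chosen here only so that a roughly 3-year flow yields a 50% response (echoing the main-stem result), not fitted to any data:

```python
import math

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

# Hypothetical coefficients: p(change) modeled on the logit scale as a
# linear function of log return period T (years).
a, b = -1.2, 1.1

def p_change(T):
    return inv_logit(a + b * math.log(T))

for T in (1, 3, 7):
    print(T, round(p_change(T), 2))
```

Because the relation is linear only on the logit scale, equal increments in return period produce diminishing increments in response probability near the extremes.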
Vestige: Maximum likelihood phylogenetic footprinting
Directory of Open Access Journals (Sweden)
Wakefield, Matthew J; Maxwell, Peter; Huttley, Gavin A
2005-05-29
Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational processes, DNA repair and selection can be evaluated both
An opportunity cost model of subjective effort and task performance.
Kurzban, Robert; Duckworth, Angela; Kable, Joseph W; Myers, Justus
2013-12-01
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternative explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost--that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternative explanations for both the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across sub-disciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternative models might be empirically distinguished.
Subjective quality assessment of an adaptive video streaming model
Tavakoli, Samira; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Shahid, Muhammad; Garcia, Narciso
2014-01-01
With the recent increased popularity and high usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area, generally focused on the technical enhancement of HAS technology and applications. However, the lack of a common HAS standard has led to multiple proprietary approaches developed by major Internet companies. In the emerging MPEG-DASH standard, the packaging of the video content and the HTTP syntax have been standardized, but all the details of the adaptation behavior are left to the client implementation. Nevertheless, to design an adaptation algorithm which optimizes the viewing experience of the end user, multimedia service providers need to know about the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment was carried out through a subjective study of the end-user response to various possible client behaviors for changing the video quality, taking different QoE-influence factors into account. The experimental conclusions give good insight into the QoE of different adaptation schemes, which can be exploited by HAS clients when designing adaptation algorithms.
Directory of Open Access Journals (Sweden)
Niklitschek, Edwin J; Darnaude, Audrey M
2016-10-01
Background: Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods: We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five different sampling scenarios, where 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios that simulated data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results: Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled, but exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain, than mixing proportion ones across all sampling scenarios (BI < 0.13, SE < 0
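When the nursery-source signatures are known and only the mixing proportions must be estimated, the maximum likelihood solution can be found with a short EM iteration. The sketch below is a deliberately simplified one-dimensional version with Gaussian baseline signatures and invented values, not the multivariate ML-MM procedure of the study:

```python
import math
import random

def em_mixing_proportions(mixed, sources, iters=200):
    """EM for mixing proportions theta_k when each source k has a known
    1-D Gaussian baseline signature given as (mean, sd)."""
    K = len(sources)
    theta = [1.0 / K] * K

    def pdf(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    for _ in range(iters):
        counts = [0.0] * K
        for x in mixed:
            post = [theta[k] * pdf(x, *sources[k]) for k in range(K)]
            z = sum(post)
            for k in range(K):
                counts[k] += post[k] / z   # E-step: posterior source weights
        theta = [c / len(mixed) for c in counts]   # M-step
    return theta

random.seed(1)
sources = [(0.0, 1.0), (5.0, 1.0)]   # hypothetical known nursery signatures
mixed = [random.gauss(*sources[0]) for _ in range(300)] + \
        [random.gauss(*sources[1]) for _ in range(700)]
est = em_mixing_proportions(mixed, sources)
print([round(t, 2) for t in est])
```

With signatures separated by several standard deviations the estimates recover the simulated 0.3/0.7 split closely; shrinking that separation is exactly what inflates the bias and uncertainty examined in the paper.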
Score-based likelihood ratios for handwriting evidence.
Hepler, Amanda B; Saunders, Christopher P; Davis, Linda J; Buscaglia, JoAnn
2012-06-10
Score-based approaches for computing forensic likelihood ratios are becoming more prevalent in the forensic literature. When two items of evidential value are entangled via a score function, several nuances arise when attempting to model the score behavior under the competing source-level propositions. Specific assumptions must be made in order to appropriately model the numerator and denominator probability distributions. This process is fairly straightforward for the numerator of the score-based likelihood ratio, entailing the generation of a database of scores obtained by pairing items of evidence from the same source. However, this process presents ambiguities for the denominator database generation - in particular, how best to generate a database of scores between two items of different sources. Many alternatives have appeared in the literature, three of which we will consider in detail. They differ in their approach to generating denominator databases, by pairing (1) the item of known source with randomly selected items from a relevant database; (2) the item of unknown source with randomly generated items from a relevant database; or (3) two randomly generated items. When the two items differ in type, perhaps one having higher information content, these three alternatives can produce very different denominator databases. While each of these alternatives has appeared in the literature, the decision of how to generate the denominator database is often made without calling attention to the subjective nature of this process. In this paper, we compare each of the three methods (and the resulting score-based likelihood ratios), which can be thought of as three distinct interpretations of the denominator proposition. Our goal in performing these comparisons is to illustrate the effect that subtle modifications of these propositions can have on inferences drawn from the evidence evaluation procedure. The study was performed using a data set composed of cursive writing
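The numerator/denominator structure of a score-based likelihood ratio can be sketched by fitting a simple parametric (here Gaussian) model to each score database and evaluating both densities at the evidence score. All scores below are invented; in practice the construction of the denominator database is exactly the subjective step the paper examines:

```python
import math

def gaussian_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def fit(scores):
    """Sample mean and (unbiased) standard deviation of a score database."""
    n = len(scores)
    mu = sum(scores) / n
    var = sum((s - mu) ** 2 for s in scores) / (n - 1)
    return mu, math.sqrt(var)

# Hypothetical same-source and different-source score databases.
same = [0.82, 0.88, 0.75, 0.91, 0.85, 0.79]
diff = [0.40, 0.33, 0.51, 0.28, 0.45, 0.37]

mu_s, sd_s = fit(same)
mu_d, sd_d = fit(diff)

evidence_score = 0.7
slr = gaussian_pdf(evidence_score, mu_s, sd_s) / gaussian_pdf(evidence_score, mu_d, sd_d)
print(slr > 1)  # score is closer to same-source behaviour
```

Swapping in a differently generated denominator database changes mu_d and sd_d, and hence the resulting ratio, which is the sensitivity the paper sets out to illustrate.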
Maximum likelihood estimation of fractionally cointegrated systems
DEFF Research Database (Denmark)
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment to the equilibrium parameters and the variance-covariance matrix of the error term. We show that using ML principles to estimate jointly all parameters of the fractionally cointegrated system we obtain consistent estimates and provide their asymptotic distributions. The cointegration matrix is asymptotically mixed...
Richards growth model and viability indicators for populations subject to interventions
Directory of Open Access Journals (Sweden)
Selene Loibel
2010-12-01
In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subjected to interventions due to consumption, such as hunting or the farming of animals. The model identification allows us to estimate the probability or the average time for the population to reach a certain level. Parameter inference for these models is obtained with the use of the likelihood profile technique as developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or to evaluate the risk of extinction of autochthonous populations. It is applied to data on the Brazilian beef cattle herd, and the time for the population to reach a certain goal level is investigated.
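A discrete Richards-type model with a harvest term can be simulated directly to estimate the average time for the population to reach a goal level, in the spirit of the viability indicators described above. All parameter values below are invented for illustration and are not estimates from the Brazilian herd data:

```python
import random

def richards_step(n, r, K, m, harvest, sigma, rng):
    """One step of a discrete Richards growth model with a fixed harvest
    (intervention) term and additive environmental noise."""
    growth = r * n * (1.0 - (n / K) ** m)
    return max(n + growth - harvest + rng.gauss(0.0, sigma), 0.0)

def mean_time_to_goal(n0, goal, params, n_sims=2000, horizon=500):
    """Monte Carlo estimate of the average number of steps until the
    population first reaches the goal level (capped at the horizon)."""
    rng = random.Random(42)
    times = []
    for _ in range(n_sims):
        n, t = n0, 0
        while n < goal and t < horizon:
            n = richards_step(n, *params, rng)
            t += 1
        times.append(t)
    return sum(times) / len(times)

# Hypothetical herd (in millions of head): r=0.25, K=220, shape m=1,
# annual offtake 4, environmental noise sd 2.
params = (0.25, 220.0, 1.0, 4.0, 2.0)
print(round(mean_time_to_goal(170.0, 200.0, params), 1))
```

Raising the harvest term lowers the stable equilibrium of the deterministic skeleton; once it drops below the goal level, the goal becomes reachable only through noise, which is the kind of risk the viability indicators quantify.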
Modelling and management of subjective information in a fuzzy setting
Bouchon-Meunier, Bernadette; Lesot, Marie-Jeanne; Marsala, Christophe
2013-01-01
Subjective information is very natural for human beings. It is an issue at the crossroad of cognition, semiotics, linguistics, and psycho-physiology. Its management requires dedicated methods, among which we point out the usefulness of fuzzy and possibilistic approaches and related methods, such as evidence theory. We distinguish three aspects of subjectivity: the first deals with perception and sensory information, including the elicitation of quality assessment and the establishment of a link between physical and perceived properties; the second is related to emotions, their fuzzy nature, and their identification; and the last aspect stems from natural language and takes into account information quality and reliability of information.
Maintenance Models for Systems subject to Measurable Deterioration
R.P. Nicolai (Robin)
2008-01-01
Complex engineering systems such as bridges, roads, flood defence structures, and power pylons play an important role in our society. Unfortunately such systems are subject to deterioration, meaning that in the course of time their condition falls from higher to lower, and possibly even to
Agent-based modeling of subjective well-being
Baggio, J.; Papyrakis, E.
2014-01-01
There has been extensive empirical research in recent years pointing to a weak correlation between economic growth and subjective well-being (happiness), at least for developed economies (i.e. the so-called 'Easterlin paradox'). Recent findings from the behavioural sciences and happiness literature
Analysis of Subjective Conceptualizations Towards Collective Conceptual Modelling
DEFF Research Database (Denmark)
Glückstad, Fumiko Kano; Herlau, Tue; Schmidt, Mikkel Nørgaard
2013-01-01
This work is conducted as a preliminary study for a project where individuals' conceptualizations of domain knowledge will thoroughly be analyzed across 150 subjects from 6 countries. The project aims at investigating how humans' conceptualizations differ according to different types of mother la...
A Predictive Likelihood Approach to Bayesian Averaging
Directory of Open Access Journals (Sweden)
Tomáš Jeřábek
2015-01-01
Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is assessed using historical data covering the domestic economy and the foreign economy, which is represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
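The predictive-likelihood weighting scheme mentioned here can be sketched as follows: each model's accumulated log predictive likelihood over a historical evaluation window is exponentiated (with an underflow guard) and normalised to give combination weights for the density forecasts. The model names and numbers below are made up for illustration and are not taken from the paper.

```python
import math

# Hypothetical accumulated one-step-ahead log predictive likelihoods,
# one entry per competing forecasting model.
log_pred = {"BVAR-1": -412.3, "BVAR-2": -415.9, "DSGE": -418.0, "DSGE-VAR": -413.1}

def predictive_weights(log_pred):
    """Weight each model by its normalised predictive likelihood."""
    m = max(log_pred.values())                      # guard against underflow
    un = {k: math.exp(v - m) for k, v in log_pred.items()}
    z = sum(un.values())
    return {k: v / z for k, v in un.items()}

w = predictive_weights(log_pred)
print(max(w, key=w.get))   # → BVAR-1
```

The combined density forecast is then the mixture of the individual model densities with these weights; the equal-weight scheme simply replaces `w` with uniform weights.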
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified...
Maximum likelihood continuity mapping for fraud detection
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
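The general idea of likelihood-based anomaly scoring for categorical sequences can be shown with a much simpler stand-in than MALCOM itself (which builds a continuity map): fit a first-order Markov model with add-one smoothing on "typical" sequences, then score new sequences by their average transition log-likelihood, flagging low-scoring histories as anomalous. The training sequences below are toy data.

```python
import math
from collections import Counter

def fit_markov(sequences, alpha=1.0):
    """Fit smoothed first-order transition log-probabilities."""
    pair, src = Counter(), Counter()
    for s in sequences:
        for a, b in zip(s, s[1:]):
            pair[(a, b)] += 1
            src[a] += 1
    V = len({sym for s in sequences for sym in s})
    return lambda a, b: math.log((pair[(a, b)] + alpha) / (src[a] + alpha * V))

train = [list("abcabcabc"), list("abcabc"), list("abcab")]
logp = fit_markov(train)

def score(seq):
    """Average transition log-likelihood; low values flag anomalies."""
    steps = list(zip(seq, seq[1:]))
    return sum(logp(a, b) for a, b in steps) / len(steps)

print(score(list("abcabc")) > score(list("acbacb")))   # → True
```

A typical sequence scores much higher than a shuffled one, which is the property the fraud application exploits.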
Extended empirical likelihood for general estimating equations
Tsao, Min; Wu, Fan
2013-01-01
We derive an extended empirical likelihood for parameters defined by estimating equations which generalizes the original empirical likelihood for such parameters to the full parameter space. Under mild conditions, the extended empirical likelihood has all asymptotic properties of the original empirical likelihood. Its contours retain the data-driven shape of the latter. It can also attain second-order accuracy. The first-order extended empirical likelihood is easy to use, yet it is substan...
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.
2010-02-16
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
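The empirical likelihood machinery underlying such methods can be illustrated in its simplest form, Owen-style empirical likelihood for a scalar mean: maximise the product of probability weights subject to the estimating equation constraint, which reduces to a one-dimensional Newton solve for a Lagrange multiplier. The data below are made up; this is a textbook sketch, not the paper's longitudinal-data method.

```python
import math

def el_log_ratio(x, mu, iters=50):
    """Return -2 log R(mu), the empirical likelihood ratio statistic
    for the mean, via Newton iteration on the Lagrange multiplier."""
    d = [xi - mu for xi in x]
    lam = 0.0
    for _ in range(iters):
        g = sum(di / (1.0 + lam * di) for di in d)             # score in lam
        h = -sum(di * di / (1.0 + lam * di) ** 2 for di in d)  # its derivative
        step = g / h
        lam -= step
        if abs(step) < 1e-12:
            break
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

x = [1.2, 0.7, 1.9, 1.1, 0.4, 1.6, 0.9, 1.3]
mean = sum(x) / len(x)
print(el_log_ratio(x, mean))   # essentially 0 at the sample mean
```

The statistic is zero at the sample mean and grows as the hypothesised mean moves away, with an asymptotic chi-squared calibration by the (nonparametric) Wilks theorem the abstract refers to.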
Damage modelling in concrete subject to sulfate attack
Directory of Open Access Journals (Sweden)
N. Cefis
2014-07-01
Full Text Available In this paper, we consider the mechanical effect of the sulfate attack on concrete. The durability analysis of concrete structures in contact to external sulfate solutions requires the definition of a proper diffusion-reaction model, for the computation of the varying sulfate concentration and of the consequent ettringite formation, coupled to a mechanical model for the prediction of swelling and material degradation. In this work, we make use of a two-ions formulation of the reactive-diffusion problem and we propose a bi-phase chemo-elastic damage model aimed to simulate the mechanical response of concrete and apt to be used in structural analyses.
Model reduction of nonlinear systems subject to input disturbances
Ndoye, Ibrahima
2017-07-10
The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators(1) when separate treatment-specific working regression models are correctly specified, yet are at least as efficient as the existing efficient adjusted estimators(1) for any given treatment-specific working regression models whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
Subjective surfaces: a geometric model for boundary completion
Energy Technology Data Exchange (ETDEWEB)
Sarti, Alessandro; Malladi, Ravi; Sethian, J.A.
2000-06-01
We present a geometric model and a computational method for segmentation of images with missing boundaries. In many situations, the human visual system fills in missing gaps in edges and boundaries, building and completing information that is not present. Boundary completion presents a considerable challenge in computer vision, since most algorithms attempt to exploit existing data. A large body of work concerns completion models, which postulate how to construct missing data; these models are often trained and specific to particular images. In this paper, we take the following, alternative perspective: we consider a reference point within an image as given, and then develop an algorithm which tries to build missing information on the basis of the given point of view and the available information as boundary data to the algorithm. Starting from this point of view, a surface is constructed. It is then evolved with the mean curvature flow in the metric induced by the image until a piecewise constant solution is reached. We test the computational model on modal completion, amodal completion, texture, photo and medical images. We extend the geometric model and the algorithm to 3D in order to extract shapes from low signal/noise ratio medical volumes. Results in 3D echocardiography and 3D fetal echography are presented.
Integrating Character Education Model With Spiral System In Chemistry Subject
Hartutik; Rusdarti; Sumaryanto; Supartono
2017-04-01
Integrating character education is the responsibility of all subject teachers including chemistry teacher. The integration of character education is just administrative requirements so that the character changes are not measurable. The research objective 1) describing the actual conditions giving character education, 2) mapping the character integration of chemistry syllabus with a spiral system, and 3) producing syllabus and guide system integrating character education in chemistry lessons. Of the eighteen value character, each character is mapped to the material chemistry value concepts of class X and repeated the system in class XI and class XII. Spiral system integration means integrating the character values of chemistry subjects in steps from class X to XII repeatedly at different depth levels. Besides developing the syllabus, also made the integration of characters in a learning guide. This research was designed with research and development [3] with the scope of 20 chemistry teachers in Semarang. The focus of the activities is the existence of the current character study, mapping the character values in the syllabus, and assessment of the integration guides of character education. The validity test of Syllabus and Lesson Plans by experts in FGD. The data were taken with questionnaire and interviews, then processed by descriptive analysis. The result shows 1) The factual condition, in general, the teachers designed learning one-time face-to-face with the integration of more than four characters so that behaviour changes and depth of character is poorly controlled, 2) Mapping each character values focused in the syllabus. Meaning, on one or two basic competence in four or five times, face to face, enough integrated with the value of one character. In this way, there are more noticeable changes in students behaviour. Guidance is needed to facilitate the integration of character education for teachers integrating systems. Product syllabus and guidelines
Heat transfer modelling of first walls subject to plasma disruption
Energy Technology Data Exchange (ETDEWEB)
Fillo, J.A.; Makowitz, H.
1981-01-01
A brief description of the plasma disruption problem and potential thermal consequences to the first wall is given. Thermal models reviewed include: a) melting of a solid with melt layer in place; b) melting of a solid with complete removal of melt (ablation); c) melting/vaporization of a solid; and d) vaporization of a solid but no phase change affecting the temperature profile.
Mass Change Prediction Model of Concrete Subjected to Sulfate Attack
Directory of Open Access Journals (Sweden)
Kwang-Myong Lee
2015-01-01
Full Text Available The present study suggested a mass change prediction model for sulfate attack of concrete containing mineral admixtures through an immersion test in sulfate solutions. For this, 100% OPC as well as binary and ternary blended cement concrete specimens were manufactured by changing the types and amount of mineral admixture. The concrete specimens were immersed in fresh water, 10% sodium sulfate solution, and 10% magnesium sulfate solution, respectively, and mass change of the specimens was measured at 28, 56, 91, 182, and 365 days. The experimental results indicated that resistance of concrete containing mineral admixture against sodium sulfate attack was far greater than that of 100% OPC concrete. However, in terms of resistance against magnesium sulfate attack, concrete containing mineral admixture was lower than 100% OPC concrete due to the formation of magnesium silicate hydrate (M-S-H, the noncementitious material. Ultimately, based on the experimental results, a mass change prediction model was suggested and it was found that the prediction values using the model corresponded relatively well with the experimental results.
Integration based profile likelihood calculation for PDE constrained parameter estimation problems
Boiger, R.; Hasenauer, J.; Hroß, S.; Kaltenbacher, B.
2016-12-01
Partial differential equation (PDE) models are widely used in engineering and natural sciences to describe spatio-temporal processes. The parameters of the considered processes are often unknown and have to be estimated from experimental data. Due to partial observations and measurement noise, these parameter estimates are subject to uncertainty. This uncertainty can be assessed using profile likelihoods, a reliable but computationally intensive approach. In this paper, we present the integration based approach for the profile likelihood calculation developed by (Chen and Jennrich 2002 J. Comput. Graph. Stat. 11 714-32) and adapt it to inverse problems with PDE constraints. While existing methods for profile likelihood calculation in parameter estimation problems with PDE constraints rely on repeated optimization, the proposed approach exploits a dynamical system evolving along the likelihood profile. We derive the dynamical system for the unreduced estimation problem, prove convergence and study the properties of the integration based approach for the PDE case. To evaluate the proposed method, we compare it with state-of-the-art algorithms for a simple reaction-diffusion model for a cellular patterning process. We observe a good accuracy of the method as well as a significant speed up as compared to established methods. Integration based profile calculation facilitates rigorous uncertainty analysis for computationally demanding parameter estimation problems with PDE constraints.
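The integration-based method of the paper replaces repeated optimisation along the profile; the toy example below instead shows the basic object being computed, a profile likelihood, in the simplest possible setting: a Gaussian mean with the variance profiled out in closed form. The data values are made up, and nothing here touches the PDE-constrained case.

```python
import math

def profile_loglik_mu(x, mu):
    """Profile log-likelihood of a Gaussian mean, with sigma^2
    maximised out analytically (mean squared deviation)."""
    n = len(x)
    s2 = sum((xi - mu) ** 2 for xi in x) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1.0)

x = [2.1, 1.8, 2.6, 2.0, 2.4]
grid = [1.5 + 0.01 * i for i in range(101)]   # candidate means 1.50 .. 2.50
best = max(grid, key=lambda m: profile_loglik_mu(x, m))
# The maximiser is the grid point nearest the sample mean (2.18 here).
```

In PDE-constrained problems each evaluation of the inner maximisation requires a full forward solve, which is why tracing the profile by integrating a dynamical system, as the paper proposes, pays off.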
Wu, L.; Tam, V. H.; Chow, D. S. L.; Putcha, L.
2014-01-01
An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of Space Motion Sickness. The bioavailability and pharmacokinetics (PK) were evaluated under the Food and Drug Administration guidelines for clinical trials with an Investigative New Drug (IND) protocol. The aim of this project was to develop a PK model that can predict the relationship between plasma, saliva and urinary scopolamine concentrations using data collected from the IND clinical trials with INSCOP. Methods: Twelve healthy human subjects were administered three dose levels (0.1, 0.2 and 0.4 mg) of INSCOP. Serial blood, saliva and urine samples were collected between 5 min and 24 h after dosing and scopolamine concentrations were measured by using a validated LC-MS-MS assay. Pharmacokinetic Compartmental models, using actual dosing and sampling times, were built using Phoenix (version 1.2). Model selection was based on the likelihood ratio test on the difference of criteria (-2LL) and comparison of the quality of fit plots. Results: The best structural model for INSCOP (minimal -2LL= 502.8) was established. It consisted of one compartment each for plasma, saliva and urine, respectively, which were connected with linear transport processes except the nonlinear PK process from plasma to saliva compartment. The best-fit estimates of PK parameters from individual PK compartmental analysis and Population PK model analysis were shown in Tables 1 and 2, respectively. Conclusion: A population PK model that could predict population and individual PK of scopolamine in plasma, saliva and urine after dosing was developed and validated. Incorporating a non-linear transfer from plasma to saliva compartments resulted in a significantly improved model fitting. The model could be used to predict scopolamine plasma concentrations from salivary and urinary drug levels, allowing non-invasive therapeutic monitoring of scopolamine in space and other remote environments.
Harrison, Simon M; Whitton, R Chris; Kawcak, Chris E; Stover, Susan M; Pandy, Marcus G
2014-01-03
The equine metacarpophalangeal (MCP) joint is frequently injured, especially by racehorses in training. Most injuries result from repetitive loading of the subchondral bone and articular cartilage rather than from acute events. The likelihood of injury is multi-factorial but the magnitude of mechanical loading and the number of loading cycles are believed to play an important role. Therefore, an important step in understanding injury is to determine the distribution of load across the articular surface during normal locomotion. A subject-specific finite-element model of the MCP joint was developed (including deformable cartilage, elastic ligaments, muscle forces and rigid representations of bone), evaluated against measurements obtained from cadaver experiments, and then loaded using data from gait experiments. The sensitivity of the model to force inputs, cartilage stiffness, and cartilage geometry was studied. The FE model predicted MCP joint torque and sesamoid bone flexion angles within 5% of experimental measurements. Muscle-tendon forces, joint loads and cartilage stresses all increased as locomotion speed increased from walking to trotting and finally cantering. Perturbations to muscle-tendon forces resulted in small changes in articular cartilage stresses, whereas variations in joint torque, cartilage geometry and stiffness produced much larger effects. Non-subject-specific cartilage geometry changed the magnitude and distribution of pressure and the von Mises stress markedly. The mean and peak cartilage stresses generally increased with an increase in cartilage stiffness. Areas of peak stress correlated qualitatively with sites of common injury, suggesting that further modelling work may elucidate the types of loading that precede joint injury and may assist in the development of techniques for injury mitigation. © 2013 Published by Elsevier Ltd.
An isotonic partial credit model for ordering subjects on the basis of their sum scores
Ligtvoet, R.
2012-01-01
In practice, the sum of the item scores is often used as a basis for comparing subjects. For items that have more than two ordered score categories, only the partial credit model (PCM) and special cases of this model imply that the subjects are stochastically ordered on the common latent variable.
DEFF Research Database (Denmark)
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
MLDS: Maximum Likelihood Difference Scaling in R
Directory of Open Access Journals (Sweden)
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b and (c,d on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval (a,b or (c,d is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
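The stochastic decision model behind difference scaling can be simulated directly. The MLDS package (in R) estimates the perceptual scale by maximum likelihood; the sketch below only implements the forward model, with hypothetical scale values `psi` and noise `sigma`: the observer reports that the second interval (c,d) is larger when its scale difference, plus Gaussian decision noise, exceeds that of (a,b).

```python
import random

# Hypothetical perceptual scale values for five stimulus levels.
psi = {1: 0.0, 2: 0.15, 3: 0.45, 4: 0.9, 5: 1.0}
sigma = 0.1

def second_pair_larger(a, b, c, d, rng):
    """Simulated observer: is interval (c,d) judged larger than (a,b)?"""
    delta = (psi[d] - psi[c]) - (psi[b] - psi[a])
    return delta + rng.gauss(0.0, sigma) > 0

rng = random.Random(0)
# (3,5) spans a much larger perceptual interval than (1,2), so the
# simulated observer should nearly always choose the second pair.
rate = sum(second_pair_larger(1, 2, 3, 5, rng) for _ in range(2000)) / 2000
```

Fitting reverses this simulation: given many such binary responses, the `psi` values are recovered by maximising the likelihood of the observed choices, which is what MLDS does.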
Subject-specific musculoskeletal modeling in the evaluation of shoulder muscle and joint function.
Wu, Wen; Lee, Peter V S; Bryant, Adam L; Galea, Mary; Ackland, David C
2016-11-07
Upper limb muscle force estimation using Hill-type muscle models depends on musculotendon parameter values, which cannot be readily measured non-invasively. Generic and scaled-generic parameters may be quickly and easily employed, but these approaches do not account for an individual subject's joint torque capacity. The objective of the present study was to develop a subject-specific experimental testing and modeling framework to evaluate shoulder muscle and joint function during activities of daily living, and to assess the capacity of generic and scaled-generic musculotendon parameters to predict muscle and joint function. Three-dimensional musculoskeletal models of the shoulders of 6 healthy subjects were developed to calculate muscle and glenohumeral joint loading during abduction, flexion, horizontal flexion, nose touching and reaching using subject-specific, scaled-generic and generic musculotendon parameters. Muscle and glenohumeral joint forces calculated using generic and scaled-generic models were significantly different to those of subject-specific models. Muscles in generic musculoskeletal models operated further from the plateau of their force-length curves than those of scaled-generic and subject-specific models, while muscles in subject-specific models operated over a wider region of their force-length curves than those of the generic or scaled-generic models, reflecting the diversity of subject shoulder strength. The findings of this study suggest that generic and scaled-generic musculotendon parameters may not provide sufficient accuracy in the prediction of shoulder muscle and joint loading when compared to models that employ subject-specific parameter-estimation approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Vahidi, O; Kwok, K E; Gopaluni, R B
2016-01-01
We have expanded a former compartmental model of blood glucose regulation for healthy and type 2 diabetic subjects. The former model was a detailed physiological model which considered the interactions of three substances, glucose, insulin and glucagon on regulating the blood sugar. The main...... obtained during oral glucose tolerance test and isoglycemic intravenous glucose infusion test from both type 2 diabetic and healthy subjects to estimate the model parameters and to validate the model results. The estimation of model parameters is accomplished through solving a nonlinear optimization...
Diffusion Tensor Estimation by Maximizing Rician Likelihood.
Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry
2007-01-01
Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors - e.g., fractional anisotropy and apparent diffusion coefficient - to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation.
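The core distinction the abstract draws, maximising the Rician rather than a Gaussian likelihood, can be sketched for the scalar case: estimating a signal amplitude A from Rician-distributed magnitudes. The sketch below simulates magnitudes, evaluates the Rician log-likelihood with a series expansion of the Bessel function I0, and maximises over a grid; all values are hypothetical and this is far simpler than the full tensor estimator (DTEMRL) in the paper.

```python
import math
import random

def log_i0(z):
    """log of the modified Bessel function I0 via its power series
    (adequate for the moderate arguments used here)."""
    term, total, k = 1.0, 1.0, 0
    while True:
        k += 1
        term *= (z * z / 4.0) / (k * k)
        total += term
        if term < total * 1e-15:
            return math.log(total)

def rician_loglik(data, A, sigma=1.0):
    """Rician log-likelihood of amplitude A given magnitude data."""
    s2 = sigma * sigma
    return sum(math.log(m / s2) - (m * m + A * A) / (2 * s2) + log_i0(m * A / s2)
               for m in data)

# Simulate Rician magnitudes: |A + complex Gaussian noise|.
random.seed(0)
A_true, n = 5.0, 500
data = [math.hypot(A_true + random.gauss(0, 1), random.gauss(0, 1))
        for _ in range(n)]

grid = [4.0 + 0.05 * i for i in range(41)]          # candidates 4.00 .. 6.00
A_hat = max(grid, key=lambda A: rician_loglik(data, A))   # close to A_true
```

A Gaussian fit to the same magnitudes is biased upward at low SNR, which is exactly the effect on derived quantities such as fractional anisotropy that the paper addresses.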
Subject-based discriminative sparse representation model for detection of concealed information.
Akhavan, Amir; Moradi, Mohammad Hassan; Vand, Safa Rafiei
2017-05-01
The use of machine learning approaches in the concealed information test (CIT) plays a key role in the progress of this neurophysiological field. In this paper, we present a new machine learning method for CIT in which each subject is considered independent of the others. The main goal of this study is to adapt discriminative sparse models to be applicable to the subject-based concealed information test. In order to provide sufficient discriminability between guilty and innocent subjects, we introduce a novel discriminative sparse representation model and its appropriate learning methods. For evaluation of the method, forty-four subjects participated in a mock crime scenario and their EEG data were recorded. As the model input, recurrence plot features were extracted from single-trial data of different stimuli. The extracted feature vectors were then reduced using a statistical dependency method. The reduced feature vector went through the proposed subject-based sparse model, in which the discrimination power of the sparse code and the reconstruction error were applied simultaneously. Experimental results showed that the proposed approach achieved better performance than other competing discriminative sparse models. The classification accuracy, sensitivity and specificity of the presented sparsity-based method were about 93%, 91% and 95%, respectively. Using the EEG data of a single subject in response to different stimulus types, and with the aid of the proposed discriminative sparse representation model, one can distinguish guilty subjects from innocent ones. Indeed, this property eliminates the necessity of several subjects' EEG data in model learning and decision making for a specific subject. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Modeling of failure and response of laminated composites subjected to in-plane loads
Shahid, Iqbal; Chang, Fu-Kuo
1993-01-01
An analytical model was developed for predicting the response of laminated composites with or without a cutout and subjected to in-plane tensile and shear loads. Material damage resulting from the loads in terms of matrix cracking, fiber-matrix shearing, and fiber breakage was considered in the model. Delamination, an out-of-plane failure mode, was excluded from the model.
A Panel Data Model for Subjective Information on Household Income Growth
Das, J.W.M.; van Soest, A.H.O.
1996-01-01
Subjective expectations about future income changes are analyzed, using household panel data. The models used are extensions of existing binary choice panel data models to the case of ordered response. We consider both random and fixed individual effects. The random effects model is estimated by
A panel data model for subjective information on household income growth
Das, J.W.M.; van Soest, A.H.O.
1996-01-01
Subjective expectations about future income changes are analyzed, using household panel data. The models used are extensions of existing binary choice panel data models to the case of ordered response. We consider static models with random and fixed individual effects. We also look at a dynamic
Robust Binary Hypothesis Testing Under Contaminated Likelihoods
Wei, Dennis; Varshney, Kush R.
2014-01-01
In hypothesis testing, the phenomenon of label noise, in which hypothesis labels are switched at random, contaminates the likelihood functions. In this paper, we develop a new method to determine the decision rule when we do not have knowledge of the uncontaminated likelihoods and contamination probabilities, but only have knowledge of the contaminated likelihoods. In particular we pose a minimax optimization problem that finds a decision rule robust against this lack of knowledge. The method...
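To make the label-noise setup concrete: under flip probabilities eps0 and eps1, the observable likelihoods are mixtures of the true class densities. The sketch below, with invented Gaussian densities and flip rates, shows the contamination and a naive plug-in likelihood-ratio rule; it does not reproduce the paper's minimax construction, which is designed precisely for the case where eps0 and eps1 are unknown:

```python
import numpy as np

def gauss(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# True (uncontaminated) class-conditional likelihoods for two hypotheses.
def p0(x): return gauss(x, 0.0)
def p1(x): return gauss(x, 1.5)

# Label noise with flip probabilities eps0, eps1 mixes the class densities;
# only these contaminated likelihoods would be observable in practice.
eps0, eps1 = 0.2, 0.1
def p0_tilde(x): return (1 - eps0) * p0(x) + eps0 * p1(x)
def p1_tilde(x): return (1 - eps1) * p1(x) + eps1 * p0(x)

# A plug-in likelihood-ratio rule built from the contaminated densities.
def decide(x, tau=1.0):
    return int(p1_tilde(x) / p0_tilde(x) > tau)

print(decide(-1.0), decide(2.5))  # prints: 0 1
```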
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
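A minimal sketch of the GLUE idea the abstract relies on: Monte-Carlo sampling of a first-order inactivation rate, an informal likelihood, and a "behavioural" threshold. The data, prior range, likelihood choice, and threshold below are all invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic inactivation time series (days vs log10 relative concentration);
# the real study used 2-month batch experiments, these numbers are made up.
t = np.array([0, 7, 14, 28, 42, 56], dtype=float)
true_k = 0.05                                # per-day first-order rate
logC = -true_k * t / np.log(10) + rng.normal(0, 0.05, t.size)

# GLUE: sample parameters from a prior range, score each with an informal
# likelihood, and keep the "behavioural" sets above a threshold.
k_samples = rng.uniform(0.0, 0.2, 20000)
sse = np.array([np.sum((logC - (-k * t / np.log(10))) ** 2) for k in k_samples])
likelihood = np.exp(-sse / sse.min())        # one common informal choice
behavioural = likelihood > 0.1 * likelihood.max()

k_best = k_samples[np.argmax(likelihood)]
lo, hi = np.percentile(k_samples[behavioural], [5, 95])
print(f"k ~ {k_best:.3f}, 90% GLUE band [{lo:.3f}, {hi:.3f}]")
```

The behavioural band is the GLUE analogue of the confidence interval a least-squares fit would produce, which is the comparison the abstract reports.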
Quantifying functional connectivity in multi-subject fMRI data using component models.
Madsen, Kristoffer H; Churchill, Nathan W; Mørup, Morten
2017-02-01
Functional magnetic resonance imaging (fMRI) is increasingly used to characterize functional connectivity between brain regions. Given the vast number of between-voxel interactions in high-dimensional fMRI data, it is an ongoing challenge to detect stable and generalizable functional connectivity in the brain among groups of subjects. Component models can be used to define subspace representations of functional connectivity that are more interpretable. It is, however, unclear which component model provides the optimal representation of functional networks for multi-subject fMRI datasets. A flexible cross-validation approach that assesses the ability of the models to predict voxel-wise covariance in new data, using three different measures of generalization, is proposed. This framework is used to compare a range of component models with varying degrees of flexibility in their representation of functional connectivity, evaluated on both simulated and experimental resting-state fMRI data. It was demonstrated that highly flexible subject-specific component subspaces, as well as very constrained average models, are poor predictors of whole-brain functional connectivity, whereas the best-generalizing models account for subject variability within a common spatial subspace. Within this set of models, spatial Independent Component Analysis (sICA) on concatenated data provides more interpretable brain patterns, whereas a consistent-covariance model that accounts for subject-specific network scaling (PARAFAC2) provides greater stability in functional connectivity relationships between components and their spatial representations. The proposed evaluation framework is a promising quantitative approach to evaluating component models, and reveals important differences between subspace models in terms of predictability, robustness, characterization of subject variability, and interpretability of the model parameters. Hum Brain Mapp 38:882-899, 2017.
MARGINAL EMPIRICAL LIKELIHOOD AND SURE INDEPENDENCE FEATURE SCREENING.
Chang, Jinyuan; Tang, Cheng Yong; Wu, Yichao
2013-08-01
We study a marginal empirical likelihood approach in scenarios where the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios as functions of the parameters of interest are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable is contributing to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and generalized linear models. Different from most existing feature screening approaches that rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach is capable of further incorporating the level of uncertainty of such estimators. This merit inherits the self-studentization property of the empirical likelihood approach, and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive in its distributional assumptions, and can be conveniently adapted to a broad range of scenarios, such as models specified using general moment conditions. Our theoretical results and extensive numerical examples by simulations and data analysis demonstrate the merits of the marginal empirical likelihood approach.
An improved likelihood model for eye tracking
DEFF Research Database (Denmark)
Hammoud, Riad I.; Hansen, Dan Witzner
2007-01-01
While existing eye detection and tracking algorithms can work reasonably well in a controlled environment, they tend to perform poorly under real-world imaging conditions, where the lighting produces shadows and the person's eyes can be occluded by e.g. glasses or makeup. As a result, pixel clusters associated with the eyes tend to be grouped together with background features. This problem occurs both for eye detection and eye tracking. Problems that especially plague eye tracking include head movement, eye blinking and light changes, all of which can cause the eyes to suddenly disappear. The usual approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to the missed-data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even...
Marra, M.A.; Vanheule, V.; Fluit, R.; Koopman, B.H.; Rasmussen, J.; Verdonschot, N.J.; Andersen, M.S.
2015-01-01
Musculoskeletal (MS) models should be able to integrate patient-specific MS architecture and undergo thorough validation prior to their introduction into clinical practice. We present a methodology to develop subject-specific models able to simultaneously predict muscle, ligament, and knee joint
Winkelmann, Rainer
2004-01-01
The previous literature on the determinants of individual well-being has failed to fully account for the interdependencies in well-being at the family level. This paper develops an ordered probit model with multiple random effects that allows to identify the intrafamily correlation in well-being. The parameters of the model can be estimated with panel data using Maximum Marginal Likelihood. The approach is illustrated in an application using panel data for the period 1984-1997 from the German...
Employee subjective well-being and physiological functioning: An integrative model
Directory of Open Access Journals (Sweden)
Lauren Kuykendall
2015-06-01
Research shows that worker subjective well-being influences physiological functioning—an early signal of poor health outcomes. While several theoretical perspectives provide insights on this relationship, the literature lacks an integrative framework explaining the relationship. We develop a conceptual model explaining the link between subjective well-being and physiological functioning in the context of work. Integrating positive psychology and occupational stress perspectives, our model explains the relationship between subjective well-being and physiological functioning as a result of the direct influence of subjective well-being on physiological functioning and of their common relationships with work stress and personal resources, both of which are influenced by job conditions.
Hysteretic MDOF Model to Quantify Damage for RC Shear Frames Subject to Earthquakes
DEFF Research Database (Denmark)
Köylüoglu, H. Ugur; Nielsen, Søren R.K.; Cakmak, Ahmet S.
A hysteretic mechanical formulation is derived to quantify local, modal and overall damage in reinforced concrete (RC) shear frames subject to seismic excitation. Each interstorey is represented by a Clough and Johnston (1966) hysteretic constitutive relation with degrading elastic fraction of th... shear frame is subject to simulated earthquake excitations, which are modelled as a stationary Gaussian stochastic process with a Kanai-Tajimi spectrum, multiplied by an envelope function. The relationship between local, modal and overall damage indices is investigated statistically.
Quantifying functional connectivity in multi-subject fMRI data using component models
DEFF Research Database (Denmark)
Madsen, Kristoffer Hougaard; Churchill, Nathan William; Mørup, Morten
2017-01-01
Functional magnetic resonance imaging (fMRI) is increasingly used to characterize functional connectivity between brain regions. Given the vast number of between-voxel interactions in high-dimensional fMRI data, it is an ongoing challenge to detect stable and generalizable functional connectivity in the brain among groups of subjects. Component models can be used to define subspace representations of functional connectivity that are more interpretable. It is, however, unclear which component model provides the optimal representation of functional networks for multi-subject fMRI datasets. A flexible cross-validation approach that assesses the ability of the models to predict voxel-wise covariance in new data is used to compare a range of component models with varying degrees of flexibility in their representation of functional connectivity, evaluated on both simulated and experimental resting-state fMRI data. It was demonstrated that highly flexible subject-specific component subspaces, as well as very constrained average models, are poor predictors of whole-brain functional connectivity, whereas the best-generalizing models account for subject variability within a common spatial subspace.
Likelihood Analysis of Supersymmetric SU(5) GUTs
Bagnaschi, E.
2017-01-01
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...
On the Relationships between Sum Score Based Estimation and Joint Maximum Likelihood Estimation
del Pino, Guido; San Martin, Ernesto; Gonzalez, Jorge; De Boeck, Paul
2008-01-01
This paper analyzes the sum score based (SSB) formulation of the Rasch model, where items and sum scores of persons are considered as factors in a logit model. After reviewing the evolution leading to the equality between their maximum likelihood estimates, the SSB model is then discussed from the point of view of pseudo-likelihood and of…
Penalized maximum likelihood estimation for generalized linear point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper with extensions to multivariate and additive model specifications. The methods are implemented in the R package ppstat.
Nonparametric likelihood based estimation of linear filters for point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard
2015-01-01
We consider models for multivariate point processes where the intensity is given nonparametrically in terms of functions in a reproducing kernel Hilbert space. The likelihood function involves a time integral and is consequently not given in terms of a finite number of kernel evaluations. The main...
Likelihood-based inference for clustered line transect data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Schweder, Tore
2006-01-01
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...
A Rayleigh Doppler frequency estimator derived from maximum likelihood theory
DEFF Research Database (Denmark)
Hansen, Henrik; Affes, Sofiéne; Mermelstein, Paul
1999-01-01
capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes model (1974) of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield good...
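The "FFT plus simple post-processing" structure of such an estimator can be sketched as follows for a single complex tone in white noise, for which the maximum likelihood frequency estimate is the periodogram peak. The sample rate, frequency, and noise level are invented, and this sketch is not the paper's Jakes-model derivation:

```python
import numpy as np

fs = 1000.0                      # sample rate, Hz (illustrative values)
f_true = 123.0                   # "Doppler" shift to recover
n = 1024
t = np.arange(n) / fs
rng = np.random.default_rng(2)
x = np.exp(2j * np.pi * f_true * t) \
    + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# ML frequency estimate for a tone in white noise: periodogram peak,
# i.e. an FFT followed by a simple peak search as post-processing.
spectrum = np.abs(np.fft.fft(x)) ** 2
k = np.argmax(spectrum)
f_hat = k * fs / n               # FFT-bin estimate, resolution fs/n ~ 1 Hz
print(round(f_hat, 1))
```

In practice a quadratic interpolation around the peak bin refines the estimate below the bin width; that refinement is the "simple post-processing" the abstract mentions.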
GPU accelerated likelihoods for stereo-based articulated tracking
DEFF Research Database (Denmark)
Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny
2010-01-01
For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...
Likelihood-based inference for clustered line transect data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus Plenge; Schweder, Tore
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...
Carbon flux bias estimation employing Maximum Likelihood Ensemble Filter (MLEF)
Zupanski, Dusanka; Denning, A. Scott; Uliasz, Marek; Zupanski, Milija; Schuh, Andrew E.; Rayner, Peter J.; Peters, Wouter; Corbin, Katherine D.
2007-01-01
We evaluate the capability of an ensemble based data assimilation approach, referred to as Maximum Likelihood Ensemble Filter (MLEF), to estimate biases in the CO2 photosynthesis and respiration fluxes. We employ an off-line Lagrangian Particle Dispersion Model (LPDM), which is driven by the carbon
Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes
DEFF Research Database (Denmark)
Zimmermann, Ralf
2010-01-01
The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood function of spatial Gaussian predictor models as a function of its hyperparameters is investigated theoretically. Asymptotic sandwich bounds for the maximum likelihood function in terms of the condition number of the associated covariance matrix are established. As a consequence, the main result is obtained...
Cox, Murray P.; Mendez, Fernando L.; Karafet, Tatiana M.; Pilkington, Maya Metni; Kingan, Sarah B.; Destro-Bisol, Giovanni; Strassmann, Beverly I.; Hammer, Michael F.
2008-01-01
A 2.4-kb stretch within the RRM2P4 region of the X chromosome, previously sequenced in a sample of 41 globally distributed humans, displayed both an ancient time to the most recent common ancestor (e.g., a TMRCA of ∼2 million years) and a basal clade composed entirely of Asian sequences. This pattern was interpreted to reflect a history of introgressive hybridization from archaic hominins (most likely Asian Homo erectus) into the anatomically modern human genome. Here, we address this hypothesis by resequencing the 2.4-kb RRM2P4 region in 131 African and 122 non-African individuals and by extending the length of sequence in a window of 16.5 kb encompassing the RRM2P4 pseudogene in a subset of 90 individuals. We find that both the ancient TMRCA and the skew in non-African representation in one of the basal clades are essentially limited to the central 2.4-kb region. We define a new summary statistic called the minimum clade proportion (pmc), which quantifies the proportion of individuals from a specified geographic region in each of the two basal clades of a binary gene tree, and then employ coalescent simulations to assess the likelihood of the observed central RRM2P4 genealogy under two alternative views of human evolutionary history: recent African replacement (RAR) and archaic admixture (AA). A molecular-clock-based TMRCA estimate of 2.33 million years is a statistical outlier under the RAR model; however, the large variance associated with this estimate makes it difficult to distinguish the predictions of the human origins models tested here. The pmc summary statistic, which has improved power with larger samples of chromosomes, yields values that are significantly unlikely under the RAR model and fit expectations better under a range of archaic admixture scenarios. PMID:18202385
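One plausible reading of the pmc statistic defined in this abstract, the region's share of individuals in each of the two basal clades, minimized over the clades, can be sketched as follows. The function and clade membership lists are hypothetical illustrations, not the authors' code or data:

```python
# pmc: for a binary gene tree with two basal clades and a geographic region
# of interest, compute the region's proportion in each clade, take the minimum.
def pmc(clade_a, clade_b, region):
    def share(clade):
        return sum(1 for ind in clade if ind in region) / len(clade)
    return min(share(clade_a), share(clade_b))

clade_a = ["af1", "af2", "as1", "as2"]   # hypothetical individuals per clade
clade_b = ["as3", "as4", "as5", "af3"]
africans = {"af1", "af2", "af3"}

print(pmc(clade_a, clade_b, africans))   # min(2/4, 1/4) = 0.25
```

Under a recent-African-replacement history, African lineages are expected in both basal clades, so a very low African pmc (as observed at RRM2P4) is what the coalescent simulations in the abstract assess.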
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait
Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph
2016-01-01
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle–tendon (MT) model parameters for each of
Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait
Carbone, V.; Krogt, M.M. van der; Koopman, H.F.J.M.; Verdonschot, N.J.
2016-01-01
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of
Image driven subject-specific finite element models of spinal biomechanics.
Zanjani-Pour, Sahand; Winlove, C Peter; Smith, Christopher W; Meakin, Judith R
2016-04-11
Finite element (FE) modelling is an established technique for investigating spinal biomechanics. Using image data to produce FE models with subject-specific geometry and displacement boundary conditions may help extend their use to the assessment of spinal loading in individuals. Lumbar spine magnetic resonance images from nine participants in the supine, standing and sitting postures were obtained, and 2D poroelastic FE models of the lumbar spine were created from the supine data. The rigid body translation and rotation of the vertebral bodies as the participant moved to standing or sitting were applied to the model. The resulting pore pressure in the centre of the L4/L5 disc was determined, and its sensitivity to the material properties and vertebral body displacements was assessed. Although the limitations of using a 2D model mean the predicted pore pressures are unlikely to be accurate, the results showed that subject-specific variation in geometry and motion during postural change leads to variation in pore pressure. The model was sensitive to the Young's modulus of the annulus matrix, the permeability of the nucleus, and the vertical translation of the vertebrae. This study demonstrates the feasibility of using image data to drive subject-specific lumbar spine FE models and indicates where further development is required to provide a method for assessing spinal biomechanics in a wide range of individuals.
Vector model for mapping of visual space to subjective 4-D sphere
Matuzevicius, Dalius; Vaitkevicius, Henrikas
2014-03-01
Here we present a mathematical model of binocular vision that maps the visible physical world to a subjective perception of it. The subjective space is a set of 4-D vectors whose components are the outputs of four monocular neurons from each of the two eyes. Monocular neurons have one of four types of concentric receptive fields with Gabor-like weighting coefficients. This vector representation of binocular vision is then implemented as a pool of neurons, each selective to an object's particular location in 3-D visual space. Formally, each point of the visual space is projected onto a 4-D sphere. The proposed model allows determination of subjective distances in depth and direction, provides computational means for determining Panum's area, and explains diplopia and allelotropia.
Jackknife empirical likelihood method for copulas
Peng, Liang; Qi, Yongcheng; Van Keilegom, Ingrid
2012-01-01
Copulas are used to depict dependence among several random variables. Both parametric and non-parametric estimation methods have been studied in the literature. Moreover, profile empirical likelihood methods based on either empirical copula estimation or smoothed copula estimation have been proposed to construct confidence intervals of a copula. In this paper, a jackknife empirical likelihood method is proposed to reduce the computation with respect to the existing profile empirical likelihoo...
National Research Council Canada - National Science Library
Rodríguez-Fernández, Arantzazu; Ramos-Díaz, Estibaliz; Fernández-Zabala, Arantza; Goñi, Eider; Goñi, Alfredo; Esnaola, Igor
2016-01-01
...), aged between 12 and 15 years (M = 13.72, SD = 1.09), randomly selected. We used a structural equation model to analyze the effects of perceived social support, self-concept and resilience on subjective well-being and school engagement. Results...
Hybrid neural network model for the design of beam subjected to ...
Indian Academy of Sciences (India)
This paper demonstrates the applicability of Artificial Neural Networks (ANN) and Genetic Algorithms (GA) for the design of beams subjected to moment and shear. A hybrid neural network model which combines the features of feed-forward neural networks and genetic algorithms has been developed for the design of beam ...
Sensitivity of subject-specific models to errors in musculo-skeletal geometry
Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph
2012-01-01
Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in
Directory of Open Access Journals (Sweden)
Kang-Wook Lee
2017-05-01
An important issue for international businesses and academia is selecting countries in which to expand in order to achieve entrepreneurial sustainability. This study develops a country selection model for sustainable construction businesses using both objective and subjective information. The objective information consists of 14 variables related to country risk and project performance in 32 countries over 25 years. This hybrid model applies subjective weighting from industrial experts to objective information using a fuzzy LinPreRa-based Analytic Hierarchy Process. The hybrid model yields a more accurate country selection compared to a purely objective information-based model in experienced countries. Interestingly, the hybrid model provides some predictions that differ from purely subjective opinions in unexperienced countries, which implies that expert opinion is not always reliable. In addition, feedback from five experts in top international companies is used to validate the model's completeness, effectiveness, generality, and applicability. The model is expected to aid decision makers in selecting better candidate countries that lead to sustainable business success.
Experiential Learning Model on Entrepreneurship Subject to Improve Students’ Soft Skills
Directory of Open Access Journals (Sweden)
Lina Rifda Naufalin
2016-06-01
This research aims to improve students' soft skills in the entrepreneurship subject by using an experiential learning model. It was expected that the learning model could improve students' soft skills, as indicated by higher confidence, result and job orientation, courage to take risks, leadership, originality, and future orientation. It was a classroom action research study using Kemmis and McTaggart's design model, conducted over two cycles. The subjects of the study were economics education students in the 2015/2016 academic year. Findings show that the experiential learning model improved students' soft skills: the confidence dimension increased by 52.1%, result orientation by 22.9%, courage to take risks by 10.4%, leadership by 12.5%, originality by 10.4%, and future orientation by 18.8%. It can be concluded that the experiential learning model is an effective model for improving students' soft skills in the entrepreneurship subject, with the confidence dimension showing the largest rise. Students' soft skills are shaped through continuous stimulus as they get involved in the implementation.
Likelihood analysis of supersymmetric SU(5) GUTs
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)
2017-02-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m_1/2; distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m_5 and m_10; masses for the 5 and anti-5 Higgs representations, m_Hu and m_Hd; a universal trilinear soft SUSY-breaking parameter A_0; and the ratio of Higgs vevs, tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E_T events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u_R/c_R - χ^0_1 coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν_τ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)
Maximum likelihood optimal and robust Support Vector Regression with lncosh loss function.
Karal, Omer
2017-10-01
In this paper, a novel and continuously differentiable convex loss function based on natural logarithm of hyperbolic cosine function, namely lncosh loss, is introduced to obtain Support Vector Regression (SVR) models which are optimal in the maximum likelihood sense for the hyper-secant error distributions. Most of the current regression models assume that the distribution of error is Gaussian, which corresponds to the squared loss function and has helpful analytical properties such as easy computation and analysis. However, in many real world applications, most observations are subject to unknown noise distributions, so the Gaussian distribution may not be a useful choice. The developed SVR model with the parameterized lncosh loss provides a possibility of learning a loss function leading to a regression model which is maximum likelihood optimal for a specific input-output data. The SVR models obtained with different parameter choices of lncosh loss with ε-insensitiveness feature, possess most of the desirable characteristics of well-known loss functions such as Vapnik's loss, the Squared loss, and Huber's loss function as special cases. In other words, it is observed in the extensive simulations that the mentioned lncosh loss function is entirely controlled by a single adjustable λ parameter and as a result, it allows switching between different losses depending on the choice of λ. The effectiveness and feasibility of lncosh loss function are validated through a number of synthetic and real world benchmark data sets for various types of additive noise distributions. Copyright © 2017 Elsevier Ltd. All rights reserved.
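The interpolation behavior described above can be sketched numerically. The paper's exact parameterization is not given here, so the scaling below, log(cosh(λe))/λ, is an assumption; it tends to a scaled squared loss as λ→0 and to the absolute (Vapnik-like) loss as λ→∞.

```python
import numpy as np

def lncosh_loss(e, lam):
    # log(cosh(lam*e)) / lam, evaluated stably via
    # log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2)
    x = np.abs(lam * np.asarray(e, dtype=float))
    return (x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0)) / lam

e = np.linspace(-3.0, 3.0, 7)
small_lam = lncosh_loss(e, 0.1)   # close to (0.1/2) * e**2: scaled squared loss
large_lam = lncosh_loss(e, 50.0)  # close to |e|: absolute-value-like loss
```

The single λ parameter thus switches the loss between squared-loss-like and robust absolute-loss-like behavior, matching the abstract's description.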
Zoraghi, Nima; Amiri, Maghsoud; Talebi, Golnaz; Zowghi, Mahdi
2013-12-01
This paper presents a fuzzy multi-criteria decision-making (FMCDM) model that integrates subjective and objective weights for ranking and evaluating service quality in hotels. The objective method derives criteria weights through mathematical calculation, while the subjective method uses the judgments of decision makers. We use a combination of the weights obtained by both approaches to evaluate service quality in the hotel industry. A real case study ranking five hotels illustrates the method, and examples demonstrate its capabilities.
The Patient-Worker: A Model for Human Research Subjects and Gestational Surrogates.
Ryman, Emma; Fulfer, Katy
2017-01-13
We propose the 'patient-worker' as a theoretical construct that responds to moral problems arising with the globalization of healthcare and medical research. The patient-worker model recognizes that some participants in global medical industries are workers and are owed workers' rights. Further, these participants are patient-like insofar as they are beneficiaries of fiduciary relationships with healthcare professionals. We apply the patient-worker model to human subjects research and commercial gestational surrogacy. In human subjects research, subjects are usually characterized either as patients or as workers. By questioning this dichotomy, we argue that some subject populations fit into both categories. With respect to commercial surrogacy, we enrich feminist discussions of embodied labor by describing how surrogates are beneficiaries of fiduciary obligations. They are not just workers, but patient-workers. Through these applications, the patient-worker model offers a helpful normative framework for exploring what globalized medical industries owe to the individuals who bear the bodily burdens of medical innovation. © 2017 John Wiley & Sons Ltd.
An improved synchronization likelihood method for quantifying neuronal synchrony.
Khanmohammadi, Sina
2017-12-01
Indirect quantification of the synchronization between two dynamical systems from measured experimental data has gained much attention in recent years, especially in the computational neuroscience community, where the exact model of the neuronal dynamics is unknown. In this regard, one of the most promising methods for quantifying the interrelationship between nonlinear non-stationary systems is Synchronization Likelihood (SL), which is based on the likelihood of the auto-recurrence of embedding vectors (similar patterns) in multiple dynamical systems. However, the synchronization likelihood method uses the Euclidean distance to determine the similarity of two patterns, which is known to be sensitive to outliers. In this study, we propose a discrete synchronization likelihood (DSL) method to overcome this limitation by using the Manhattan distance in the discrete domain (l1 norm on discretized signals) to identify the auto-recurrence of embedding vectors. The proposed method was tested using unidirectional and bidirectional identical/non-identical coupled Hénon maps, a Watts-Strogatz small-world network with nonlinearly coupled nodes based on the Kuramoto model, and the real-world ADHD-200 fMRI benchmark dataset. According to the results, the proposed method shows comparable and in some cases better performance than the conventional SL method, especially when the underlying highly connected coupled dynamical system goes through subtle changes in the bivariate case or sudden shifts in the multivariate case. Copyright © 2017 Elsevier Ltd. All rights reserved.
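The l1-based recurrence test underlying DSL can be sketched in a toy form (this is a simplification for illustration, not the full synchronization-likelihood estimator; the bin count, threshold, and embedding settings below are assumed):

```python
import numpy as np

def embed(x, dim=3, lag=1):
    # time-delay embedding: each row is one embedding vector
    n = len(x) - (dim - 1) * lag
    return np.array([x[i:i + (dim - 1) * lag + 1:lag] for i in range(n)])

def recurrences_l1(x, dim=3, lag=1, n_bins=8, eps=1):
    # discretize the signal, embed it, and flag pairs of embedding
    # vectors whose Manhattan (l1) distance is at most eps
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    v = embed(np.digitize(x, edges), dim, lag)
    d = np.abs(v[:, None, :] - v[None, :, :]).sum(axis=2)  # pairwise l1
    return d <= eps

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
R = recurrences_l1(x)  # symmetric boolean recurrence matrix
```

In the full method, recurrences detected this way in each signal are compared across signals to estimate the likelihood of simultaneous pattern recurrence.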
Robust multipoint water-fat separation using fat likelihood analysis.
Yu, Huanzhou; Reeder, Scott B; Shimakawa, Ann; McKenzie, Charles A; Brittain, Jean H
2012-04-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. Copyright © 2011 Wiley-Liss, Inc.
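The core residual-comparison idea can be sketched with a toy two-model fit: a voxel whose echoes fit the multi-peak fat model with a smaller least-squares residual than a single-peak model is flagged as fat-likely. The echo times, fat peak offsets, and amplitudes below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

t = np.arange(6) * 1.6e-3                 # 6 echo times (s), hypothetical
f_fat = np.array([-420.0, -318.0, 94.0])  # toy fat peak offsets (Hz)
w_fat = np.array([0.70, 0.20, 0.10])      # toy relative peak amplitudes

def model_matrix(freqs, amps):
    # columns: water basis (constant) and fat basis (weighted multi-peak sum)
    fat = (amps * np.exp(2j * np.pi * freqs * t[:, None])).sum(axis=1)
    return np.column_stack([np.ones(len(t), dtype=complex), fat])

def fit_residual(A, y):
    # squared norm of the least-squares fitting residual
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return np.vdot(r, r).real

A_multi = model_matrix(f_fat, w_fat)                          # multi-peak fat model
A_single = model_matrix(np.array([-420.0]), np.array([1.0]))  # single-peak model

y = A_multi @ np.array([0.1, 0.9])  # a fat-dominant voxel's echo signal
fat_likely = fit_residual(A_multi, y) < fit_residual(A_single, y)
```

A map of such per-pixel comparisons is what the abstract calls the fat likelihood map; the published algorithm additionally combines it with field-map smoothness.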
Shakedown modeling of unsaturated expansive soils subjected to wetting and drying cycles
Directory of Open Access Journals (Sweden)
Nowamooz Hossein
2016-01-01
It is important to model the behavior of unsaturated expansive soils subjected to wetting and drying cycles, because such cycles significantly alter the soils' hydro-mechanical behavior and can therefore cause large differential settlements of shallow foundations. A simplified model based on shakedown theory (the Zarka method) has been developed in this study for unsaturated expansive soils subjected to wetting and drying cycles. The method determines the stabilized limit state directly and consequently saves calculation time. The parameters of the proposed shakedown-based model are calibrated with suction-controlled oedometer tests on an expansive soil compacted at loose and dense initial states, and then validated for the same soil compacted at an intermediate initial state by comparing model predictions with experimental results. Finally, the finite element equations for the proposed shakedown model are developed and implemented in the finite element code CAST3M to carry out full-scale calculations. A 2D geometry made up of the expansive soil compacted at the intermediate state is subjected to successive extremely dry and wet seasons under different applied vertical loads. The results show swelling plastic deformations at lower vertical stresses and shrinkage deformations at higher vertical stresses.
Gerpott, Fabiola H; Balliet, Daniel; Columbus, Simon; Molho, Catherine; de Vries, Reinout E
2017-09-04
Interdependence is a fundamental characteristic of social interactions. Interdependence Theory states that 6 dimensions describe differences between social situations. Here we examine if these 6 dimensions describe how people think about their interdependence with others in a situation. We find that people (in situ and ex situ) can reliably differentiate situations according to 5, but not 6, dimensions of interdependence: (a) mutual dependence, (b) power, (c) conflict, (d) future interdependence, and (e) information certainty. This model offers a unique framework for understanding how people think about social situations compared to another recent model of situation construal (DIAMONDS). Furthermore, we examine factors that are theorized to shape perceptions of interdependence, such as situational cues (e.g., nonverbal behavior) and personality (e.g., HEXACO and Social Value Orientation). We also study the implications of subjective interdependence for emotions and cooperative behavior during social interactions. This model of subjective interdependence explains substantial variation in the emotions people experience in situations (i.e., happiness, sadness, anger, and disgust), and explains 24% of the variance in cooperation, above and beyond the DIAMONDS model. Throughout these studies, we develop and validate a multidimensional measure of subjective outcome interdependence that can be used in diverse situations and relationships-the Situational Interdependence Scale (SIS). We discuss how this model of interdependence can be used to better understand how people think about social situations encountered in close relationships, organizations, and society. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Quantification of Subjective Scaling of Friction Using a Fingertip Biomechanical Model
Directory of Open Access Journals (Sweden)
Mohammad Abdolvahab
2012-01-01
Subjective scaling of friction is important in many applications in haptic technology. A nonhomogeneous biomechanical finite element model of the fingertip is proposed in order to predict the neural response of sensitive mechanoreceptors (slowly adapting SAII receptors) under the glabrous skin to frictional stimuli. In a guided psychophysical experiment, ten human subjects were asked to scale several standard surfaces based on the perception of their frictional properties. Contact forces deployed during the exploratory time of one of the participants were captured in order to estimate the parameters required for the contact model in the simulation procedure. The strain energy density at the location of a selected mechanoreceptor in the finite element model, taken as a measure of the discharge rate of the neural unit, was then compared to the subject's perceptual evaluation of the relevant stimuli. It was observed that the subject's scores correlate with the discharge rate of the given receptor.
Mantel, Claire; Bech, Søren; Korhonen, Jari; Forchhammer, Søren; Pedersen, Jesper Melgaard
2015-02-01
Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight-dimming algorithms is set up. Subjective results are then compared with both objective measures and objective quality metrics using different display models. The first analysis indicates that the most significant objective features are temporal variations, power consumption (probably representing leakage), and a contrast measure. The second analysis shows that modeling of leakage is necessary for objective quality assessment of sequences displayed with local backlight dimming.
A Longitudinal Item Response Theory Model to Characterize Cognition Over Time in Elderly Subjects
Bornkamp, Björn; Krahnke, Tillmann; Mielke, Johanna; Monsch, Andreas; Quarg, Peter
2017-01-01
For drug development in neurodegenerative diseases such as Alzheimer's disease, it is important to understand which cognitive domains carry the most information on the earliest signs of cognitive decline, and which subject characteristics are associated with a faster decline. A longitudinal Item Response Theory (IRT) model was developed for the Basel Study on the Elderly, in which the Consortium to Establish a Registry for Alzheimer's Disease – Neuropsychological Assessment Battery (with additions) and the California Verbal Learning Test were measured on 1,750 elderly subjects for up to 13.9 years. The model jointly captured the multifaceted nature of cognition and its longitudinal trajectory. The word list learning and delayed recall tasks carried the most information. Greater age at baseline, fewer years of education, and positive APOEɛ4 carrier status were associated with a faster cognitive decline. Longitudinal IRT modeling is a powerful approach for progressive diseases with multifaceted endpoints. PMID:28643388
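The notion of which tasks "carry the most information" can be made concrete with the standard two-parameter logistic (2PL) item model, where an item's Fisher information peaks at the ability level matching its difficulty. The parameter values below are illustrative, not estimates from the study:

```python
import math

def irt_2pl(theta, a, b):
    # probability of a correct response given ability theta,
    # discrimination a, and difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item: a^2 * p * (1 - p)
    p = irt_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# a discriminating recall-type item (a=2.0) centered at mild impairment (b=-0.5)
info_at_mild = item_information(-0.5, 2.0, -0.5)
info_at_healthy = item_information(2.0, 2.0, -0.5)
```

An item is thus most informative about subjects whose ability sits near its difficulty, which is why word-list learning and delayed recall tasks can dominate the information about early decline.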
Krause, Andreas; Dingemanse, Jasper; Mathis, Alexandre; Marquart, Louise; Möhrle, Jörg J; McCarthy, James S
2016-08-01
The aim of this study was to use data from an experimentally induced blood-stage malaria clinical trial to characterize the antimalarial activity of the new compound Actelion-451840 using pharmacokinetic/pharmacodynamic (PK/PD) modelling. Simulations from the model were then used to derive the dose and dosing regimen necessary to achieve cure of infection. Eight healthy male subjects were infected with blood-stage P. falciparum. After 7 days, a single dose of 500 mg of Actelion-451840 was administered under fed conditions. Parasite and drug concentrations were sampled frequently. Parasite growth and its relation to drug exposure were estimated using PK/PD modelling. Simulations were then undertaken to estimate the likelihood of achieving cure in different scenarios. Actelion-451840 was safe and well tolerated. Single-dose treatment markedly reduced the level of P. falciparum parasitaemia, with a weighted average parasite reduction rate of 73.6 (95% CI 56.1, 96.5) and a parasite clearance half-life of 7.7 h (95% CI 7.3, 8.3). A two-compartment PK/PD model with a steep concentration-kill effect predicted maximum effect with a sustained concentration of 10-15 ng ml(-1) and cure in 90% of subjects with six once-daily doses of 300 mg. Actelion-451840 shows clinical efficacy against P. falciparum infections. The PK/PD model developed from a single proof-of-concept study with eight healthy subjects enabled prediction of therapeutic effects, with cure rates with seven daily doses predicted to be equivalent to artesunate monotherapy. Larger doses or more frequent dosing are not predicted to achieve more rapid cure. © 2016 The British Pharmacological Society.
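The reported parasite reduction rate and clearance half-life are mutually consistent under simple exponential clearance, assuming the conventional 48-hour assessment window for the parasite reduction ratio (the window is an assumption here, as it is not stated in the abstract):

```python
import math

t_half = 7.7    # parasite clearance half-life (h), as reported
window = 48.0   # conventional PRR assessment window (h), assumed

prr = 2.0 ** (window / t_half)      # ≈ 75.3, near the reported 73.6
prr_lo = 2.0 ** (window / 8.3)      # ≈ 55.1, near the reported lower CI 56.1
prr_hi = 2.0 ** (window / 7.3)      # ≈ 95.4, near the reported upper CI 96.5
```

The near-agreement between the half-life confidence interval and the PRR confidence interval supports reading the two reported statistics as two views of the same exponential clearance.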
On the maximum likelihood method for estimating molecular trees: uniqueness of the likelihood point.
Fukami, K; Tateno, Y
1989-05-01
Studies are carried out on the uniqueness of the stationary point on the likelihood function for estimating molecular phylogenetic trees, yielding proof that there exists at most one stationary point, i.e., the maximum point, in the parameter range for the one parameter model of nucleotide substitution. The proof is simple yet applicable to any type of tree topology with an arbitrary number of operational taxonomic units (OTUs). The proof ensures that any valid approximation algorithm be able to reach the unique maximum point under the conditions mentioned above. An algorithm developed incorporating Newton's approximation method is then compared with the conventional one by means of computer simulation. The results show that the newly developed algorithm always requires less CPU time than the conventional one, whereas both algorithms lead to identical molecular phylogenetic trees in accordance with the proof.
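For the one-parameter (Jukes-Cantor) substitution model on a single pairwise comparison, Newton's method can be sketched directly on the log-likelihood; because the stationary point is unique, there is a closed-form MLE to check against. The finite-difference derivatives and starting value below are illustrative choices, not the paper's algorithm:

```python
import math

def loglik(d, k, n):
    # Jukes-Cantor: probability that two aligned sites differ at distance d
    p = 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def newton_mle(k, n, d=0.1, h=1e-5, tol=1e-9):
    # Newton iteration on l'(d) = 0 with finite-difference derivatives
    for _ in range(100):
        l1 = (loglik(d + h, k, n) - loglik(d - h, k, n)) / (2.0 * h)
        l2 = (loglik(d + h, k, n) - 2.0 * loglik(d, k, n)
              + loglik(d - h, k, n)) / (h * h)
        step = l1 / l2
        d -= step
        if abs(step) < tol:
            break
    return d

k, n = 30, 200                # 30 differing sites out of 200 (toy data)
d_newton = newton_mle(k, n)
d_closed = -0.75 * math.log(1.0 - 4.0 * k / (3.0 * n))  # closed-form MLE
```

Because the likelihood has a single stationary point, the Newton iterate converges to the same estimate as the closed form regardless of a reasonable starting value, which is exactly the guarantee the paper's proof provides for approximation algorithms.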
Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.
Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C
2014-05-01
Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
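The Hammerstein-Wiener structure (static input nonlinearity, then a linear dynamic block, then a static output nonlinearity) can be sketched with toy components; the nonlinearities and filter coefficients below are invented for illustration and are not the fitted TVSQ model:

```python
import numpy as np

def hammerstein_wiener(u, f_in, b, a, g_out):
    # static nonlinearity -> linear IIR filter -> static nonlinearity
    x = f_in(u)
    y = np.zeros_like(x)
    for t in range(len(x)):
        acc = sum(b[i] * x[t - i] for i in range(len(b)) if t - i >= 0)
        acc -= sum(a[j] * y[t - j] for j in range(1, len(a)) if t - j >= 0)
        y[t] = acc / a[0]
    return g_out(y)

f_in = lambda u: np.clip(u, 0.0, 1.0)        # saturating input nonlinearity
b, a = [0.2], [1.0, -0.8]                    # first-order smoothing (memory/hysteresis)
g_out = lambda y: 100.0 / (1.0 + np.exp(-8.0 * (y - 0.5)))  # map to a 0-100 score

u = np.concatenate([np.full(50, 0.9), np.full(50, 0.3)])  # quality drop mid-stream
tvsq = hammerstein_wiener(u, f_in, b, a, g_out)
# the predicted score decays gradually after the drop rather than stepping down
```

The linear block's memory is what lets such a model express the hysteresis effects mentioned in the abstract: the predicted score after a rate drop depends on the recent history, not only on the current quality.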
Directory of Open Access Journals (Sweden)
Anatoly N. Vetrov
2017-01-01
Objectives: To increase the functional efficiency of information and educational environments created by automated training systems by realising individually oriented formation of knowledge, using adaptive generation of heterogeneous educational influences based on an innovative block of parametric cognitive models and a set of programs supporting the automation of research tasks. Method: System analysis and modeling of the information and educational environment. In automating the diagnosis of the individual personality characteristics of the subject of education, each research method determines the input: localisation of the research method, name of the block of questions (subtest), textual explanatory content, formulation of the question and answer variants, the nominal time interval for displaying the question, and the graphical accompaniment of a specific question and its answers. Results: The applied diagnostic module acts as a component of the automated learning system with adaptation properties, based on the innovative block of parametric cognitive models. The training system generates an ordered sequence of informational and educational influences that reflect the content of the subject of study. Conclusion: The applied diagnostic module is designed to automate the study of physiological, psychological and linguistic parameters of the cognitive model of the subject of education, providing a systematic analysis of the information and educational environment and adaptive generation of educational influences through training automation approaches that take the individual characteristics of trainees into account.
Composite likelihood and two-stage estimation in family studies
DEFF Research Database (Denmark)
Andersen, Elisabeth Anne Wreford
2004-01-01
In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, combining the approaches of Parner (2001) and Andersen (2003). The method is mainly studied when the families consist of groups of exchangeable members (e.g. siblings) or members at different levels (e.g. parents and children). The advantages of the proposed method are especially clear in this last case, where very flexible modelling is possible. The suggested method is also studied in simulations and found to be efficient compared to maximum likelihood. Finally, the suggested method is applied to a family study of deep venous thromboembolism where it is seen that the association between ages...
Maximum likelihood method and Fisher's information in physics and econophysics
Syska, Jacek
2012-01-01
Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and the Fisher information notion in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated by examples of the estimation of exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the field theory models cl...
Statistical damage constitutive model for rocks subjected to cyclic stress and cyclic temperature
Zhou, Shu-Wei; Xia, Cai-Chu; Zhao, Hai-Bin; Mei, Song-Hua; Zhou, Yu
2017-10-01
A constitutive model of rocks subjected to cyclic stress-temperature was proposed. Based on statistical damage theory, the damage constitutive model with Weibull distribution was extended. Influence of model parameters on the stress-strain curve for rock reloading after stress-temperature cycling was then discussed. The proposed model was initially validated by rock tests for cyclic stress-temperature and only cyclic stress. Finally, the total damage evolution induced by stress-temperature cycling and reloading after cycling was explored and discussed. The proposed constitutive model is reasonable and applicable, describing well the stress-strain relationship during stress-temperature cycles and providing a good fit to the test results. Elastic modulus in the reference state and the damage induced by cycling affect the shape of reloading stress-strain curve. Total damage induced by cycling and reloading after cycling exhibits three stages: initial slow increase, mid-term accelerated increase, and final slow increase.
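A minimal version of a Weibull statistical damage law can be sketched as follows; the modulus and Weibull parameters are invented for illustration, and the confinement and thermal-cycling terms of the full model are omitted:

```python
import numpy as np

def damage(strain, m, f0):
    # Weibull-distributed micro-element strength: D = 1 - exp(-(eps/f0)^m)
    return 1.0 - np.exp(-((strain / f0) ** m))

def axial_stress(strain, e_mod, m, f0):
    # effective-stress form: sigma = E * (1 - D) * eps
    return e_mod * (1.0 - damage(strain, m, f0)) * strain

eps = np.linspace(0.0, 0.02, 200)
sig = axial_stress(eps, e_mod=20e3, m=3.0, f0=0.008)  # E in MPa; m, f0 assumed
# damage grows monotonically from 0 toward 1, so the stress-strain curve
# rises, peaks, and then softens
```

In the paper's setting, the damage accumulated during stress-temperature cycling would enter as an initial damage that rescales the reloading curve, which is why cycling-induced damage and the reference-state modulus shape the reloading stress-strain response.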
Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.
Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng
2017-04-17
Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition, which hampers physicians' interpretation of the images and decreases the accuracy of clinical diagnosis. Denoising is therefore an important component of enhancing ultrasound image quality, but current methods are limited: they remove noise while ignoring the statistical characteristics of speckle, undermining despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle, or edges before filtering, and thus blur edge details while reducing noise and speckle. Effectively removing noise and speckle while preserving edge details is consequently a challenging issue for traditional methods. To overcome these limitations, a novel method, the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: detection of noise, speckle, and edges, followed by filtering. First, a sorted quadrant median vector scheme is used to calculate the reference median in a filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle, or noise-free. Subsequently, noise is removed by a bilateral filter and speckle is suppressed by a Rayleigh-maximum-likelihood filter, while noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model subject to a Rayleigh distribution. The corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal to noise ratio (SNR) levels and
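The detect-then-filter idea can be illustrated with a much-simplified pixel classifier (the published method uses a sorted quadrant median vector and Rayleigh statistics; the plain median-deviation rule and thresholds below are simplifying assumptions):

```python
import numpy as np

def classify_pixel(window, t_noise=50.0, t_speckle=15.0):
    # compare the center pixel with the median of its neighbors and label it
    # noise, speckle, or noise-free; filtering then switches on this label
    # (bilateral for noise, Rayleigh-ML for speckle, identity otherwise)
    flat = window.astype(float).ravel()
    center = flat[flat.size // 2]
    med = np.median(np.delete(flat, flat.size // 2))
    dev = abs(center - med)
    if dev > t_noise:
        return "noise"
    if dev > t_speckle:
        return "speckle"
    return "noise-free"

clean = np.full((3, 3), 100.0)
impulse = clean.copy(); impulse[1, 1] = 255.0   # impulse-noise-like center
speckled = clean.copy(); speckled[1, 1] = 125.0 # moderate speckle-like deviation
```

Switching the filter on such a label is what lets the method treat noise, speckle, and noise-free pixels differently instead of smoothing everything uniformly.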
Subjective modelling of supply and demand—the minimum of Fisher information solution
Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek
2010-11-01
Two of the present authors have put forward a projective geometry based model of rational trading that implies a model for subjective demand/supply profiles if one considers closing of a position as a random process. We would like to present an analysis of subjectivity in such trading models. In our model, the trader gets the maximal profit intensity when the probability of transaction is ∼0.5853. We also present a comparison with the model based on the Maximum Entropy Principle. To the best of our knowledge, this is one of the first analyses to show a concrete situation in which the trader's profit-optimal value lies in the class of price-negotiating algorithms (strategies) resulting in non-monotonic demand (supply) curves of the Rest of the World (a collective opponent). Our model suggests that there might be a new class of rational trader strategies that (almost) neglects the supply-demand profile of the market. This class emerges when one tries to minimize the information that strategies reveal.
Directory of Open Access Journals (Sweden)
Rambiritch V
2016-07-01
Virendra Rambiritch (University of KwaZulu-Natal, Durban), Poobalan Naidoo (Department of Internal Medicine, RK Khan Regional Hospital, Chatsworth, South Africa), Breminand Maharaj (University of KwaZulu-Natal, Durban), Goonaseelan Pillai (Novartis Pharma AG, Basel, Switzerland). Aim: The aim of this study was to describe the pharmacokinetics (PK) of glibenclamide in poorly controlled South African type 2 diabetic subjects using noncompartmental and model-based methods. Methods: A total of 24 subjects with type 2 diabetes were administered increasing daily doses of glibenclamide (0 mg/d, 2.5 mg/d, 5 mg/d, 10 mg/d, and 20 mg/d) at 2-week intervals. Plasma glibenclamide, glucose, and insulin determinations were performed. Blood sampling times were 0, 30, 60, 90, and 120 minutes (post-breakfast sampling) and 240, 270, 300, 330, 360, and 420 minutes (post-lunch sampling) on days 14, 28, 42, 56, and 70 for doses of 0 mg, 2.5 mg, 5.0 mg, 10 mg, and 20 mg, respectively. Blood sampling was performed after steady state was reached. The 24 individuals in the data set contributed a total of 841 observation records. The PK was analyzed using noncompartmental analysis methods implemented in WinNonlin®, and population PK analysis using NONMEM®. Glibenclamide concentration data were log-transformed prior to fitting. Results: A two-compartment disposition model was selected after evaluating one-, two-, and three-compartment models to describe the time course of glibenclamide plasma concentrations. The one-compartment model adequately described the data; however, the two-compartment model provided a better fit. The three-compartment model failed to achieve successful convergence, and a more complex model accounting for the enterohepatic recirculation observed in the data was unsuccessful. Conclusion: In South African diabetic subjects, glibenclamide demonstrates linear PK and was best
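The noncompartmental side of such an analysis rests largely on simple quantities like Cmax, Tmax, and the trapezoidal AUC, which can be computed directly from the sampling schedule described above (the concentration values below are invented for illustration):

```python
import numpy as np

# sampling times from the study design (min) and hypothetical
# glibenclamide concentrations (ng/mL)
t = np.array([0., 30., 60., 90., 120., 240., 270., 300., 330., 360., 420.])
c = np.array([5., 80., 140., 120., 95., 60., 55., 50., 45., 40., 30.])

cmax = c.max()                                     # peak concentration
tmax = t[c.argmax()]                               # time of the peak
auc = (0.5 * (c[1:] + c[:-1]) * np.diff(t)).sum()  # linear trapezoidal AUC
```

Dose-proportional AUC and Cmax across the escalating dose levels is what the conclusion's "linear PK" refers to.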
Applied mathematical problem solving, modelling, applications, and links to other subjects
Blum, Werner; Niss, Mogens
1991-01-01
The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of development, both...
In-human subject-specific evaluation of a control-theoretic plasma volume regulation model.
Bighamian, Ramin; Kinsky, Michael; Kramer, George; Hahn, Jin-Oh
2017-12-01
The goal of this study was to conduct a subject-specific evaluation of a control-theoretic plasma volume regulation model in humans. We employed a set of clinical data collected from nine human subjects receiving fluid bolus with and without co-administration of an inotrope agent, including fluid infusion rate, plasma volume, and urine output. Once fitted to the data associated with each subject, the model accurately reproduced the fractional plasma volume change responses in all subjects: the error between actual versus model-reproduced fractional plasma volume change responses was only 1.4 ± 1.6% and 1.2 ± 0.3% of the average fractional plasma volume change responses in the absence and presence of inotrope co-administration. In addition, the model parameters determined by the subject-specific fitting assumed physiologically plausible values: (i) initial plasma volume was estimated to be 36 ± 11 mL/kg and 37 ± 10 mL/kg in the absence and presence of inotrope infusion, respectively, which was comparable to its actual counterpart of 37 ± 4 mL/kg and 43 ± 6 mL/kg; (ii) volume distribution ratio, specifying the ratio with which the inputted fluid is distributed in the intra- and extra-vascular spaces, was estimated to be 3.5 ± 2.4 and 1.9 ± 0.5 in the absence and presence of inotrope infusion, respectively, which accorded with the experimental observation that inotrope could enhance plasma volume expansion in response to fluid infusion. We concluded that the model was equipped with the ability to reproduce plasma volume response to fluid infusion in humans with physiologically plausible model parameters, and its validity may persist even under co-administration of inotropic agents. Copyright © 2017 Elsevier Ltd. All rights reserved.
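The role of the volume distribution ratio can be illustrated with a steady-state simplification (the bolus size and body weight below are hypothetical, and the actual model is dynamic; this only shows why a smaller ratio means greater plasma volume expansion):

```python
def fractional_expansion(bolus_ml, v0_ml, k):
    # of each mL infused, roughly 1/(1+k) stays intravascular, where k is
    # the extra- to intravascular volume distribution ratio
    return (bolus_ml / (1.0 + k)) / v0_ml

v0 = 36.0 * 70.0   # ~36 mL/kg plasma volume x 70 kg subject (assumed)
bolus = 500.0      # mL, hypothetical

no_inotrope = fractional_expansion(bolus, v0, k=3.5)    # reported mean ratio
with_inotrope = fractional_expansion(bolus, v0, k=1.9)  # reported mean ratio
# the smaller distribution ratio under inotrope co-administration yields
# a larger fractional plasma volume expansion
```

This matches the abstract's observation that the estimated ratios (3.5 without vs. 1.9 with inotrope) accord with inotrope-enhanced plasma volume expansion.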
Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi
2016-12-01
Ultrasound fusion imaging is an emerging tool and benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps included: 1) Build a subject-specific liver motion model for current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then using an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the error correction dependency on the initial registration of ultrasound and MR images. Overall, the proposed method can improve the clinical practicability of
High-order Composite Likelihood Inference for Max-Stable Distributions and Processes
Castruccio, Stefano
2015-09-29
In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
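The pairwise composite-likelihood idea described above can be sketched in a few lines. The toy below is a hypothetical stand-in, not one of the max-stable models studied in the paper: it estimates the common correlation of an equicorrelated trivariate Gaussian by maximizing the sum of bivariate log-likelihoods over all coordinate pairs.

```python
import math
import random

def bivariate_normal_loglik(x, y, rho):
    """Log-density of a standard bivariate normal with correlation rho."""
    q = (x * x - 2.0 * rho * x * y + y * y) / (1.0 - rho * rho)
    return -math.log(2.0 * math.pi) - 0.5 * math.log(1.0 - rho * rho) - 0.5 * q

def pairwise_loglik(data, rho):
    """Sum of bivariate log-likelihoods over all coordinate pairs (the
    pairwise composite likelihood)."""
    total = 0.0
    for row in data:
        d = len(row)
        for i in range(d):
            for j in range(i + 1, d):
                total += bivariate_normal_loglik(row[i], row[j], rho)
    return total

def simulate(n, d, rho, rng):
    """Equicorrelated standard normals: X_k = sqrt(rho)*Z + sqrt(1-rho)*E_k."""
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        out.append([a * z + b * rng.gauss(0.0, 1.0) for _ in range(d)])
    return out

rng = random.Random(42)
data = simulate(2000, 3, 0.5, rng)
# Grid search: maximize the pairwise composite log-likelihood over rho.
grid = [i / 100.0 for i in range(1, 100)]
rho_hat = max(grid, key=lambda r: pairwise_loglik(data, r))
print(round(rho_hat, 2))
```

For a Gaussian model the full likelihood is also tractable, which is what makes the efficiency comparison in the paper possible; for max-stable processes the full likelihood quickly becomes intractable, which is the motivation for truncating to low-order composite terms.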
Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB
Millar, Russell B
2011-01-01
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis
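As a minimal stdlib-only illustration of the numerical maximum likelihood workflow the book covers (its own examples use R, SAS, and ADMB), the sketch below fits an exponential rate by golden-section search on the log-likelihood and checks the answer against the closed-form estimate, the reciprocal of the sample mean.

```python
import math
import random

def exp_loglik(rate, data):
    """Exponential log-likelihood: n*log(rate) - rate*sum(x)."""
    return len(data) * math.log(rate) - rate * sum(data)

def mle_rate(data, lo=1e-6, hi=100.0, iters=200):
    """Golden-section search for the rate maximizing the log-likelihood
    (concave in the rate, so a unimodal search suffices)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if exp_loglik(c, data) > exp_loglik(d, data):
            b = d
        else:
            a = c
    return (a + b) / 2.0

rng = random.Random(7)
data = [rng.expovariate(2.5) for _ in range(5000)]
rate_numeric = mle_rate(data)
rate_closed = len(data) / sum(data)   # analytic MLE: 1 / sample mean
print(rate_numeric, rate_closed)
```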
Rambiritch, Virendra; Naidoo, Poobalan; Maharaj, Breminand; Pillai, Goonaseelan
2016-01-01
The aim of this study was to describe the pharmacokinetics (PK) of glibenclamide in poorly controlled South African type 2 diabetic subjects using noncompartmental and model-based methods. A total of 24 subjects with type 2 diabetes were administered increasing doses (0 mg/d, 2.5 mg/d, 5 mg/d, 10 mg/d, and 20 mg/d) of glibenclamide daily at 2-week intervals. Plasma glibenclamide, glucose, and insulin determinations were performed. Blood sampling times were 0 minute, 30 minutes, 60 minutes, 90 minutes, and 120 minutes (post breakfast sampling) and 240 minutes, 270 minutes, 300 minutes, 330 minutes, 360 minutes, and 420 minutes (post lunch sampling) on days 14, 28, 42, 56, and 70 for doses of 0 mg, 2.5 mg, 5.0 mg, 10 mg, and 20 mg, respectively. Blood sampling was performed after the steady state was reached. A total of 24 individuals in the data set contributed to a total of 841 observation records. The PK was analyzed using noncompartmental analysis methods, which were implemented in WinNonLin(®), and population PK analysis using NONMEM(®). Glibenclamide concentration data were log transformed prior to fitting. A two-compartmental disposition model was selected after evaluating one-, two-, and three-compartmental models to describe the time course of glibenclamide plasma concentration data. The one-compartment model adequately described the data; however, the two-compartment model provided a better fit. The three-compartment model failed to achieve successful convergence. A more complex model, to account for enterohepatic recirculation that was observed in the data, was unsuccessful. In South African diabetic subjects, glibenclamide demonstrates linear PK and was best described by a two-compartmental model. Except for the absorption rate constant, the other PK parameters reported in this study are comparable to those reported in the scientific literature. The study is limited by the small study sample size and inclusion of poorly controlled type 2 diabetic
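A two-compartment disposition model with first-order absorption, of the kind selected above, can be sketched with simple Euler integration. All parameter values below are illustrative assumptions, not the fitted glibenclamide estimates.

```python
def simulate_two_compartment(dose, ka, k10, k12, k21, vc, t_end=24.0, dt=0.01):
    """Euler integration of a two-compartment PK model with first-order
    absorption.  a_g: amount at absorption site; a_c: central compartment;
    a_p: peripheral compartment.  Returns (times, central concentration)."""
    a_g, a_c, a_p = dose, 0.0, 0.0
    times, conc = [], []
    t = 0.0
    while t <= t_end:
        times.append(t)
        conc.append(a_c / vc)
        da_g = -ka * a_g
        da_c = ka * a_g - (k10 + k12) * a_c + k21 * a_p
        da_p = k12 * a_c - k21 * a_p
        a_g += da_g * dt
        a_c += da_c * dt
        a_p += da_p * dt
        t += dt
    return times, conc

# Illustrative (not fitted) parameters: 5 mg dose, rate constants in 1/h,
# central volume in L.
times, conc = simulate_two_compartment(dose=5.0, ka=1.2, k10=0.3,
                                       k12=0.2, k21=0.15, vc=10.0)
peak = max(conc)
print(round(peak, 4), round(conc[-1], 4))
```

The central concentration rises while absorption dominates and then declines as elimination and peripheral distribution take over; fitting such a model, as in the study, means adjusting the rate constants so this curve matches observed plasma concentrations.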
Gentz, Steven J.; Ordway, David O; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approx. 9 inches from the source) dominated by direct wave propagation, mid-field environment (approx. 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This report documents the outcome of the assessment.
Gentz, Steven J.; Ordway, David O.; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approx. 9 inches from the source) dominated by direct wave propagation, mid-field environment (approx. 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This document contains appendices to the Volume I report.
Messineo, Ludovico; Taranto-Montemurro, Luigi; Sands, Scott A; Oliveira Marques, Melania D; Azabarzin, Ali; Wellman, David Andrew
2017-01-01
Insomnia is a major public health problem in western countries. Previous small pilot studies showed that the administration of constant white noise can improve sleep quality, increase acoustic arousal threshold, and reduce sleep onset latency. In this randomized controlled trial, we tested the effect of surrounding broadband sound administration on sleep onset latency, sleep architecture, and subjective sleep quality in healthy subjects. Eighteen healthy subjects were studied with two overnight sleep studies approximately one week apart. They were exposed in random order to normal environmental noise (40.1 [1.3] dB) or to broadband sound administration uniformly distributed in the room by two speakers (46.0 [0.9] dB). To model transient insomnia, subjects went to bed ("lights out") 90 min before usual bedtime. Broadband sound administration reduced sleep onset latency to stage 2 sleep (time from lights out to first epoch of non-rapid eye movement-sleep stage 2) (19 [16] vs. 13 [23] min, p = 0.011; median reduction 38% baseline). In a subgroup reporting trouble initiating sleep at home (Pittsburgh Sleep Quality Index section 2 score ≥ 1), sound administration improved subjective sleep quality (p = 0.037) and the frequency of arousals from sleep (p = 0.03). In an experimental model of transient insomnia in young healthy individuals, broadband sound administration significantly reduced sleep onset latency by 38% compared to normal environmental noise. These findings suggest that broadband sound administration might be helpful to minimize insomnia symptoms in selected individuals.
Directory of Open Access Journals (Sweden)
Liangsuo Ma
2015-01-01
Full Text Available Cocaine dependence is associated with increased impulsivity in humans. Both cocaine dependence and impulsive behavior are under the regulatory control of cortico-striatal networks. One behavioral laboratory measure of impulsivity is response inhibition (the ability to withhold a prepotent response), in which altered patterns of regional brain activation during executive tasks in service of normal performance are frequently found in cocaine dependent (CD) subjects studied with functional magnetic resonance imaging (fMRI). However, little is known about aberrations in specific directional neuronal connectivity in CD subjects. The present study employed fMRI-based dynamic causal modeling (DCM) to study the effective (directional) neuronal connectivity associated with response inhibition in CD subjects, elicited under performance of a Go/NoGo task with two levels of NoGo difficulty (Easy and Hard). The performance on the Go/NoGo task was not significantly different between CD subjects and controls. The DCM analysis revealed that prefrontal–striatal connectivity was modulated (influenced) during the NoGo conditions for both groups. The effective connectivity from left (L) anterior cingulate cortex (ACC) to L caudate was similarly modulated during the Easy NoGo condition for both groups. During the Hard NoGo condition in controls, the effective connectivity from right (R) dorsolateral prefrontal cortex (DLPFC) to L caudate became more positive, and the effective connectivity from R ventrolateral prefrontal cortex (VLPFC) to L caudate became more negative. In CD subjects, the effective connectivity from L ACC to L caudate became more negative during the Hard NoGo conditions. These results indicate that during Hard NoGo trials in CD subjects, the ACC rather than DLPFC or VLPFC influenced the caudate during response inhibition.
Modal analysis of human body vibration model for Indian subjects under sitting posture.
Singh, Ishbir; Nigam, S P; Saran, V H
2015-01-01
Need and importance of modelling in human body vibration research studies are well established. The study of biodynamic responses of human beings can be classified into experimental and analytical methods. In the past few decades, plenty of mathematical models have been developed based on diverse field measurements to describe the biodynamic responses of human beings. In this paper, a complete study on a lumped parameter model derived from 50th percentile anthropometric data for a seated 54-kg Indian male subject without backrest support under free undamped conditions has been carried out, considering human body segments to be of ellipsoidal shape. Conventional lumped parameter modelling considers the human body as several rigid masses interconnected by springs and dampers. In this study, the concept of the mass of the interconnecting springs has been incorporated, and the eigenvalues thus obtained are found to be closer to the values reported in the literature. The results clearly establish the decoupling of vertical and fore-and-aft oscillations and help in better understanding of possible human responses to single- and multi-axial excitations. The mathematical modelling of human body vibration helps in validating experimental investigations of ride comfort for a seated subject.
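The eigenvalue computation underlying such lumped parameter models can be illustrated with a generic undamped two-degree-of-freedom spring-mass chain; the masses and stiffnesses below are arbitrary assumptions for illustration, not the paper's anthropometric data.

```python
import math

def two_dof_frequencies(m1, m2, k1, k2):
    """Natural frequencies (Hz) of an undamped 2-DOF chain:
    ground --k1-- m1 --k2-- m2.
    Solves det(K - w^2 M) = 0, a quadratic in lambda = w^2:
    m1*m2*lambda^2 - ((k1+k2)*m2 + k2*m1)*lambda + k1*k2 = 0."""
    a = m1 * m2
    b = -((k1 + k2) * m2 + k2 * m1)
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    lam1 = (-b - disc) / (2.0 * a)   # lower eigenvalue (w^2)
    lam2 = (-b + disc) / (2.0 * a)   # higher eigenvalue
    return [math.sqrt(lam) / (2.0 * math.pi) for lam in (lam1, lam2)]

# Illustrative values only (kg, N/m), not the paper's data.
f1, f2 = two_dof_frequencies(m1=36.0, m2=18.0, k1=60000.0, k2=30000.0)
print(round(f1, 2), round(f2, 2))
```

With these made-up values the lower mode lands near 4-6 Hz, the band usually reported for seated whole-body vibration resonance, which is why lumped models of this form are a common starting point.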
CONSTITUTIVE MODEL OF STEEL FIBRE REINFORCED CONCRETE SUBJECTED TO HIGH TEMPERATURES
Directory of Open Access Journals (Sweden)
Lukas Blesak
2016-12-01
Full Text Available Research on structural load-bearing systems exposed to elevated temperatures is an active topic in civil engineering. Carrying out a full-size experiment on a specimen exposed to fire is a challenging task, considering not only the preparation labour but also the necessary costs. Therefore, such experiments are simulated using various software and computational models in order to predict the structural behaviour as accurately as possible. In this paper such a procedure, focusing on software simulation, is described in detail. The proposed constitutive model is based on the stress-strain curve and allows prediction of SFRC material behaviour in bending at ambient and elevated temperatures. The SFRC material is represented by initial linear behaviour, an instantaneous drop of stress after the initial crack occurs, and its consequent specific ductility, which influences the overall behaviour of the modelled specimen under the applied loading. The model is calibrated with the ATENA FEM software using experimental results.
Experiential learning model on entrepreneurship subject for improving students’ soft skills
Directory of Open Access Journals (Sweden)
Lina Rifda Naufalin
2017-01-01
Full Text Available The objective of the research was to improve students' soft skills on the entrepreneurship subject by using an experiential learning model. It was expected that the learning model could upgrade students' soft skills, as indicated by higher confidence, result and job orientation, courage to take risks, leadership, originality, and future orientation. It was a class action research study using Kemmis and Mc Tagart's design model. The research was conducted for two cycles. The subjects of the study were economics education students in 2015/2016. The results showed that the experiential learning model could improve students' soft skills, with increases in the dimensions of confidence (52.1%), result orientation (22.9%), courage to take risks (10.4%), leadership (12.5%), originality (10.4%), and future orientation (18.8%). It could be concluded that the experiential learning model was effective in improving students' soft skills on the entrepreneurship subject, with the dimension of confidence showing the highest rise. Students' soft skills were shaped through continuous stimulus as they got involved in the implementation.
The likelihood for supernova neutrino analyses
Ianni, A; Strumia, A; Torres, F R; Villante, F L; Vissani, F
2009-01-01
We derive the event-by-event likelihood that allows one to extract the complete information contained in the energy, time, and direction of supernova neutrinos, and specify it for the case of SN1987A data. We resolve discrepancies in the previous literature that are numerically relevant already in the concrete case of SN1987A data.
Numerical likelihood analysis of cosmic ray anisotropies
Energy Technology Data Exchange (ETDEWEB)
Carlos Hojvat et al.
2003-07-02
A numerical likelihood approach to the determination of cosmic ray anisotropies is presented which offers many advantages over other approaches. It allows a wide range of statistically meaningful hypotheses to be compared even when full sky coverage is unavailable, can be readily extended in order to include measurement errors, and makes maximum unbiased use of all available information.
Bayesian unit root tests and marginal likelihood
de Vos, A.F.; Francke, M.K.
2008-01-01
Unit root tests based on classical marginal likelihood are practically uniformly most powerful (Francke and de Vos, 2007). Bayesian unit root tests can be constructed that are very similar; however, in the Bayesian analysis the classical size is determined by prior considerations. A fundamental
Maintaining symmetry of simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...
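The symmetry-preserving property of antithetic draws can be seen in a minimal sketch (a loose illustration of the general mechanism, not the paper's mixed-parameter likelihood): pairing every draw z with -z makes the sample mean, an odd moment, vanish exactly, whereas plain draws leave simulation noise.

```python
import random

def antithetic_normal_draws(n_pairs, rng):
    """Generate 2*n_pairs standard-normal draws in antithetic pairs (z, -z)."""
    draws = []
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        draws.append(z)
        draws.append(-z)
    return draws

rng = random.Random(1)
plain = [rng.gauss(0.0, 1.0) for _ in range(1000)]
anti = antithetic_normal_draws(500, rng)

mean_plain = sum(plain) / len(plain)
# Each pair sums to exactly zero, so the antithetic sample mean is exactly 0.
mean_anti = sum(anti) / len(anti)
print(mean_plain, mean_anti)
```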
Predicting the likelihood of purchase by big data
Zhao, P. Y.; Shi, Y. M.
2017-04-01
Big data has changed our way of life and learning; for example, methods of information extraction and analysis, which we usually classify as data mining, have been radically changed. Big data analytics is used to find the possibilities for consumers to purchase a specific product. In this paper, we constructed models for estimating existing products and predicting the likelihood of purchasing new products, and the results indicated that our methods are feasible and effective.
Consumer Likelihood to Purchase Chickens with Novel Production Attributes
Bernard, John C.; Pesek, John D., Jr.; Pan, Xiqian
2007-01-01
Typical supermarket chickens are produced with novel or controversial attributes. This continues despite contrasting growth in consumer interest in organic and natural foods. This study surveyed Delaware consumers' likelihood to purchase chicken given different attributes: free range, given antibiotics, irradiated, fed genetically modified (GM) feed, GM chicken, and price. Examining conjoint analysis data with a heteroskedastic two-limit tobit model, GM chicken and other novel attributes were...
Yu, Lei; Kang, Jian
2009-09-01
This research aims to explore the feasibility of using computer-based models to predict the soundscape quality evaluation of potential users in urban open spaces at the design stage. With the data from large scale field surveys in 19 urban open spaces across Europe and China, the importance of various physical, behavioral, social, demographical, and psychological factors for the soundscape evaluation has been statistically analyzed. Artificial neural network (ANN) models have then been explored at three levels. It has been shown that for both subjective sound level and acoustic comfort evaluation, a general model for all the case study sites is less feasible due to the complex physical and social environments in urban open spaces; models based on individual case study sites perform well but the application range is limited; and specific models for certain types of location/function would be reliable and practical. The performance of acoustic comfort models is considerably better than that of sound level models. Based on the ANN models, soundscape quality maps can be produced and this has been demonstrated with an example.
A simple scoring model for advanced colorectal neoplasm in asymptomatic subjects aged 40-49 years.
Park, Yoo Mi; Kim, Hee Sun; Park, Jae Jun; Baik, Su Jung; Youn, Young Hoon; Kim, Jie-Hyun; Park, Hyojin
2017-01-09
Limited data are available for advanced colorectal neoplasm in asymptomatic individuals aged 40-49 years. We aimed to identify risk factors and develop a simple prediction model for advanced colorectal neoplasm in these persons. Clinical data were collected on 2781 asymptomatic subjects aged 40-49 years who underwent colonoscopy for routine health examination. Subjects were randomly allocated to a development or validation set. Logistic regression analysis was used to determine predictors of advanced colorectal neoplasm. The prevalence of overall and advanced colorectal neoplasm was 20.2% and 2.5%, respectively. Older age (45-49 years), male sex, positive serology of Helicobacter pylori, and high triglyceride and low high-density lipoprotein (HDL) levels were independently associated with an increased risk of advanced colorectal neoplasm. BMI (body mass index) was not significant in multivariable analysis. We developed a simple scoring model for advanced colorectal neoplasm (range 0-9). A cutoff of ≥4 defined 43% of subjects as high risk for advanced colorectal neoplasm (sensitivity, 79%; specificity, 58%; area under the receiver operating characteristic curve = 0.72) in the validation dataset. Older age (45-49 years), male sex, positive serology of H. pylori, high triglyceride level, and low HDL level were identified as independent risk factors for advanced colorectal neoplasm.
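The mechanics of such an additive score with a cutoff can be sketched as follows. The abstract gives the risk factors, the 0-9 range, and the ≥4 cutoff, but not the individual point values, so the weights below are hypothetical placeholders.

```python
def advanced_neoplasm_score(age, male, h_pylori_pos, high_tg, low_hdl):
    """Additive risk score (range 0-9) for advanced colorectal neoplasm.
    Point values are hypothetical: the abstract reports only the factors
    and the score range, not the actual weights."""
    score = 0
    score += 2 if 45 <= age <= 49 else 0   # older age band
    score += 2 if male else 0
    score += 2 if h_pylori_pos else 0      # H. pylori seropositivity
    score += 2 if high_tg else 0           # high triglycerides
    score += 1 if low_hdl else 0           # low HDL cholesterol
    return score

def is_high_risk(score, cutoff=4):
    """The study flagged subjects with score >= 4 as high risk."""
    return score >= cutoff

s = advanced_neoplasm_score(age=47, male=True, h_pylori_pos=False,
                            high_tg=True, low_hdl=False)
print(s, is_high_risk(s))
```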
Directory of Open Access Journals (Sweden)
Driscoll Mark
2011-05-01
Full Text Available Abstract Background The etiology of AIS remains unclear, thus various hypotheses concerning its pathomechanism have been proposed. To date, biomechanical modeling has not been used to thoroughly study the influence of the abnormal growth profile (i.e., the growth rate of the vertebral body during the growth period) on the pathomechanism of curve progression in AIS. This study investigated the hypothesis that AIS progression is associated with the abnormal growth profiles of the anterior column of the spine. Methods A finite element model of the spinal column including growth dynamics was utilized. The initial geometric models were constructed from the bi-planar radiographs of a normal subject. Based on this model, five other geometric models were generated to emulate different coronal and sagittal curves. The detailed modeling integrated vertebral body growth plates and growth modulation spinal biomechanics. Ten years of spinal growth was simulated using AIS and normal growth profiles. Sequential measures of spinal alignments were compared. Results (1) Given the initial lateral deformity, the AIS growth profile induced a significant Cobb angle increase, roughly three to five times larger than measures utilizing a normal growth profile. (2) Lateral deformities were absent in the models containing no initial coronal curvature. (3) The presence of a smaller kyphosis did not produce an increased lateral deformity on its own. (4) A significant reduction of the kyphosis was found in simulation results of AIS but not when using the growth profile of normal subjects. Conclusion Results from this analysis suggest that accelerated growth profiles may encourage supplementary scoliotic progression and, thus, may pose a progressive risk factor.
Subject-specific computational modeling of DBS in the PPTg area
Zitella, Laura M.; Teplitzky, Benjamin A.; Yager, Paul; Hudson, Heather M.; Brintz, Katelynn; Duchin, Yuval; Harel, Noam; Vitek, Jerrold L.; Baker, Kenneth B.; Johnson, Matthew D.
2015-01-01
Deep brain stimulation (DBS) in the pedunculopontine tegmental nucleus (PPTg) has been proposed to alleviate medically intractable gait difficulties associated with Parkinson's disease. Clinical trials have shown somewhat variable outcomes, stemming in part from surgical targeting variability, modulating fiber pathways implicated in side effects, and a general lack of mechanistic understanding of DBS in this brain region. Subject-specific computational models of DBS are a promising tool to investigate the underlying therapy and side effects. In this study, a parkinsonian rhesus macaque was implanted unilaterally with an 8-contact DBS lead in the PPTg region. Fiber tracts adjacent to PPTg, including the oculomotor nerve, central tegmental tract, and superior cerebellar peduncle, were reconstructed from a combination of pre-implant 7T MRI, post-implant CT, and post-mortem histology. These structures were populated with axon models and coupled with a finite element model simulating the voltage distribution in the surrounding neural tissue during stimulation. This study introduces two empirical approaches to evaluate model parameters. First, incremental monopolar cathodic stimulation (20 Hz, 90 μs pulse width) was evaluated for each electrode, during which a right eyelid flutter was observed at the proximal four contacts (−1.0 to −1.4 mA). These current amplitudes followed closely with model predicted activation of the oculomotor nerve when assuming an anisotropic conduction medium. Second, PET imaging was collected OFF-DBS and twice during DBS (two different contacts), which supported the model predicted activation of the central tegmental tract and superior cerebellar peduncle. Together, subject-specific models provide a framework to more precisely predict pathways modulated by DBS. PMID:26236229
Validation of subject-specific cardiovascular system models from porcine measurements.
Revie, James A; Stevenson, David J; Chase, J Geoffrey; Hann, Christopher E; Lambermont, Bernard C; Ghuysen, Alexandre; Kolh, Philippe; Shaw, Geoffrey M; Heldmann, Stefan; Desaive, Thomas
2013-02-01
A previously validated mathematical model of the cardiovascular system (CVS) is made subject-specific using an iterative, proportional gain-based identification method. Prior works utilised a complete set of experimentally measured data that is not clinically typical or applicable. In this paper, parameters are identified using proportional gain-based control and a minimal, clinically available set of measurements. The new method makes use of several intermediary steps through identification of smaller compartmental models of the CVS to reduce the number of parameters identified simultaneously and increase the convergence stability of the method. This new, clinically relevant, minimal measurement approach is validated using a porcine model of acute pulmonary embolism (APE). Trials were performed on five pigs, each inserted with three autologous blood clots of decreasing size over a period of four to five hours. All experiments were reviewed and approved by the Ethics Committee of the Medical Faculty at the University of Liege, Belgium. Continuous aortic and pulmonary artery pressures (P(ao), P(pa)) were measured along with left and right ventricle pressure and volume waveforms. Subject-specific CVS models were identified from global end diastolic volume (GEDV), stroke volume (SV), P(ao), and P(pa) measurements, with the mean volumes and maximum pressures of the left and right ventricles used to verify the accuracy of the fitted models. The inputs (GEDV, SV, P(ao), P(pa)) used in the identification process were matched by the CVS model, and pressures not used to fit the model agreed with experimental measurements to median absolute errors of 4.3% and 4.4%, which are equivalent to the measurement errors of monitoring devices currently used in the ICU (∼5-10%). These results validate the potential for implementing this approach in the intensive care unit. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Mumford, Michael D.; Hester, Kimberly S.; Robledo, Issac C.; Peterson, David R.; Day, Eric A.; Hougen, Dean F.; Barrett, Jamie D.
2012-01-01
Knowledge, or expertise, has been held to contribute to creative problem-solving. In this effort, the relationship of one form of knowledge, mental models, to creative problem-solving was assessed. Undergraduates were asked to solve either a marketing or an education problem calling for creative thought. Prior to generating solutions to these…
Geng, Yuan
2016-11-01
This study investigated the relationship among emotional intelligence, gratitude, and subjective well-being in a sample of university students. A total of 365 undergraduates completed the emotional intelligence scale, the gratitude questionnaire, and the subjective well-being measures. The results of the structural equation model showed that emotional intelligence is positively associated with gratitude and subjective well-being, that gratitude is positively associated with subjective well-being, and that gratitude partially mediates the positive relationship between emotional intelligence and subjective well-being. Bootstrap test results also revealed that emotional intelligence has a significant indirect effect on subjective well-being through gratitude.
Ting, Chih-Chung; Yu, Chia-Chen; Maloney, Laurence T.
2015-01-01
In Bayesian decision theory, knowledge about the probabilities of possible outcomes is captured by a prior distribution and a likelihood function. The prior reflects past knowledge and the likelihood summarizes current sensory information. The two combined (integrated) form a posterior distribution that allows estimation of the probability of different possible outcomes. In this study, we investigated the neural mechanisms underlying Bayesian integration using a novel lottery decision task in which both prior knowledge and likelihood information about reward probability were systematically manipulated on a trial-by-trial basis. Consistent with Bayesian integration, as sample size increased, subjects tended to weigh likelihood information more compared with prior information. Using fMRI in humans, we found that the medial prefrontal cortex (mPFC) correlated with the mean of the posterior distribution, a statistic that reflects the integration of prior knowledge and likelihood of reward probability. Subsequent analysis revealed that both prior and likelihood information were represented in mPFC and that the neural representations of prior and likelihood in mPFC reflected changes in the behaviorally estimated weights assigned to these different sources of information in response to changes in the environment. Together, these results establish the role of mPFC in prior-likelihood integration and highlight its involvement in representing and integrating these distinct sources of information. PMID:25632152
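The prior-likelihood weighting described above can be illustrated with a conjugate Beta-binomial sketch. The numbers here are invented for illustration, not the study's actual task parameters: as the sample size grows, the posterior mean moves away from the prior and toward the proportion implied by the data.

```python
def posterior_mean(a_prior, b_prior, successes, n):
    """Beta(a, b) prior + binomial likelihood -> posterior mean of reward probability."""
    return (a_prior + successes) / (a_prior + b_prior + n)

# Hypothetical prior believing reward probability is 0.8 (Beta(8, 2)),
# while the observed samples suggest 0.2. Larger samples pull the
# posterior toward the likelihood, mirroring the behavioural finding.
for n in (5, 50, 500):
    k = int(0.2 * n)  # observed successes
    print(n, round(posterior_mean(8, 2, k, n), 3))
# -> 5 0.6
# -> 50 0.3
# -> 500 0.212
```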
Modelling the behaviour of composite sandwich structures when subject to air-blast loading
Directory of Open Access Journals (Sweden)
H Arora
2016-09-01
Full Text Available Large-scale glass fibre reinforced polymer (GFRP) and carbon fibre reinforced polymer (CFRP) sandwich structures (1.6 m x 1.3 m) were subjected to explosive air blast (100 kg TNT equivalent) at stand-off distances of 14 m. Digital image correlation (DIC) was used to obtain full-field data for the rear face of each deforming target. A steel plate of comparable mass per unit area was also subjected to the same blast conditions for comparison. The experimental data were then verified with finite element models generated in Abaqus/Explicit. Close agreement was obtained between the numerical and experimental results, confirming that the CFRP panels had a superior blast performance to the GFRP panels. Moreover, all composite targets sustained localised failures (more severe in the GFRP targets) but retained their original shape post blast. The rear skins remained intact for each composite target, with core shear failure present.
Modeling self on others: An import theory of subjectivity and selfhood.
Prinz, Wolfgang
2017-03-01
This paper outlines an Import Theory of subjectivity and selfhood. Import theory claims that subjectivity is initially perceived as a key feature of other minds and is then imported from other minds to one's own mind, whereby it lays the ground for mental selfhood. Import theory builds on perception-production matching, which in turn draws on both representational mechanisms and social practices. Representational mechanisms rely on common coding of perception and production. Social practices rely on action mirroring in dyadic interactions. The interplay between mechanisms and practices gives rise to modeling the self on others. Individuals become intentional agents in virtue of perceiving others mirroring themselves. The outline of the theory is preceded by an introductory section that locates import theory in the broader context of competing approaches, and it is followed by a concluding section that assesses import theory in terms of empirical evidence and explanatory power. Copyright © 2017 Elsevier Inc. All rights reserved.
González-Del Castillo, J; Teja-Marina, J; Candel, F J; Barberán, J; Moreno-Cuervo, A; Chiarella, F; López-González, L; Ramos-Cordero, P; Martín-Sánchez, F J
2017-11-06
Pneumonia is most frequently produced by the microaspiration of flora that colonizes the oropharynx. Etiological diagnosis of pneumonia is infrequent in clinical practice, and empirical treatment must therefore be prescribed. The aims of the present study were to determine the factors associated with oropharynx colonization by uncommon microorganisms (UM) and to develop a predictive model. A cross-sectional study was conducted that included all patients living in one long-term care facility. Demographic, comorbidity, baseline functional status, and clinical data were collected. To determine oropharyngeal colonization, a single pharyngeal sample was obtained from each subject using a cotton swab. A total of 221 subjects were included, with a mean age of 86.27 (SD 8.05) years; 157 (71%) were female. In 32 (14.5%) subjects UM flora was isolated: Gram-negative bacilli in 16 (7.2%) residents and Staphylococcus aureus in 16 (7.2%). The predictive model included the presence of hypertension, neuromuscular disease, Barthel index <90, and use of PEG. The BAHNG score (BArthel, Hypertension, Neuromuscular, Gastrostomy) showed an area under the curve of 0.731 (95% CI 0.643-0.820; p<0.001). We classified patients according to this score as low (0-2 points), intermediate (3-5 points), and high risk (≥6). The probability of UM colonization of the oropharynx based on this classification is 4.1%, 15.8%, and 57.1% for low, intermediate, and high risk, respectively. The BAHNG score could help in the identification of elderly patients at high risk of colonization by UM. In case of pneumonia, evaluating the subject with this score could help inform initial decisions concerning antibiotic treatment.
Modeling of Melting and Resolidification in Domain of Metal Film Subjected to a Laser Pulse
Directory of Open Access Journals (Sweden)
Majchrzak E.
2016-03-01
Full Text Available Thermal processes in a domain of thin metal film subjected to a strong laser pulse are discussed. The heating of the domain considered causes melting and then (after the end of the beam impact) resolidification of the metal's superficial layer. The laser action (a time-dependent bell-type function) is taken into account by the introduction of an internal heat source in the energy equation describing heat transfer in the domain of the metal film. Taking into account the extremely short duration, extreme temperature gradients, and very small geometrical dimensions of the domain considered, the mathematical model of the process is based on the dual phase lag equation supplemented by suitable boundary-initial conditions. To model the phase transitions, an artificial mushy zone is introduced. At the stage of numerical modeling, the Control Volume Method is used. Examples of computations are also presented.
Wei, Xiaoding; de Vaucorbeil, Alban; Tran, Phuong; Espinosa, Horacio D.
2013-06-01
In this study, we developed a finite element fluid-structure interaction model to understand the deformation and failure mechanisms of both monolithic and sandwich composite panels. A new failure criterion that includes strain-rate effects was formulated and implemented to simulate different damage modes in unidirectional glass fiber/matrix composites. The laminate model uses Hashin's fiber failure criterion and a modified Tsai-Wu matrix failure criterion. The composite moduli are degraded using five damage variables, which are updated in the post-failure regime by means of a linear softening law governed by an energy release criterion. A key feature in the formulation is the distinction between fiber rupture and pull-out by introducing a modified fracture toughness, which varies from a fiber tensile toughness to a matrix tensile toughness as a function of the ratio of longitudinal normal stress to effective shear stress. The delamination between laminas is modeled by a strain-rate sensitive cohesive law. In the case of sandwich panels, core compaction is modeled by a crushable foam plasticity model with volumetric hardening and strain-rate sensitivity. These constitutive descriptions were used to predict deformation histories, fiber/matrix damage patterns, and inter-lamina delamination, for both monolithic and sandwich composite panels subjected to underwater blast. The numerical predictions were compared with experimental observations. We demonstrate that the new rate dependent composite damage model captures the spatial distribution and magnitude of damage significantly more accurately than previously developed models.
Tryfonidis, Michail
It has been observed that during orbital spaceflight the absence of gravitation related sensory inputs causes incongruence between the expected and the actual sensory feedback resulting from voluntary movements. This incongruence results in a reinterpretation or neglect of gravity-induced sensory input signals. Over time, new internal models develop, gradually compensating for the loss of spatial reference. The study of adaptation of goal-directed movements is the main focus of this thesis. The hypothesis is that during the adaptive learning process the neural connections behave in ways that can be described by an adaptive control method. The investigation presented in this thesis includes two different sets of experiments. A series of dart throwing experiments took place onboard the space station Mir. Experiments also took place at the Biomechanics lab at MIT, where the subjects performed a series of continuous trajectory tracking movements while a planar robotic manipulandum exerted external torques on the subjects' moving arms. The experimental hypothesis for both experiments is that during the first few trials the subjects will perform poorly trying to follow a prescribed trajectory, or trying to hit a target. A theoretical framework is developed that is a modification of the sliding control method used in robotics. The new control framework is an attempt to explain the adaptive behavior of the subjects. Numerical simulations of the proposed framework are compared with experimental results and predictions from competitive models. The proposed control methodology extends the results of the sliding mode theory to human motor control. The resulting adaptive control model of the motor system is robust to external dynamics, even those of negative gain, uses only position and velocity feedback, and achieves bounded steady-state error without explicit knowledge of the system's nonlinearities. In addition, the experimental and modeling results demonstrate that
Improved Likelihood Function in Particle-based IR Eye Tracking
DEFF Research Database (Denmark)
Satria, R.; Sorensen, J.; Hammoud, R.
2005-01-01
In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using a cascaded classifier trained using AdaBoost and adapted to IR eye images. Experiments show good......
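As a rough sketch of the kind of log likelihood-ratio score such a tracker might evaluate per particle, the following compares Gaussian intensity models for foreground (pupil) and background; the model parameters and pixel values here are invented for illustration and are not taken from the paper.

```python
import numpy as np

def log_likelihood_ratio(pixels, fg, bg):
    """Log ratio of foreground vs background Gaussian intensity models;
    positive values favour the foreground (eye region) hypothesis."""
    (mf, sf), (mb, sb) = fg, bg
    ll_f = -np.log(sf) - 0.5 * ((pixels - mf) / sf) ** 2
    ll_b = -np.log(sb) - 0.5 * ((pixels - mb) / sb) ** 2
    return np.sum(ll_f - ll_b)

# In a bright-dark difference image the pupil is bright, background dark
# (hypothetical means/standard deviations).
fg, bg = (200.0, 20.0), (50.0, 30.0)
patch = np.array([190.0, 210.0, 205.0])   # candidate region intensities
print(log_likelihood_ratio(patch, fg, bg) > 0)  # -> True
```

A particle filter would compute this score for the image patch under each particle's hypothesized eye position and use it (exponentiated) as the particle weight, avoiding any fixed intensity threshold.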
LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints
Swei, Sean S.M.; Ayoubi, Mohammad A.
2017-01-01
This paper presents a study of fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller is demonstrated through its application to the airfoil flutter suppression.
Malherbe, Tiaan K; Hanekom, Tania; Hanekom, Johan J
2013-07-01
This article investigates whether prediction of subject-specific physiological data is viable through an individualised computational model of a cochlear implant. Subject-specific predictions could be particularly useful to assess and quantify the peripheral factors that cause inter-subject variations in perception. The results of such model predictions could potentially be translated to clinical application through optimisation of mapping parameters for individual users, since parameters that affect perception would be reflected in the model structure and parameters. A method to create a subject-specific computational model of a guinea pig with a cochlear implant is presented. The objectives of the study are to develop a method to construct subject-specific models considering translation of the method to in vivo human models and to assess the effectiveness of subject-specific models to predict peripheral neural excitation on subject level. Neural excitation patterns predicted by the model are compared with single-fibre electrically evoked auditory brainstem responses obtained from the inferior colliculus in the same animal. Results indicate that the model can predict threshold frequency location, spatial spread of bipolar and tripolar stimulation and electrode thresholds relative to one another where electrodes are located in different cochlear structures. Absolute thresholds and spatial spread using monopolar stimulation are not predicted accurately. Improvements to the model should address this. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood
Directory of Open Access Journals (Sweden)
Yunquan Song
2014-01-01
Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions of which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions, and at the same time its estimator is as efficient as the empirical likelihood estimator obtained with all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. A real-life example is also presented to illustrate the new methodology.
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters but the underlying distributions are not specified. By employing the empirical likelihood method of Owen (1988), we can obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial dataset shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on observed data only is biased.
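The building block the authors extend is Owen's empirical likelihood for a mean. A minimal sketch of that basic ingredient follows; this is a generic illustration of the profile empirical likelihood ratio (solved through its Lagrange dual by bisection), not the paper's estimator for non-ignorable missing data.

```python
import numpy as np

def el_log_ratio(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen-style).
    Weights w_i = 1 / (n * (1 + lam*(x_i - mu))), with lam solving the
    dual equation sum z_i / (1 + lam*z_i) = 0 for z_i = x_i - mu."""
    z = x - mu
    n = len(z)
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                     # mu outside the convex hull of the data
    lo = (1.0 / n - 1.0) / z.max()        # bracket keeping all 1 + lam*z_i >= 1/n
    hi = (1.0 / n - 1.0) / z.min()
    for _ in range(200):                  # bisection on the (decreasing) dual equation
        lam = 0.5 * (lo + hi)
        if np.sum(z / (1 + lam * z)) > 0:
            lo = lam
        else:
            hi = lam
        if hi - lo < tol:
            break
    w = 1.0 / (n * (1 + lam * z))
    return -2 * np.sum(np.log(n * w))
```

By Wilks-type theory this statistic is asymptotically chi-square with 1 degree of freedom under H0, so e.g. `el_log_ratio(x, mu0) > 3.84` gives an approximate 5% level test without specifying any parametric distribution for the data.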
A likelihood ratio test for genomewide association under genetic heterogeneity*
Qian, Meng; Shao, Yongzhao
2013-01-01
Summary Most existing association tests for genome-wide association studies (GWAS) fail to account for genetic heterogeneity. Zhou and Pan proposed a binomial mixture model based association test to account for possible genetic heterogeneity in case-control studies. The idea is elegant; however, the proposed test requires an EM-type iterative algorithm to identify the penalized maximum likelihood estimates and a permutation method to assess p-values. The intensive computational burden induced by the EM algorithm and the permutation becomes prohibitive for direct application to genome-wide association studies. This paper develops a likelihood ratio test (LRT) for genome-wide association studies under genetic heterogeneity based on a more general alternative mixture model. In particular, a closed-form formula for the likelihood ratio test statistic is derived to avoid EM-type iterative numerical evaluation. Moreover, an explicit asymptotic null distribution is also obtained, which avoids using permutation to obtain p-values. Thus, the proposed LRT is easy to implement for GWAS. Furthermore, numerical studies demonstrate that the LRT has power advantages over the commonly used Armitage trend test and other existing association tests under genetic heterogeneity. A breast cancer GWAS dataset is used to illustrate the newly proposed LRT. PMID:23362943
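The appeal of a closed-form LRT with an explicit null distribution can be seen in the simplest possible setting. The sketch below is a generic binomial LRT with a chi-square(1) reference, illustrating the Wilks-type calibration the authors rely on; it is not the paper's mixture-model statistic, whose null distribution is nonstandard.

```python
from math import log

def binom_lrt(k, n, p0):
    """-2 log likelihood ratio for H0: p = p0 vs H1: p free (binomial data,
    k successes in n trials). Asymptotically chi-square with 1 df under H0."""
    p_hat = k / n
    def ll(p):
        # log-likelihood up to the constant binomial coefficient
        return (k * log(p) if k else 0.0) + ((n - k) * log(1 - p) if n - k else 0.0)
    return 2 * (ll(p_hat) - ll(p0))

stat = binom_lrt(30, 100, 0.5)
# compare against the 5% chi-square(1) critical value, 3.84
print(stat > 3.84)  # -> True
```

Because both the statistic and its reference distribution are available in closed form, each test costs a few arithmetic operations, which is what makes genome-wide application (millions of SNPs) feasible without EM iterations or permutation.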
Composite likelihood method for inferring local pedigrees
DEFF Research Database (Denmark)
Ko, Amy; Nielsen, Rasmus
2017-01-01
Pedigrees contain information about the genealogical relationships among individuals and are of fundamental importance in many areas of genetic studies. However, pedigrees are often unknown and must be inferred from genetic data. Despite the importance of pedigree inference, existing methods...... are limited to inferring only close relationships or analyzing a small number of individuals or loci. We present a simulated annealing method for estimating pedigrees in large samples of otherwise seemingly unrelated individuals using genome-wide SNP data. The method supports complex pedigree structures...... such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can...
Double coupling: modeling subjectivity and asymmetric organization in social-ecological systems
Directory of Open Access Journals (Sweden)
David Manuel-Navarrete
2015-09-01
Full Text Available Social-ecological organization is a multidimensional phenomenon that combines material and symbolic processes. However, the coupling between social and ecological subsystems is often conceptualized as purely material, thus reducing the symbolic dimension to its behavioral and actionable expressions. In this paper I conceptualize social-ecological systems as doubly coupled. On the one hand, material expressions of socio-cultural processes affect and are affected by ecological dynamics. On the other hand, coupled social-ecological material dynamics are concurrently coupled with subjective dynamics via coding, decoding, personal experience, and human agency. This second coupling operates across two organizationally heterogeneous dimensions: material and symbolic. Although resilience thinking builds on the recognition of organizational asymmetry between living and nonliving systems, it has overlooked the equivalent asymmetry between ecological and socio-cultural subsystems. Three guiding concepts are proposed to formalize double coupling. The first one, social-ecological asymmetry, expands on past seminal work on ecological self-organization to incorporate reflexivity and subjectivity in social-ecological modeling. Organizational asymmetry is based on the distinction between social rules, which are symbolically produced and changed through human agents' reflexivity and purpose, and biophysical rules, which are determined by functional relations between ecological components. The second guiding concept, conscious power, brings to the fore human agents' distinctive capacity to produce their own subjective identity and the consequences of this capacity for social-ecological organization. The third concept, congruence between subjective and objective dynamics, redefines sustainability as contingent on congruent relations between material and symbolic processes. Social-ecological theories and analyses based on these three guiding concepts would support the
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
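The core idea, replacing the inverted experimental covariance with Monte Carlo sampling of a shared systematic offset, can be sketched as follows. The numbers are a toy example assuming one fully correlated systematic component plus independent Gaussian random errors; the two likelihood evaluations should agree as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood_matrix(y, model, sig_rand, sig_sys):
    """Conventional multivariate-Gaussian likelihood via matrix inversion:
    covariance = diagonal random part + fully correlated systematic part."""
    n = len(y)
    cov = np.diag(np.full(n, sig_rand**2)) + sig_sys**2 * np.ones((n, n))
    r = y - model
    return np.exp(-0.5 * r @ np.linalg.solve(cov, r)) / np.sqrt(
        (2 * np.pi) ** n * np.linalg.det(cov))

def likelihood_sampled(y, model, sig_rand, sig_sys, n_samples=200000):
    """Estimate the same likelihood without inversion: average the product of
    independent univariate normals over sampled systematic offsets."""
    e = rng.normal(0.0, sig_sys, n_samples)   # one shared offset per sample
    r = y - model
    z = (r[None, :] - e[:, None]) / sig_rand
    dens = np.exp(-0.5 * np.sum(z**2, axis=1)) / (np.sqrt(2*np.pi) * sig_rand)**len(y)
    return dens.mean()

y = np.array([1.1, 0.9, 1.2])          # hypothetical experimental points
m = np.array([1.0, 1.0, 1.0])          # hypothetical model prediction
print(likelihood_matrix(y, m, 0.1, 0.2), likelihood_sampled(y, m, 0.1, 0.2))
```

This mirrors the paper's observation: the sampled estimate converges to the matrix-inversion value, but only slowly in the number of systematic-error samples, which is why sampling is mainly of interpretive rather than computational value.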
Exploring predictive and reproducible modeling with the single-subject FIAC dataset.
Chen, Xu; Pereira, Francisco; Lee, Wayne; Strother, Stephen; Mitchell, Tom
2006-05-01
Predictive modeling of functional magnetic resonance imaging (fMRI) has the potential to expand the amount of information extracted and to enhance our understanding of brain systems by predicting brain states, rather than emphasizing the standard spatial mapping. Based on the block datasets of Functional Imaging Analysis Contest (FIAC) Subject 3, we demonstrate the potential and pitfalls of predictive modeling in fMRI analysis by investigating the performance of five models (linear discriminant analysis, logistic regression, linear support vector machine, Gaussian naive Bayes, and a variant) as a function of preprocessing steps and feature selection methods. We found that: (1) independent of the model, temporal detrending and feature selection assisted in building a more accurate predictive model; (2) the linear support vector machine and logistic regression often performed better than either of the Gaussian naive Bayes models in terms of the optimal prediction accuracy; and (3) the optimal prediction accuracy obtained in a feature space using principal components was typically lower than that obtained in a voxel space, given the same model and same preprocessing. We show that due to the existence of artifacts from different sources, high prediction accuracy alone does not guarantee that a classifier is learning a pattern of brain activity that might be usefully visualized, although cross-validation methods do provide fairly unbiased estimates of true prediction accuracy. The trade-off between the prediction accuracy and the reproducibility of the spatial pattern should be carefully considered in predictive modeling of fMRI. We suggest that unless the experimental goal is brain-state classification of new scans on well-defined spatial features, prediction alone should not be used as an optimization procedure in fMRI data analysis.
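One of the five models compared above, Gaussian naive Bayes, is simple enough to sketch from scratch. The synthetic "voxel" data, dimensions, and train/test split below are invented for illustration and stand in for preprocessed fMRI features.

```python
import numpy as np

rng = np.random.default_rng(1)

def gnb_fit(X, y):
    """Gaussian naive Bayes: per-class feature means, variances, and priors."""
    return {c: (X[y == c].mean(axis=0),
                X[y == c].var(axis=0) + 1e-6,   # small floor avoids zero variance
                np.mean(y == c))
            for c in np.unique(y)}

def gnb_predict(stats, X):
    """Assign each row to the class with the highest posterior log-score."""
    scores = []
    for mu, var, prior in stats.values():
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu)**2 / var, axis=1)
        scores.append(ll + np.log(prior))
    return np.array(list(stats.keys()))[np.argmax(scores, axis=0)]

# synthetic two-condition data: 20 features, classes separated by 0.7 SD
X0 = rng.normal(0.0, 1.0, (100, 20))
X1 = rng.normal(0.7, 1.0, (100, 20))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 100)

# held-out evaluation, a stand-in for proper cross-validation
idx = rng.permutation(200)
tr, te = idx[:150], idx[150:]
stats = gnb_fit(X[tr], y[tr])
acc = np.mean(gnb_predict(stats, X[te]) == y[te])
print(acc)
```

As the abstract cautions, a high held-out accuracy here would say nothing about whether the learned per-feature means form a reproducible, interpretable spatial pattern; that trade-off has to be checked separately.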
Airflow in a Multiscale Subject-Specific Breathing Human Lung Model
Choi, Jiwoong; Hoffman, Eric A; Tawhai, Merryn H; Lin, Ching-Long
2013-01-01
The airflow in a subject-specific breathing human lung is simulated with a multiscale computational fluid dynamics (CFD) lung model. The three-dimensional (3D) airway geometry beginning from the mouth to about 7 generations of airways is reconstructed from the multi-detector row computed tomography (MDCT) image at the total lung capacity (TLC). Along with the segmented lobe surfaces, we can build an anatomically-consistent one-dimensional (1D) airway tree spanning over more than 20 generations down to the terminal bronchioles, which is specific to the CT resolved airways and lobes (J Biomech 43(11): 2159-2163, 2010). We then register two lung images at TLC and the functional residual capacity (FRC) to specify subject-specific CFD flow boundary conditions and deform the airway surface mesh for a breathing lung simulation (J Comput Phys 244:168-192, 2013). The 1D airway tree bridges the 3D CT-resolved airways and the registration-derived regional ventilation in the lung parenchyma, thus a multiscale model. Larg...
Tomas-Fernandez, Xavier; Warfield, Simon K
2015-06-01
White matter (WM) lesions are thought to play an important role in multiple sclerosis (MS) disease burden. Recent work in the automated segmentation of white matter lesions from magnetic resonance imaging has utilized a model in which lesions are outliers in the distribution of tissue signal intensities across the entire brain of each patient. However, the sensitivity and specificity of lesion detection and segmentation with these approaches have been inadequate. In our analysis, we determined this is due to the substantial overlap between the whole brain signal intensity distribution of lesions and normal tissue. Inspired by the ability of experts to detect lesions based on their local signal intensity characteristics, we propose a new algorithm that achieves lesion and brain tissue segmentation through simultaneous estimation of a spatially global within-the-subject intensity distribution and a spatially local intensity distribution derived from a healthy reference population. We demonstrate that MS lesions can be segmented as outliers from this intensity model of population and subject. We carried out extensive experiments with both synthetic and clinical data, and compared the performance of our new algorithm to those of state-of-the art techniques. We found this new approach leads to a substantial improvement in the sensitivity and specificity of lesion detection and segmentation.
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
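The Gaussian substitution for the intractable pool-average likelihood can be sketched numerically. The snippet below is a toy illustration under our own assumptions (lognormal individuals, equal-size pools, invented parameter values), not the paper's EB or Bayes machinery: it matches a Gaussian's mean and variance to the lognormal moments and maximizes the resulting likelihood of the observed pool averages.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: individual exposures ~ LogNormal(mu, sigma);
# only equal-weight pool averages are observed.
rng = np.random.default_rng(0)
mu_true, sigma_true, pool_size, n_pools = 1.0, 0.5, 10, 200
pools = rng.lognormal(mu_true, sigma_true, (n_pools, pool_size)).mean(axis=1)

def neg_gauss_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Mean and variance of a single lognormal individual
    m = np.exp(mu + sigma**2 / 2)
    v = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
    # Approximate the pool-average distribution by N(m, v / pool_size)
    s2 = v / pool_size
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + (pools - m) ** 2 / s2)

res = minimize(neg_gauss_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

With enough pools the recovered (mu, sigma) land close to the generating values, illustrating the consistency property the abstract describes.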
Maximum likelihood conjoint measurement of lightness and chroma.
Rogers, Marie; Knoblauch, Kenneth; Franklin, Anna
2016-03-01
Color varies along dimensions of lightness, hue, and chroma. We used maximum likelihood conjoint measurement to investigate how lightness and chroma influence color judgments. Observers judged lightness and chroma of stimuli that varied in both dimensions in a paired-comparison task. We modeled how changes in one dimension influenced judgment of the other. An additive model best fit the data in all conditions except for judgment of red chroma where there was a small but significant interaction. Lightness negatively contributed to perception of chroma for red, blue, and green hues but not for yellow. The method permits quantification of lightness and chroma contributions to color appearance.
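Maximum likelihood conjoint measurement of this kind can be sketched with simulated judgments. Everything below (level counts, true scale values, trial count, unit decision noise) is invented for illustration; the core idea is the additive decision model with Gaussian noise and maximum-likelihood recovery of the perceptual scale values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
L = 4  # levels per dimension (say, lightness and chroma)
psi_l_true = np.linspace(0, 1, L)   # lightness contributions (assumed)
psi_c_true = np.linspace(0, 2, L)   # chroma contributions (assumed)

# Random paired comparisons: each trial pairs (i1, j1) vs (i2, j2)
n_trials = 4000
idx = rng.integers(0, L, size=(n_trials, 4))
delta = (psi_l_true[idx[:, 0]] + psi_c_true[idx[:, 1]]
         - psi_l_true[idx[:, 2]] - psi_c_true[idx[:, 3]])
resp = rng.random(n_trials) < norm.cdf(delta)  # True = first stimulus chosen

def nll(theta):
    # Anchor the first level of each scale at 0 for identifiability
    pl = np.concatenate(([0.0], theta[:L - 1]))
    pc = np.concatenate(([0.0], theta[L - 1:]))
    d = pl[idx[:, 0]] + pc[idx[:, 1]] - pl[idx[:, 2]] - pc[idx[:, 3]]
    p = norm.cdf(np.where(resp, d, -d))
    return -np.sum(np.log(np.clip(p, 1e-12, 1.0)))

fit = minimize(nll, np.zeros(2 * (L - 1)), method="BFGS")
psi_l_hat = np.concatenate(([0.0], fit.x[:L - 1]))
psi_c_hat = np.concatenate(([0.0], fit.x[L - 1:]))
```

The fitted scale values recover the simulated lightness and chroma contributions up to the anchoring convention; an interaction term could be added to the decision variable to test the additive model, as the authors do.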
Subject-specific modelling of lower limb muscles in children with cerebral palsy.
Oberhofer, K; Stott, N S; Mithraratne, K; Anderson, I A
2010-01-01
Recent studies suggest that the architecture of spastic muscles in children with cerebral palsy is considerably altered; however, only little is known about the structural changes that occur other than in the gastrocnemius muscle. In the present study, Magnetic Resonance Imaging (MRI) and subject-specific modelling techniques were used to compare the lengths and volumes of six lower limb muscles between children with cerebral palsy and typically developing children. MRI scans of the lower limbs of two children with spastic hemiplegia cerebral palsy, four children with spastic diplegia cerebral palsy (mean age 9.6 years) and a group of typically developing children (mean age 10.2 years) were acquired. Subject-specific models of six lower limb muscles were developed from the MRI data using a technique called Face Fitting. Muscle volumes and muscle lengths were derived from the models and normalised to body mass and segmental lengths, respectively. Normalised muscle volumes in the children with cerebral palsy were smaller than in the control group with the difference being 22% in the calf muscles, 26% in the hamstrings and 22% in the quadriceps, respectively. Only the differences in the hamstrings and the quadriceps were statistically significant (P=0.036, P=0.038). Normalised muscle lengths in the children with cerebral palsy were significantly shorter (Pmuscle in either group. The present results show that lower limb muscles in ambulatory children with cerebral palsy are significantly altered, suggesting an overall mechanical deficit due to predominant muscle atrophy. Further investigations of the underlying causes of the muscle atrophy are required to better define management and treatment strategies for children with cerebral palsy.
Gupta, Sumeet; Soellinger, Michaela; Boesiger, Peter; Poulikakos, Dimos; Kurtcuoglu, Vartan
2009-02-01
This study aims at investigating three-dimensional subject-specific cerebrospinal fluid (CSF) dynamics in the inferior cranial space, the superior spinal subarachnoid space (SAS), and the fourth cerebral ventricle using a combination of a finite-volume computational fluid dynamics (CFD) approach and magnetic resonance imaging (MRI) experiments. An anatomically accurate 3D model of the entire SAS of a healthy volunteer was reconstructed from high-resolution T2-weighted MRI data. Subject-specific pulsatile velocity boundary conditions were imposed at planes in the pontine cistern, cerebellomedullary cistern, and in the spinal subarachnoid space. Velocimetric MRI was used to measure the velocity field at these boundaries. A constant pressure boundary condition was imposed at the interface between the aqueduct of Sylvius and the fourth ventricle. The morphology of the SAS with its complex trabecula structures was taken into account through a novel porous media model with anisotropic permeability. The governing equations were solved using finite-volume CFD. We observed a total pressure variation from -42 Pa to 40 Pa within one cardiac cycle in the investigated domain. Maximum CSF velocities of about 15 cm/s occurred in the inferior section of the aqueduct, 14 cm/s in the left foramen of Luschka, and 9 cm/s in the foramen of Magendie. Flow velocities in the right foramen of Luschka were found to be significantly lower than in the left, indicating three-dimensional brain asymmetries. The flow in the cerebellomedullary cistern was found to be relatively diffusive with a peak Reynolds number Re = 72, while the flow in the pontine cistern was primarily convective with a peak Re = 386. The net volumetric flow rate in the spinal canal was found to be negligible despite CSF oscillation of substantial amplitude, with a maximum volumetric flow rate of 109 ml/min. The observed transient flow patterns indicate a compliant behavior of the cranial subarachnoid space. Still, the estimated
Five-Factor Model Personality Traits and the Objective and Subjective Experience of Body Weight.
Sutin, Angelina R; Terracciano, Antonio
2016-02-01
Research on personality and adiposity has focused primarily on middle-aged and older adults. The present research sought to (a) replicate these associations in a young adult sample, (b) examine whether sex, race, or ethnicity moderate these associations, and (c) test whether personality is associated with the subjective experience of body weight and discrepancies between perceived and actual weight. Participants (N = 15,669; M(age) = 29; 53% female; ∼40% ethnic/racial minority) from Wave 4 of the National Longitudinal Study of Adolescent Health completed a Five-Factor Model personality measure and reported their weight, height, and perception of weight category (e.g., overweight); trained staff measured participants' height, weight, and waist circumference. Conscientiousness was associated with healthier weight, with a nearly 5 kg difference between the top and bottom quartiles. Neuroticism among women and Extraversion among men were associated with higher adiposity. Neuroticism was also associated with misperceiving one's weight as heavier, whereas Extraversion was associated with misperceiving oneself as taller and leaner. The associations were similar across race/ethnic groups. Personality is associated with objective and subjective adiposity in young adulthood. Although modest, the effects are consistent with life span theories of personality, and the misperceptions are consistent with the conceptual worldviews associated with the traits. © 2014 Wiley Periodicals, Inc.
Students attitude towards calculus subject: A case-study using structural equation modeling
Awang, Noorehan; Hamid, Nur Nadiah Abd.
2015-10-01
This study was designed to assess the attitude of Bumiputera students towards mathematics. The instrument used to measure the attitude was Test of Mathematics Related Attitude (TOMRA). This test measures students' attitudes in four criteria: normality of mathematics (N), attitudes towards mathematics inquiry (I), adoption of mathematics attitude (A) and enjoyment of mathematics lessons (E). The target population of this study was all computer science and quantitative science students who enrolled in a Calculus subject at UiTM Negeri Sembilan. Confirmatory Factor Analysis was carried out and the inter-relationship among the four criteria was analyzed using Structural Equation Modeling. The students scored high in E, moderately in A and relatively low in N and I.
Kadum, Hawwa; Rockel, Stanislav; Holling, Michael; Peinke, Joachim; Cal, Raul Bayon
2017-11-01
The wake behind a floating model horizontal-axis wind turbine undergoing pitch motion is investigated and compared to the wake of a fixed wind turbine. An experiment is conducted in an acoustic wind tunnel where hot-wire data are acquired at five downstream locations. At each downstream location, a rake of 16 hot-wires is used, with probes spaced radially in the vertical, horizontal, and 45° diagonal directions. In addition, the effect of turbulence intensity on the floating wake is examined by subjecting the wind turbine to different inflow conditions controlled through three settings of the wind tunnel grid, one passive and two active protocols, thus varying in intensity. The wakes are inspected through statistics of the point measurements, where the various length/time scales are considered. The wake characteristics of a floating wind turbine are compared to those of a fixed turbine to uncover its distinctive features; this is relevant as the demand for exploiting deep waters in wind energy is increasing.
Abo Sabah, Saddam Hussein; Kueh, Ahmad Beng Hong
2014-01-01
This paper investigates the effects of localized interface progressive delamination on the behavior of two-layer laminated composite plates subjected to low-velocity impact loading for various fiber orientations. By means of a finite element approach, the laminae stiffnesses are constructed independently from their interface, where a well-defined virtually zero-thickness interface element is discretely adopted for delamination simulation. The present model has the advantage of simulating a localized interfacial condition at arbitrary locations, for various degeneration areas and intensities, and under numerous boundary conditions, since the interfacial description is expressed discretely. The model shows good agreement with existing results from the literature when modeled in a perfectly bonded state. It is found that as the local delamination area increases, so does the magnitude of the maximum displacement history. Also, as the deviation between top and bottom fiber orientations increases, both central deflection and energy absorption increase, although the relative maximum displacement correspondingly decreases compared with the laminate's perfectly bonded state.
Basafa, Ehsan; Armand, Mehran
2014-07-18
A potentially effective treatment for the prevention of osteoporotic hip fractures is augmentation of the mechanical properties of the femur by injecting it with agents such as polymethylmethacrylate (PMMA) bone cement, a procedure known as femoroplasty. The operation, however, is still at the research stage and can benefit substantially from computer planning and optimization. We report the results of computational planning and optimization of the procedure for biomechanical evaluation. An evolutionary optimization method was used to optimally place the cement in finite element (FE) models of seven osteoporotic bone specimens. The optimization, with some inter-specimen variations, suggested that areas close to the cortex in the superior and inferior aspects of the neck and the supero-lateral aspect of the greater trochanter benefit from augmentation. We then used a particle-based model of bone cement diffusion to match the optimized pattern, taking into account the limitations of the actual surgery, including the limited volume of injection needed to prevent thermal necrosis. Simulations showed that the yield load can be increased significantly, by more than 30%, using only 9 ml of bone cement. This increase is comparable to previous literature reports where gross filling of the bone was employed instead, using more than 40 ml of cement. These findings, along with the differences in the optimized plans between specimens, emphasize the need for subject-specific models for effective planning of femoral augmentation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pipelines subject to slow landslide movements: Structural modeling vs field measurement
Energy Technology Data Exchange (ETDEWEB)
Bruschi, R.; Glavina, S.; Spinazze, M.; Tomassini, D. [Snamprogetti S.p.A., Fano (Italy); Bonanni, S.; Cuscuna, S. [Snam S.p.A., Milan (Italy)
1996-12-01
In recent years, finite element techniques have been increasingly used to investigate the behavior of buried pipelines subject to soil movements. The use of these tools provides a rational basis for the definition of minimum wall thickness requirements at landslide crossings. Furthermore, the design of mitigation measures or monitoring systems which control the development of undesirable strains in the pipe wall over time requires detailed structural modeling. The scope of this paper is to discuss the use of dedicated structural modeling with calibration against field measurements. The strain measurements used were regularly gathered from pipe sections at two different sites over a period of time long enough to record changes of axial strain due to soil movement. Detailed structural modeling of the pipeline layout at both sites, and for operating conditions, is applied. Numerical simulations show the influence of the distribution of soil movement acting on the pipeline with regard to the state of strain which can develop at certain locations. The role of soil nature and the direction of relative movements in the definition of loads transferred to the pipeline is also discussed.
Damage and failure modeling of lotus-type porous material subjected to low-cycle fatigue
Directory of Open Access Journals (Sweden)
J. Kramberger
2016-01-01
The investigation of the low-cycle fatigue behaviour of lotus-type porous material is presented in this paper. Porous materials exhibit some unique features which are useful for a number of applications. This paper evaluates a numerical approach for determining damage initiation and evolution of lotus-type porous material with computational simulations, where the considered computational models have different pore topology patterns. The low-cycle fatigue analysis was performed by using a damage evolution law. The damage state was calculated and updated based on the inelastic hysteresis energy for the stabilized cycle. Degradation of the elastic stiffness was modeled using a scalar damage variable. In order to examine the crack propagation path, finite elements with severe damage were deleted and removed from the mesh during simulation. The direct cyclic analysis capability in Abaqus/Standard was used for the low-cycle fatigue analysis to obtain the stabilized response of a model subjected to periodic loading. The computational results give a qualitative understanding of the influence of pore topology on low-cycle fatigue under transversal loading conditions in relation to pore orientation.
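The damage-update idea named in the abstract (a scalar damage variable driven by the stabilized-cycle inelastic hysteresis energy, with stiffness degraded as damage grows) can be sketched in a few lines. The constants and the energy value below are invented for illustration, not the paper's calibration:

```python
# Minimal sketch of a scalar damage update driven by stabilized-cycle
# hysteresis energy. All numbers (c1, c2, w_stab, D_crit) are assumed.
E0 = 210e3      # undamaged elastic modulus, MPa
c1, c2 = 1e-4, 1.1
w_stab = 2.5    # inelastic hysteresis energy per stabilized cycle, mJ/mm^3

D, cycles, D_crit = 0.0, 0, 0.95
while D < D_crit:
    D = min(D + c1 * w_stab**c2, 1.0)   # damage evolution law, per cycle
    cycles += 1

E_damaged = (1.0 - D) * E0              # degraded stiffness
# an element reaching D >= D_crit would be deleted from the mesh
```

This is the per-element bookkeeping; in the FE setting the hysteresis energy would come from the stabilized cycle of the direct cyclic analysis rather than being a constant.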
Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions
Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei
2017-09-01
Rotor systems carried in transportation systems or under seismic excitation are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that base rotations introduce nonlinearities into the model. To verify the proposed model, a novel test rig which can simulate base angular movement is designed. Dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based upon these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall and frequency response curves and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked with multiples of the base frequency and sum and difference tones of the rotating frequency and the base frequency. Their amplitudes increase remarkably when they meet the whirling frequencies of the rotor system.
Intelligence's likelihood and evolutionary time frame
Bogonovich, Marc
2011-04-01
This paper outlines hypotheses relevant to the evolution of intelligent life and encephalization in the Phanerozoic. If general principles are inferable from patterns of Earth life, implications could be drawn for astrobiology. Many of the outlined hypotheses, relevant data, and associated evolutionary and ecological theory are not frequently cited in astrobiological journals. Thus opportunity exists to evaluate reviewed hypotheses with an astrobiological perspective. A quantitative method is presented for testing one of the reviewed hypotheses (hypothesis i; the diffusion hypothesis). Questions are presented throughout, which illustrate that the question of intelligent life's likelihood can be expressed as multiple, broadly ranging, more tractable questions.
Dimension-Independent Likelihood-Informed MCMC
Cui, Tiangang
2015-01-07
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior distributions, we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation is used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.
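As a flavor of what "well-defined on function space" means in practice, here is the simplest dimension-robust move of this family, the preconditioned Crank-Nicolson (pCN) proposal, on a toy Gaussian-prior problem. This is the baseline such likelihood-informed samplers refine, not the authors' construction itself, and the forward map and step size are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 100                      # nominal discretization dimension
y = 0.5                      # a scalar observation (toy setup)

def neg_log_likelihood(u):
    # toy forward map: observe the mean of the field with unit noise
    return 0.5 * (u.mean() - y) ** 2

beta = 0.2                   # pCN step size
u = rng.standard_normal(d)   # start from a draw of the N(0, I) prior
accepted = 0
for _ in range(5000):
    v = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(d)
    # the acceptance ratio involves only the likelihood, not the prior --
    # this is what keeps the move well defined as d grows
    if np.log(rng.random()) < neg_log_likelihood(u) - neg_log_likelihood(v):
        u, accepted = v, accepted + 1

accept_rate = accepted / 5000
```

Because the proposal is prior-reversible, the acceptance rate does not collapse as `d` increases, unlike a plain random-walk Metropolis move.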
Xu, Lei; Zhai, Wanming
2017-10-01
This paper develops a computational model for stochastic analysis and reliability assessment of vehicle-track systems subject to earthquakes and track random irregularities. In this model, the earthquake is expressed as a non-stationary random process simulated by spectral representation and a random function, and the track random irregularities, with ergodic properties in amplitudes, wavelengths and probabilities, are characterized by a track irregularity probabilistic model; the number theoretical method (NTM) is then applied to effectively select representative samples of earthquakes and track random irregularities. Furthermore, a vehicle-track coupled model is presented to obtain the dynamic responses of vehicle-track systems due to earthquakes and track random irregularities in the time domain, and the probability density evolution method (PDEM) is introduced to describe the evolutionary process of probability from excitation input to response output by treating the vehicle-track system as a probabilistic conservative system, which lays the foundation for reliability assessment of vehicle-track systems. The effectiveness of the proposed model is validated by comparison with Monte Carlo results from a statistical viewpoint. As an illustrative example, the random vibrations of a high-speed railway vehicle running on track slabs excited by lateral seismic waves and track random irregularities are analyzed, from which some significant conclusions can be drawn: track irregularities additionally promote the dynamic influence of earthquakes, especially on the maximum values and dispersion degree of responses; the characteristic frequencies or frequency ranges respectively governed by earthquakes and track random irregularities are very different; moreover, the lateral seismic waves dominate or even change the characteristic frequencies of some lateral dynamic response indices at low frequency.
Xie, Jiawen
Delamination is a common failure mode in composite (fiber-reinforced and layered) structures subject to low-velocity impacts by foreign objects. To maximize the design capacity, it is important to have reliable tools to predict delamination evolution in laminated composites. The focus of this research is to analyze flexural responses and delamination evolution in laminated composites subject to flexural loading. Analytical solutions were derived from linear elasticity theory and the structural mechanics of beam and plate configurations. Formulations and evaluations of the proposed analytical approaches were validated by comparison with results of finite element (FE) simulations in similar settings and with published experimental data. Two-dimensional (2D) elasticity theory for laminated panels was extended to analyze elastodynamic responses of pristine panels and quasi-static responses of pre-delaminated panels. A highlight of the approach is the exact solutions of the displacement and stress fields it provides. Further investigation showed that 2D elasticity theory is not amenable to a closed-form solution for laminates containing off-axis angle plies due to the three-dimensional (3D) states of stress. Closed-form cohesive zone modeling (CZM) solutions were developed for popular delamination toughness tests of laminated beams. A laminate was modeled as an assembly of two sub-laminates connected by a virtual deformable layer of infinitesimal thickness. Comprehensive parametric studies were performed, offering a deeper understanding of CZM. The studies were further simplified so that closed-form expressions could be obtained, serving as quick estimates of the flexural responses and the process zone lengths. Analytical CZM solutions were extended to analyze quasi-static impact tests of laminated composite plates with arbitrary stacking sequences, aiming to predict the critical load, critical interfaces and extent of delamination at those interfaces. The Rayleigh-Ritz method was used to
Directory of Open Access Journals (Sweden)
Juri Taborri
2015-09-01
Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject procedure to avoid the intra-subject one in two-, four- and six-gait-phase models in pediatric subjects. The inter-subject procedure consists in the identification of a standardized parameter set to adapt the model to measurements. We tested the inter-subject procedure on both scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on the shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of the classifiers was evaluated with the Receiver Operating Characteristic. The results provided a goodness from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMMs for gait-phase recognition and its implementation to control exoskeletons for the pediatric population.
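At inference time, gait-phase decoding with an HMM reduces to the Viterbi algorithm. A toy two-phase (stance/swing) decoder over a discretized angular-velocity symbol is sketched below; the transition and emission numbers are invented for illustration, not trained parameters:

```python
import numpy as np

log = np.log
trans = np.array([[0.90, 0.10],    # stance -> stance/swing
                  [0.25, 0.75]])   # swing  -> stance/swing
# Emissions: angular-velocity symbol 0 (low) or 1 (high)
emit = np.array([[0.8, 0.2],       # stance: mostly low velocity
                 [0.3, 0.7]])      # swing:  mostly high velocity
obs = [0, 0, 1, 1, 1, 1]
start = np.array([0.5, 0.5])

T, S = len(obs), 2
delta = np.full((T, S), -np.inf)   # best log-probability ending in state s
back = np.zeros((T, S), dtype=int)
delta[0] = log(start) + log(emit[:, obs[0]])
for t in range(1, T):
    for s in range(S):
        scores = delta[t - 1] + log(trans[:, s])
        back[t, s] = scores.argmax()
        delta[t, s] = scores.max() + log(emit[s, obs[t]])

path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()  # most likely phase sequence, 0 = stance, 1 = swing
```

For the observation sequence above the decoded path is [0, 0, 1, 1, 1, 1]: stance for the low-velocity samples, swing for the high-velocity ones. The inter-subject procedure in the abstract concerns how `trans` and `emit` are standardized across subjects rather than retrained per subject.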
Eryilmaz, Ali
2011-01-01
The aim of this study is to develop and test a subjective well-being model for adolescents in high school. A total of 326 high-school adolescents (176 female and 150 male) participated in this study. The data were collected using the general needs satisfaction questionnaire, which measures adolescents' subjective well-being, and determining…
Likelihood free inference for Markov processes: a comparison.
Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S
2015-04-01
Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space.
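The basic ABC rejection scheme that such comparisons build on is compact enough to sketch. Below is a toy version for a Lotka-Volterra model; we use a crude discrete-time approximation of the dynamics rather than an exact stochastic (Gillespie) simulation, and the priors, tolerance, and trajectory distance are all our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(a, b, steps=50, dt=0.1):
    """Crude Euler approximation of Lotka-Volterra prey dynamics."""
    x, y = 10.0, 5.0          # prey, predator initial populations
    traj = []
    for _ in range(steps):
        x += dt * (a * x - 0.1 * x * y)
        y += dt * (0.1 * x * y - b * y)
        # clamp to guard against Euler blow-up for extreme parameters
        x = max(min(x, 1e6), -1e6)
        y = max(min(y, 1e6), -1e6)
        traj.append(x)
    return np.array(traj)

truth = simulate(1.0, 1.5)    # pretend this trajectory is the data

def distance(sim):
    return np.sqrt(np.mean((sim - truth) ** 2))

accepted = []
for _ in range(20000):
    a, b = rng.uniform(0.5, 1.5), rng.uniform(1.0, 2.0)  # uniform priors
    if distance(simulate(a, b)) < 2.0:                   # tolerance epsilon
        accepted.append((a, b))

post = np.array(accepted)     # approximate posterior sample
a_mean = post[:, 0].mean()
```

Accepted parameter pairs concentrate around the generating values; the paper's point is that this forward-simulation-only recipe, like likelihood-free MCMC, trades exact likelihood evaluation for (possibly many) simulations.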
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for five processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
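The per-event probability assignment described above can be illustrated in miniature: score each event under a PDF for every candidate process and take the most likely. The PDF shapes below are invented stand-ins with only two processes; PEN derives its PDFs from Monte Carlo over energies, times, and other observables:

```python
import numpy as np
from scipy.stats import norm

# Toy hypothesis PDFs over a single observable (positron energy, MeV).
processes = {
    "pi_e2":  norm(loc=69.8, scale=1.0),   # signal-like energy peak (toy)
    "pi_mu2": norm(loc=30.0, scale=5.0),   # background-like peak (toy)
}
events = np.array([69.5, 70.2, 28.0, 45.0])

names = list(processes)
# likelihood of each event under each process PDF
like = np.column_stack([processes[n].pdf(events) for n in names])
assigned = [names[i] for i in like.argmax(axis=1)]
```

In the real analysis these per-event probabilities enter a joint likelihood whose maximization yields the branching ratio, rather than a hard assignment per event.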
Schlatter, E.; Bredeweg, B.; Drie, J.P. van; Jong, P.F. de
2017-01-01
Modelling can help in understanding dynamic systems, but learning how to model is a difficult and time-consuming task. The challenge is to foster modelling skills while not limiting the learning of regular subject matter, or better, to also improve this learning. We investigate how learning by
Katsube, Takayuki; Ishibashi, Toru; Kano, Takeshi; Wajima, Toshihiro
2016-11-01
The aim of this study was to develop a population pharmacokinetic (PK)/pharmacodynamic (PD) model for describing plasma lusutrombopag concentrations and platelet response following oral lusutrombopag dosing and for evaluating covariates in the PK/PD profiles. A population PK/PD model was developed using a total of 2539 plasma lusutrombopag concentration data and 1408 platelet count data from 78 healthy adult subjects following oral single and multiple (14-day once-daily) dosing. Covariates in PK and PK/PD models were explored for subject age, body weight, sex, and ethnicity. A three-compartment model with first-order rate and lag time for absorption was selected as a PK model. A three-transit and one-platelet compartment model with a sigmoid E max model for drug effect and feedback of platelet production was selected as the PD model. The PK and PK/PD models well described the plasma lusutrombopag concentrations and the platelet response, respectively. Body weight was a significant covariate in PK. The bioavailability of non-Japanese subjects (White and Black/African American subjects) was 13 % lower than that of Japanese subjects, while the simulated platelet response profiles using the PK/PD model were similar between Japanese and non-Japanese subjects. There were no significant covariates of the tested background data including age, sex, and ethnicity (Japanese or non-Japanese) for the PD sensitivity. A population PK/PD model was developed for lusutrombopag and shown to provide good prediction for the PK/PD profiles. The model could be used as a basic PK/PD model in the drug development of lusutrombopag.
Asymptotic formulae for likelihood-based tests of new physics
Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer
2011-02-01
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
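A minimal numerical illustration of these asymptotic formulae for the simplest case, a single Poisson counting bin with known background (the numbers are invented for illustration): the discovery test statistic q0 is built from the profile likelihood ratio, Wilks/Wald asymptotics give Z ≈ √q0, and evaluating on the Asimov data set (observed count replaced by its expectation) yields the median expected significance.

```python
import math

def nll(mu, n, b, s):
    # Poisson negative log-likelihood for n observed events with
    # expectation mu*s + b (mu-independent constants dropped)
    lam = mu * s + b
    return lam - n * math.log(lam)

def discovery_significance(n, b, s):
    """q0 = -2 ln[L(mu=0)/L(mu_hat)]; by the Wilks/Wald asymptotics the
    discovery significance is Z ~ sqrt(q0)."""
    mu_hat = max((n - b) / s, 0.0)   # MLE of the signal strength, clipped at 0
    q0 = 2.0 * (nll(0.0, n, b, s) - nll(mu_hat, n, b, s))
    return math.sqrt(max(q0, 0.0))

# Asimov data set: replace the observed count by its expectation under the
# signal-plus-background hypothesis to get the median expected significance.
s, b = 10.0, 100.0
z_asimov = discovery_significance(n=s + b, b=b, s=s)
print(round(z_asimov, 3))   # matches sqrt(2*((s+b)*ln(1+s/b) - s))
```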
Asymptotic formulae for likelihood-based tests of new physics
Energy Technology Data Exchange (ETDEWEB)
Cowan, Glen [Royal Holloway, University of London, Physics Department, Egham (United Kingdom); Cranmer, Kyle [New York University, Physics Department, New York, NY (United States); Gross, Eilam; Vitells, Ofer [Weizmann Institute of Science, Rehovot (Israel)
2011-02-15
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation. (orig.)
A Rayleigh Doppler Frequency Estimator Derived from Maximum Likelihood Theory
DEFF Research Database (Denmark)
Hansen, Henrik; Affes, Sofiene; Mermelstein, Paul
1999-01-01
Reliable estimates of Rayleigh Doppler frequency are useful for the optimization of adaptive multiple access wireless receivers. The adaptation parameters of such receivers are sensitive to the amount of Doppler, and automatic reconfiguration to the speed of terminal movement can optimize cell capacities in low and high speed situations. We derive a Doppler frequency estimator using the maximum likelihood method and Jakes' model of a Rayleigh fading channel. This estimator requires an FFT and simple post-processing only. Its performance is verified through simulations and found to yield...
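The "FFT plus simple post-processing" structure can be sketched for the simplest related problem, a single complex tone in white noise, where the ML frequency estimate is the periodogram maximizer. This is an illustrative assumption on my part: the paper's estimator targets the Jakes Doppler spectrum of a fading channel, not a bare tone, and a direct DFT stands in for the FFT below.

```python
import cmath
import math
import random

def periodogram_peak(x, fs):
    """Coarse ML frequency estimate for one complex tone in white noise:
    the periodogram maximizer over the DFT grid (direct O(n^2) DFT here;
    a real receiver would use an FFT)."""
    n = len(x)
    best_k, best_p = 0, -1.0
    for k in range(n):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p = abs(s) ** 2
        if p > best_p:
            best_k, best_p = k, p
    return best_k * fs / n

rng = random.Random(0)
fs, f_true, n = 1000.0, 120.0, 256
sig = [cmath.exp(2j * math.pi * f_true * t / fs)
       + complex(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for t in range(n)]
f_est = periodogram_peak(sig, fs)
print(round(f_est, 1))   # within one DFT bin (fs/n ~ 3.9 Hz) of 120 Hz
```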
A path model of sarcopenia on bone mass loss in elderly subjects.
Rondanelli, M; Guido, D; Opizzi, A; Faliva, M A; Perna, S; Grassi, M
2014-01-01
Aging is associated with decreases in muscle mass, strength and power (sarcopenia) and in bone mineral density (BMD). The aims of this study were to investigate in the elderly the role of sarcopenia in BMD loss by a path model, including adiposity, inflammation, and malnutrition associations. Body composition and BMD were measured by dual X-ray absorptiometry in 159 elderly subjects (52 male/107 female; mean age 80.3 yrs). Muscle strength was determined with a dynamometer. Serum albumin and PCR were also assessed. Structural equations examined the effect of sarcopenia (measured by Relative Skeletal Muscle Mass, Total Muscle Mass, Handgrip, Muscle Quality Score) on osteoporosis (measured by Vertebral and Femoral T-scores) in a latent variable model including adiposity (measured by Total Fat Mass, BMI, Gynoid/Android Fat), inflammation (PCR), and malnutrition (serum albumin). Sarcopenia assumed the role of a moderator in the adiposity-osteoporosis relationship: as sarcopenia increases, the adiposity-osteoporosis relationship (β: -0.58) decreases in intensity. Adiposity also influences sarcopenia (β: -0.18). Malnutrition affects the inflammatory and adiposity states (β: +0.61 and β: -0.30, respectively), while not influencing sarcopenia. Thus, adiposity acts as a mediator of the effect of malnutrition on both sarcopenia and osteoporosis: malnutrition decreases adiposity; decreasing adiposity, in turn, increases sarcopenia and osteoporosis. This study suggests that, in a group of elderly subjects, sarcopenia affects the link between adiposity and BMD but does not have a pure independent effect on osteoporosis.
Energy Technology Data Exchange (ETDEWEB)
Bakry, A. [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia); Abdulrhmann, S. [Jazan University, 114, Department of Physics, Faculty of Sciences (Saudi Arabia); Ahmed, M., E-mail: mostafa.farghal@mu.edu.eg [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia)
2016-06-15
We theoretically model the dynamics of semiconductor lasers subject to double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under optical feedback to account for this type of double-reflector feedback. We examine the influence of adding the second reflector on dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector attracts the periodic and period-doubling oscillations, and the chaos induced by the first reflector, to a route-to-continuous-wave operation. During this operation, the periodic-oscillation frequency increases as the optical feedback strengthens. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
Chong, Song Hun
2016-08-09
Geosystems often experience numerous loading cycles. Plastic strain accumulation during repetitive mechanical loads can lead to shear shakedown or continued shear ratcheting; in all cases, volumetric strains diminish as the specimen evolves towards terminal density. Previously suggested models and new functions are identified to fit plastic strain accumulation data. All accumulation models are formulated to capture terminal density (volumetric strain) and either shakedown or ratcheting (shear strain). Repetitive vertical loading tests under zero lateral strain conditions are conducted using three different sands packed at initially low and high densities. Test results show that plastic strain accumulation for all sands and density conditions can be captured in the same dimensionless plot defined in terms of the initial relative density, terminal density, and ratio between the amplitude of the repetitive load and the initial static load. This observation allows us to advance a simple but robust procedure to estimate the maximum one-dimensional settlement that a foundation could experience if subjected to repetitive loads. © 2016, Canadian Science Publishing. All rights reserved.
Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.
2017-12-01
Supercritical helium loops at 4.2 K are the baseline cooling strategy of tokamak superconducting magnets (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays below the working range all along their length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy temperature constraints in different ways (playing on bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K consumes at least 220 W of electrical power). To find the optimal operational conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It will be seen that changes in the operating point of the magnet (e.g. in case of a change in the plasma configuration) involve large changes in the cryodistribution optimal operating point. Recommendations will be made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by EUROfusion Consortium through the Euratom Research and Training Program 2014-2018 under Grant 633053.
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject-specific variations of metabolic activities under altered experimental conditions, utilizing the independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
Harris, Adam
2014-05-01
The Intergovernmental Panel on Climate Change (IPCC) prescribes that risk and uncertainty information pertaining to scientific reports, model predictions, etc. be communicated with a set of 7 likelihood expressions. These range from "Extremely likely" (intended to communicate a likelihood of greater than 99%) through "As likely as not" (33-66%) to "Extremely unlikely" (less than 1%). Psychological research has investigated the degree to which these expressions are interpreted as intended by the IPCC, both within and across cultures. I will present a selection of this research and demonstrate some problems associated with communicating likelihoods in this way, as well as suggesting some potential improvements.
A subjective supply-demand model: the maximum Boltzmann/Shannon entropy solution
Piotrowski, Edward W.; Sładkowski, Jan
2009-03-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
Likelihood based testing for no fractional cointegration
DEFF Research Database (Denmark)
Lasak, Katarzyna
We consider two likelihood ratio tests, the so-called maximum eigenvalue and trace tests, for the null of no cointegration when fractional cointegration is allowed under the alternative, which is a first step to generalize the so-called Johansen's procedure to the fractional cointegration case. The standard cointegration analysis only considers the assumption that deviations from equilibrium can be integrated of order zero, which is very restrictive in many cases and may imply an important loss of power in the fractional case. We consider alternative hypotheses with equilibrium deviations that can be mean reverting with order of integration possibly greater than zero. Moreover, the degree of fractional cointegration is not assumed to be known, and the asymptotic null distribution of both tests is found when considering an interval of possible values. The power of the proposed tests under...
Emitter frequency refinement based on maximum likelihood
Xu, Xin; Wang, Huijuan
2015-07-01
Frequency estimation via signal sorting is widely recognized as one of the most practical technologies in signal processing. However, the frequencies estimated via signal sorting may be inaccurate and biased due to signal fluctuation under different emitter working modes, problems in the transmitter circuit, environmental noise or certain unknown interference sources. Therefore, it has become an important issue to further analyze and refine signal frequencies after signal sorting. To address the above problem, we have brought forward an iterative frequency refinement method based on maximum likelihood. The initial estimated signal frequency values are refined iteratively. Experimental results indicate that the refined signal frequencies are more informative than the initial ones. As another advantage of our method, noise and interference sources can be filtered out simultaneously. Its efficiency and flexibility enable our method to be applied in a wide range of areas, e.g., communication, electronic reconnaissance and radar intelligence analysis.
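One concrete form of iterative ML refinement, offered as a sketch rather than the paper's method: take a coarse frequency from an earlier sorting stage and iteratively maximize the periodogram (the ML criterion for a single tone in white noise) in a small bracket around it. The tone model, the golden-section search, and all constants are assumptions for illustration.

```python
import cmath
import math

def periodogram(x, f, fs):
    # Power of the signal's correlation with a complex tone at frequency f
    s = sum(xi * cmath.exp(-2j * math.pi * f * t / fs) for t, xi in enumerate(x))
    return abs(s) ** 2

def refine_frequency(x, fs, f0, span, iters=50):
    """Iteratively refine a coarse frequency estimate by maximizing the
    periodogram with a golden-section search on [f0 - span, f0 + span];
    span should stay within the periodogram mainlobe (~fs/len(x))."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    lo, hi = f0 - span, f0 + span
    for _ in range(iters):
        a, b = hi - g * (hi - lo), lo + g * (hi - lo)
        if periodogram(x, a, fs) < periodogram(x, b, fs):
            lo = a
        else:
            hi = b
    return 0.5 * (lo + hi)

fs, f_true, n = 1000.0, 120.3, 256
tone = [cmath.exp(2j * math.pi * f_true * t / fs) for t in range(n)]
f_coarse = 121.1                      # e.g. the nearest DFT-bin frequency
f_refined = refine_frequency(tone, fs, f_coarse, span=3.0)
print(round(f_refined, 3))            # recovers ~120.3 for this clean tone
```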
Subtracting and Fitting Histograms using Profile Likelihood
D'Almeida, F M L
2008-01-01
It is known that many interesting signals expected at LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated by Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% Confidence Level, in a low statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show a good performance and avoid the problem of negative values when subtracting histograms.
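The per-bin construction can be sketched with the standard "on/off" profile likelihood, in which the Monte Carlo background estimate enters as an auxiliary Poisson measurement. The counts and the MC scale factor tau below are illustrative assumptions, not the paper's data; the closed-form profiled background is the textbook on/off result.

```python
import math

def pll(s, n, m, tau):
    """Profile log-likelihood for the per-bin signal s: data count
    n ~ Pois(s + b), Monte Carlo background count m ~ Pois(tau * b);
    the nuisance background b is profiled out via its conditional MLE."""
    c = n + m - (1.0 + tau) * s
    b_hat = (c + math.sqrt(c * c + 4.0 * (1.0 + tau) * m * s)) / (2.0 * (1.0 + tau))
    ll = -(s + b_hat) - tau * b_hat
    if n:
        ll += n * math.log(s + b_hat)
    if m:
        ll += m * math.log(tau * b_hat)
    return ll

def signal_estimate(n, m, tau, smax=50.0, step=0.01):
    """Grid-scan MLE of s and the 68.3% CL interval (delta log L = 0.5)."""
    grid = [i * step for i in range(1, int(smax / step))]
    vals = [pll(s, n, m, tau) for s in grid]
    i_best = max(range(len(vals)), key=vals.__getitem__)
    inside = [grid[i] for i, v in enumerate(vals) if v >= vals[i_best] - 0.5]
    return grid[i_best], min(inside), max(inside)

# one bin: 25 data counts, 30 MC background events, MC scaled by tau = 2
s_hat, lo, hi = signal_estimate(n=25, m=30, tau=2.0)
print(round(s_hat, 2))    # the MLE here is n - m/tau = 10, never negative
```

Because s is scanned over non-negative values only, the negative-bin problem of naive histogram subtraction does not arise.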
Dishonestly increasing the likelihood of winning
Directory of Open Access Journals (Sweden)
Shaul Shalvi
2012-05-01
People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
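The flavor of Monte Carlo EM can be conveyed with a much simpler model than the paper's: exponential lifetimes with right-censoring, where the E-step expectation over the unobserved tails is replaced by an average over simulated completions. All constants are illustrative, and the rare-event cross-entropy machinery of MCEM2 is omitted entirely.

```python
import random

def mcem_exponential(obs, cens_times, iters=30, m=200, seed=0):
    """Monte Carlo EM for the rate of exponential lifetimes when some
    observations are right-censored at known times."""
    rng = random.Random(seed)
    lam = 1.0                                   # arbitrary starting value
    n = len(obs) + len(cens_times)
    for _ in range(iters):
        total = sum(obs)
        for c in cens_times:
            # MC E-step: complete each censored time c as c + Exp(lam),
            # averaged over m simulated completions
            total += sum(c + rng.expovariate(lam) for _ in range(m)) / m
        lam = n / total                         # M-step: complete-data MLE
    return lam

rng = random.Random(1)
true_lam, cutoff = 0.5, 3.0
raw = [rng.expovariate(true_lam) for _ in range(400)]
obs = [t for t in raw if t <= cutoff]
cens = [cutoff] * (len(raw) - len(obs))
lam_hat = mcem_exponential(obs, cens)
print(round(lam_hat, 2))    # close to the true rate 0.5
```

For this model the EM fixed point coincides with the exact censored-data MLE (events divided by total time at risk), which makes the sketch easy to sanity-check.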
Alcohol's direct and indirect effects on men's self-reported sexual aggression likelihood.
Norris, Jeanette; Davis, Kelly Cue; George, William H; Martell, Joel; Heiman, Julia R
2002-11-01
This study investigated pathways through which alcohol's direct and indirect expectancy effects and direct physiological effects influenced men's self-reported sexual aggression likelihood after they read a violent pornographic story. The indirect effects of participants' affective responses and cognitive judgments of story characters were also examined. Male social drinkers (N = 135), recruited through newspaper ads in a large western city, were randomly assigned to one of three beverage conditions: alcohol, placebo or control. After completing pretest measures, subjects read a violent pornographic story and reported their sexual arousal, affect, cognitive judgments and sexual aggression likelihood. Pre-existing expectancies operated directly and interactively with alcohol consumption on reported sexual aggression likelihood. The influence of expectancies on sexual aggression likelihood also occurred indirectly through positive affect and cognitive judgments of assailant force and victim enjoyment. Situational consumption effects were influenced by cognitive judgments. The expectation of receiving alcohol indirectly affected sexual aggression likelihood through its effect on perception of the assailant's typicality. Among men who have contact with violent pornography, alcohol can have both direct and indirect effects on reported sexual aggression likelihood. In addition to the presence of situational myopia and expectancy effects, pre-existing expectancies can play a significant role both alone and interactively in affecting this tendency.
Simulation modelling analysis for small sets of single-subject data collected over time.
Borckardt, Jeffrey J; Nash, Michael R
2014-01-01
The behavioural data yielded by single subjects in naturalistic and controlled settings likely contain valuable information to scientists and practitioners alike. Although some of the properties unique to this data complicate statistical analysis, progress has been made in developing specialised techniques for rigorous data evaluation. There are no perfect tests currently available to analyse short autocorrelated data streams, but there are some promising approaches that warrant further development. Although many approaches have been proposed, and some appear better than others, they all have some limitations. When data sets are large enough (∼30 data points per phase), the researcher has a reasonably rich pallet of statistical tools from which to choose. However, when the data set is sparse, the analytical options dwindle. Simulation modelling analysis (SMA; described in this article) is a relatively new technique that appears to offer acceptable Type-I and Type-II error rate control with short streams of autocorrelated data. However, at this point, it is probably too early to endorse any specific statistical approaches for short, autocorrelated time-series data streams. While SMA shows promise, more work is needed to verify that it is capable of reliable Type-I and Type-II error performance with short serially dependent streams of data.
Kuster, T M; Schleppi, P; Hu, B; Schulin, R; Günthardt-Goerg, M S
2013-01-01
Being tolerant to heat and drought, oaks are promising candidates for future forestry in view of climate change in Central Europe. Air warming is expected to increase, and drought to decrease, soil N availability and thus N supply to trees. Here, we conducted a model ecosystem experiment in which mixed stands of young oaks (Quercus robur, Q. petraea and Q. pubescens) were grown on two different soils and subjected to four climate treatments during three growing seasons: air warming by 1-2 °C, drought periods (average precipitation reduction of 43-60%), a combination of these two treatments, and a control. In contrast to our hypotheses, neither air warming nor drought significantly affected N availability, whereas total amounts, vertical distribution and availability of soil N showed substantial differences between the two soils. While air warming had no effect on tree growth and N accumulation, the drought treatment reduced tree growth and increased, or tended to increase, N accumulation in the reduced biomass, indicating that growth was not limited by N. Furthermore, (15)N-labelling revealed that this accumulation was associated with an increased uptake of nitrate. On the basis of our results, climate change effects on N dynamics are expected to be less important in oak stands than reduced soil water availability. © 2012 German Botanical Society and The Royal Botanical Society of the Netherlands.
Modeling Double Subjectivity for Gaining Programmable Insights: Framing the Case of Uber
Directory of Open Access Journals (Sweden)
Loretta Henderson Cheeks
2017-09-01
The Internet is the premier platform that enables the emergence of new technologies. Online news is unstructured narrative text that embeds facts, frames, and amplification that can influence societal attitudes about technology adoption. Online news sources are carriers of voluminous amounts of news, reach significantly large audiences, and have no geographical or time boundaries. The interplay of complex and dynamical forces among authors and readers allows progressive emergent and latent properties to exhibit. Our concept of “double subjectivity” provides a new paradigm for exploring complementary programmable insights of deeply buried meanings in a system. The ability to understand internal embeddedness in a large collection of related articles is beyond the reach of existing computational tools, and is hence left to human readers with unscalable results. This paper uncovers the potential to utilize advanced machine learning in a new way to automate the understanding of implicit structures and their associated latent meanings to give an early human-level insight into emergent technologies, with a concrete example of “Uber”. This paper establishes the new concept of double subjectivity as an instrument for large-scale machine processing of unstructured text and introduces a social influence model for the discovery of distinct pathways into emerging technology, and hence an insight. The programmable insight reveals early spatial and temporal opinion shift monitoring in complex networks in a structured way for computational treatment and visualization.
Ghezelbash, Farshid; Shirazi-Adl, Aboulfazl; Plamondon, André; Arjmand, Navid; Parnianpour, Mohamad
2017-10-01
Underlying mechanisms of obesity-related back pain remain unexplored. Thus, we aim to determine the effect of obesity and its shapes on the spinal loads and the associated risks of injury. Obesity shapes were initially constructed by principal component analysis based on datasets on 5852 obese individuals. Spinal loads, cycles to vertebral failure and trunk stability margin were estimated in a subject-specific trunk model taking account of personalized musculature, passive ligamentous spine, obesity shapes, segmental weights, spine kinematics and bone mineral density. Three obesity shapes (mean and extreme abdominal circumferences) at three body weights (BWs) of 86, 98 and 109 kg were analyzed. Additional BW (12 kg) increased spinal loads by ~11.8%. Higher waist circumferences at identical BW increased spinal forces to the tune of ~20 kg additional BW and the risk of vertebral fatigue compression fracture by 3-7 times when compared with smaller waist circumferences. Forward flexion, greater BW and load in hands increased the trunk stability margin. Spinal loads markedly increased with BW, especially at greater waist circumferences. The risk of vertebral fatigue fracture also substantially increased at greater waist circumferences though not at smaller ones. Obesity and its shape should be considered in spine biomechanics.
Directory of Open Access Journals (Sweden)
Frolov Aleksey
2017-01-01
The creation of an innovation environment is examined in the context of the interaction of economic agents in the creation and consumption of innovative value, based on an infrastructure approach. The problem of the complexity of collecting heterogeneous data on the formation and distribution of innovative value, given the dynamic nature of the research object and its environment, is formulated. An information model providing a subject-independent representation of data on innovation value flows is proposed, which allows the processes of data collection and analysis to be automated while minimizing time costs. The article was prepared in the course of carrying out research work within the framework of the project part of the state task in the field of scientific activity in accordance with assignment 26.2758.2017/4.6 for 2017-2019, on the topic “System for analyzing the formation and distribution of the value of innovative products based on the infrastructure concept”.
CFD modeling of hydro-biochemical behavior of MSW subjected to leachate recirculation.
Feng, Shi-Jin; Cao, Ben-Yi; Li, An-Zheng; Chen, Hong-Xin; Zheng, Qi-Teng
2017-12-08
The most commonly used method of operating landfills more sustainably is to promote rapid biodegradation and stabilization of municipal solid waste (MSW) by leachate recirculation. The present study is an application of computational fluid dynamics (CFD) to the 3D modeling of leachate recirculation in bioreactor landfills using vertical wells. The objective is to model and investigate the hydrodynamic and biochemical behavior of MSW subject to leachate recirculation. The results indicate that the maximum recirculated leachate volume can be reached when vertical wells are set at the upper middle part of a landfill (H_W/H_T = 0.4), and increasing the screen length can be more helpful in enlarging the influence radius than increasing the well length (an increase in H_S/H_W from 0.4 to 0.6 results in an increase in influence radius from 6.5 to 7.7 m). The time to reach steady state of leachate recirculation decreases with the increase in pressure head; however, the time for leachate to drain away increases with the increase in pressure head. It also showed that methanogenic biomass inoculum of 1.0 kg/m³ can accelerate the volatile fatty acid depletion and increase the peak depletion rate to 2.7 × 10⁻⁶ kg/m³/s. The degradation-induced void change parameter exerts an influence on the processes of MSW biodegradation because a smaller parameter value results in a greater increase in void space.
Ludovico Messineo; Luigi Taranto-Montemurro; Scott A. Sands; Melania D. Oliveira Marques; Ali Azabarzin; David Andrew Wellman
2017-01-01
Background: Insomnia is a major public health problem in western countries. Previous small pilot studies showed that the administration of constant white noise can improve sleep quality, increase acoustic arousal threshold, and reduce sleep onset latency. In this randomized controlled trial, we tested the effect of surrounding broadband sound administration on sleep onset latency, sleep architecture, and subjective sleep quality in healthy subjects. Methods: Eighteen healthy subjects were studied ...
tmle: An R Package for Targeted Maximum Likelihood Estimation
Directory of Open Access Journals (Sweden)
Susan Gruber
2012-11-01
Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient, double-robust, semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates. Available parameters include the additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
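The substitution ("plug-in") idea that TMLE refines can be illustrated with a minimal numpy sketch of a G-computation estimator for the additive treatment effect. This is not the tmle package's targeted update step, and all data and coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated point-treatment data: covariate W, binary treatment A, outcome Y.
W = rng.normal(size=n)
pA = 1 / (1 + np.exp(-0.5 * W))        # treatment depends on W (confounding)
A = rng.binomial(1, pA)
Y = 2.0 * A + 1.5 * W + rng.normal(size=n)   # true additive effect = 2.0

# A naive difference in group means is confounded by W.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Substitution estimator: fit an outcome regression Q(A, W) by OLS, then
# average the predicted difference Q(1, W) - Q(0, W) over all units.
X = np.column_stack([np.ones(n), A, W])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
ones, zeros = np.ones(n), np.zeros(n)
ate = ((np.column_stack([ones, ones, W]) @ beta)
       - (np.column_stack([ones, zeros, W]) @ beta)).mean()
print(naive, ate)
```

The tmle package augments this plug-in estimate with a targeted fluctuation step involving the propensity score, which confers the double-robustness described above.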
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under a model of multiple independent reader sessions with detection errors due to unreliable radio communication. … The proposed approach is evaluated under different system parameters and compared with the conventional method via computer simulations, assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection errors.
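As a hedged illustration of ML tag-set cardinality estimation in framed-slotted ALOHA, here is a single error-free reader session (the paper's detection-error model is omitted), with all parameters hypothetical: the tag count is estimated by maximizing a binomial likelihood of the observed number of empty slots.

```python
import numpy as np

rng = np.random.default_rng(1)
F, N_true = 256, 200            # frame size (slots) and true tag population

# One framed-slotted ALOHA session: each tag picks one slot uniformly.
slots = rng.integers(0, F, size=N_true)
empty = F - len(np.unique(slots))       # observed number of empty slots

# Each slot is empty w.p. p0(N) = (1 - 1/F)^N; treat the empty count as
# Binomial(F, p0) and maximize the log-likelihood over candidate N.
def loglik(N):
    p0 = (1 - 1 / F) ** N
    return empty * np.log(p0) + (F - empty) * np.log1p(-p0)

cand = np.arange(1, 2000)
N_hat = cand[np.argmax([loglik(N) for N in cand])]
print(N_hat)
```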
Application of the Method of Maximum Likelihood to Identification of Bipedal Walking Robots
Czech Academy of Sciences Publication Activity Database
Dolinský, Kamil; Čelikovský, Sergej
(2017) ISSN 1063-6536. R&D Projects: GA ČR(CZ) GA17-04682S. Institutional support: RVO:67985556. Keywords: control; identification; maximum likelihood (ML); walking robots. Subject RIV: BC - Control Systems Theory. Impact factor: 3.882, year: 2016. http://ieeexplore.ieee.org/document/7954032/
Subjective element of events as intentional basis of the discursive model of communication
Directory of Open Access Journals (Sweden)
Y. S. Kravtsov
2016-03-01
The article reveals the phenomenological aspects of the communication process and argues for a more detailed analysis of developments in the subjective foundation of the information space. A new approach to communication is associated with a new look at the world and places new emphasis on the methodology of knowledge. According to the author, the specifics of the postmodern social situation stem from the fact that society has become transparent through a radical change in the technology of mass communication. The existing situation induces a derealization of communicative reality: the signal transmission rate within the range of our planet can be regarded as practically instantaneous, and the distance and time of the message are extremely small, approaching zero. The conclusion is therefore that another, more sophisticated communicative community has emerged, a virtual one, which establishes a fundamentally different circuit, or model, of the communicative process. In contrast to actual reality, which expresses integrity, stability and completeness, virtual reality is the source of difference and diversity. Virtuality is thus a phenomenon immanent in the very structure of existence, embodying the creative, opportunity-generating character of activity. The article shows that virtual reality is based on the principle of «feedback», which allows for the maximum entry of a person into the information space. The scale of virtual manifestations in social and individual life suggests the «virtualization» of society and encourages researchers to develop a new understanding of social reality in its relation to the reality of the virtual. At the same time, the virtual model is the result of the synthesis of human sensory and mental abilities taken in their generality, the idea of the correlation between man and objects in the world. Hence this model has a priori importance, because it incorporates all rationally isolated situations in which it may occur.
Hottenstein, Kristi N.
2017-01-01
Regulations for research involving human subjects have long been a critical issue in higher education. Federal public policy for research involving human subjects impacts institutions of higher education by requiring all federally funded research to be passed by an Institutional Review Board (IRB). Undergraduate research is no exception. Given the…
Directory of Open Access Journals (Sweden)
Pauline Gerus
Neuromusculoskeletal models are a common method to estimate muscle forces. Developing accurate neuromusculoskeletal models is a challenging task due to the complexity of the system and large inter-subject variability. The estimation of muscle forces is based on the mechanical properties of the tendon-aponeurosis complex. Most neuromusculoskeletal models use a generic definition of the tendon-aponeurosis complex based on in vitro tests, perhaps limiting their validity. Ultrasonography allows subject-specific estimates of the tendon-aponeurosis complex's mechanical properties. The aim of this study was to investigate the influence of subject-specific mechanical properties of the tendon-aponeurosis complex on a neuromusculoskeletal model of the ankle joint. Seven subjects performed isometric contractions from which the tendon-aponeurosis force-strain relationship was estimated. Hopping and running tasks were performed, and muscle forces were estimated using subject-specific tendon-aponeurosis and generic tendon properties. Two ultrasound probes positioned over the muscle-tendon junction and the mid-belly were combined with motion capture to estimate the in vivo tendon and aponeurosis strain of the medial head of the gastrocnemius muscle. The tendon-aponeurosis force-strain relationship was scaled for the other ankle muscles based on the tendon and aponeurosis length of each muscle measured by ultrasonography. The EMG-driven model was calibrated twice: once using the generic tendon definition and once using the subject-specific tendon-aponeurosis force-strain definition. The subject-specific definition led to higher muscle force estimates for the soleus muscle and the plantar-flexor group, and to a better model prediction of the ankle joint moment, compared with the model using a generic definition. Furthermore, the subject-specific tendon-aponeurosis definition leads to a decoupling behaviour between the muscle fibre and muscle-tendon unit
Planck 2013 results. XV. CMB power spectra and likelihood
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. 
M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ < 50, our likelihood exploits all Planck frequency channels from 30 to 353 GHz, separating the cosmological CMB signal from diffuse Galactic foregrounds through a physically motivated Bayesian component separation technique. At ℓ ≥ 50, we employ a correlated Gaussian likelihood approximation based on a fine-grained set of angular cross-spectra derived from multiple detector combinations between the 100, 143, and 217 GHz frequency channels, marginalising over power spectrum foreground templates. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK2 at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary PlanckEE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the
CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE
Energy Technology Data Exchange (ETDEWEB)
Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Hogg, David W., E-mail: iczekala@cfa.harvard.edu [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY, 10003 (United States)
2015-10-20
We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf.
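The core device, scoring residuals under a multivariate Gaussian whose covariance is built from a kernel rather than assuming independent pixels, can be sketched with a plain squared-exponential kernel (the paper's local line-outlier kernels are omitted; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 10.0, n)            # pixel coordinates (e.g. wavelength)

def sqexp(x, amp, ell, noise):
    # Squared-exponential kernel plus white-noise jitter on the diagonal.
    d = x[:, None] - x[None, :]
    return amp**2 * np.exp(-0.5 * (d / ell) ** 2) + noise**2 * np.eye(len(x))

# Draw a correlated residual spectrum from the "true" kernel.
K_true = sqexp(x, amp=1.0, ell=0.5, noise=0.1)
L = np.linalg.cholesky(K_true)
r = L @ rng.normal(size=n)

def gauss_loglik(r, K):
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (r @ np.linalg.solve(K, r) + logdet + len(r) * np.log(2 * np.pi))

ll_corr = gauss_loglik(r, K_true)                  # kernel-based covariance
ll_iid = gauss_loglik(r, np.var(r) * np.eye(n))    # iid model, matched variance
print(ll_corr, ll_iid)
```

The gap between the two log-likelihoods is exactly the information an iid noise model throws away, which is what drives the underestimated uncertainties described above.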
REDUCING THE LIKELIHOOD OF LONG TENNIS MATCHES
Directory of Open Access Journals (Sweden)
Tristan Barnett
2006-12-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed, causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may suffer negative after-effects from such a match, plays the winner of an average or shorter-length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game; this tends to occur more frequently on slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match.
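A toy dynamic-programming sketch (the paper itself uses generating functions) of why the advantage set drives long matches, assuming each game is held by its server independently with probability h:

```python
# Both players hold serve with probability h, games are independent, and
# servers alternate. A DP over game scores gives P(reaching 6-6); past 6-6
# an advantage set needs win-by-two games, so each further *pair* of games
# leaves the set unresolved exactly when the pair is split 1-1.
def p_reach_66(h):
    f = [[0.0] * 7 for _ in range(7)]     # f[a][b] = P(score reaches a-b)
    f[0][0] = 1.0
    for g in range(12):                   # games 0..11 take the score to 6-6
        p_a = h if g % 2 == 0 else 1 - h  # player A serves even-numbered games
        for a in range(7):
            b = g - a
            if not (0 <= b <= 6) or f[a][b] == 0.0:
                continue
            if (a >= 6 or b >= 6) and abs(a - b) >= 2:
                continue                  # set already over (e.g. 6-4)
            if a + 1 <= 6:
                f[a + 1][b] += f[a][b] * p_a
            if b + 1 <= 6:
                f[a][b + 1] += f[a][b] * (1 - p_a)
    return f[6][6]

def p_advantage_set_longer_than(h, max_games):
    split = h * h + (1 - h) * (1 - h)     # an extra pair of games ends 1-1
    extra_pairs = (max_games - 12) // 2   # assumes an even max_games >= 12
    return p_reach_66(h) * split ** extra_pairs

print(p_advantage_set_longer_than(0.9, 12), p_advantage_set_longer_than(0.9, 20))
```

With strong servers (h near 1) the split probability is close to 1, so the tail decays very slowly; a tiebreak set caps the length at 13 games instead.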
Dimension-independent likelihood-informed MCMC
Cui, Tiangang
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
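A minimal sketch of the simplest dimension-robust sampler in this family, the preconditioned Crank-Nicolson (pCN) proposal: its prior-reversible form makes the acceptance ratio depend on the likelihood alone. DILI's likelihood-informed operator weighting is omitted, and the toy Gaussian target below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
d, sigma, beta = 3, 0.5, 0.3
y = np.array([1.0, -0.5, 0.2])           # data: y = x + N(0, sigma^2) noise

def loglik(x):
    # Gaussian likelihood only; the N(0, I) prior is absorbed by the proposal.
    return -0.5 * np.sum((y - x) ** 2) / sigma**2

x = np.zeros(d)
samples = []
for _ in range(40000):
    # pCN proposal: prior-reversible, so accept with the likelihood ratio only.
    xp = np.sqrt(1 - beta**2) * x + beta * rng.normal(size=d)
    if np.log(rng.uniform()) < loglik(xp) - loglik(x):
        x = xp
    samples.append(x)

post_mean = np.mean(samples[5000:], axis=0)
exact = y / (1 + sigma**2)               # posterior mean under the N(0, I) prior
print(post_mean, exact)
```

Because the proposal is well defined on function space, the same code behaves identically as d grows, which is the dimension-independence the abstract refers to.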
Methods to model particulate matter clarification of unit operations subject to unsteady loadings.
Spelman, David; Sansalone, John J
2017-05-15
Stormwater unit operations (UOs), and to a much lesser extent wastewater UOs, are subject to unsteady hydrodynamic and particulate matter (PM) fluxes. Simulating fully transient clarification of hetero-disperse PM requires much greater computational expense than steady simulations. An alternative to fully unsteady methods is the class of stepwise steady (SS) methods, which use stepwise steady flow transport and fate to approximate the unsteady PM clarification of a UO during transient hydraulic loadings such as rainfall-runoff. The rationale is reduced computational effort for computational fluid dynamics (CFD) compared with simulating the continuous unsteadiness of such events. The implicit solution stepwise steady (IS3) method is one approach that builds upon previous SS methods: it computes steady flows that are representative of unsteady PM transport throughout an unsteady loading. This departs from some previous SS methods that assume PM fate can be simulated with an instantaneous clarifier (basin) influent flow rate coupled with a PM input. In this study, various SS methods were tested for basins of varying size and residence time to examine PM fate. Differences between SS methods were a function of the turnover fraction, indicating the role of unsteady flow rates on PM transport for larger basins with longer residence times; the breakpoint turnover fraction was between two and three. The IS3 method best approximated the unsteady behavior of larger basins. These methods identified limitations of standard event-based loading analysis for larger basins: for basins with a turnover fraction less than two, the majority of effluent PM did not originate from the event-based flow, but from previous event loadings or existing storage. Inter-event and multiple-event processes and interactions, which depend on this inflow turnover fraction, are not accounted for by single event-based inflow models. The results suggest the use of long-term continuous modeling
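The stepwise steady rationale can be caricatured without CFD: approximate a runoff hydrograph by a sequence of steady flows and apply a steady clarification model (here a simple overflow-rate surrogate, not the paper's CFD) at each step. Everything below, geometry, settling velocity and hydrograph included, is hypothetical:

```python
import numpy as np

# Triangular runoff hydrograph discretized into stepwise steady flow rates.
t = np.linspace(0.0, 3600.0, 37)                    # s, 100 s steps
Q = np.interp(t, [0, 900, 3600], [0.0, 0.6, 0.0])   # m^3/s

A, vs, c_in, V = 200.0, 1e-3, 50.0, 800.0  # plan area (m^2), settling velocity
                                           # (m/s), influent PM (g/m^3), basin
                                           # volume (m^3)
mass_in = mass_out = 0.0
dt = t[1] - t[0]
for q in (Q[:-1] + Q[1:]) / 2:             # one steady flow per step
    eta = min(1.0, vs * A / q) if q > 0 else 1.0    # overflow-rate removal
    mass_in += c_in * q * dt
    mass_out += c_in * q * dt * (1 - eta)

removal = 1 - mass_out / mass_in
turnover = float(((Q[:-1] + Q[1:]) / 2 * np.diff(t)).sum()) / V  # event vol / V
print(removal, turnover)
```

The turnover fraction computed at the end is the quantity the study uses to decide when this event-only accounting breaks down (below a turnover of about two, effluent PM is dominated by carry-over from earlier events).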
Hypothesis likelihood function estimation for synthetic aperture radar targets
Fister, Thomas; Garber, Frederick D.; Sawtelle, Steven C.; Withman, Raymond L.
1993-10-01
The work described in this paper focuses on recent progress in radar signal processing and target recognition techniques developed in support of WL/AARA target recognition programs. The goal of the program is to develop methodologies for evaluating hypotheses in a model-based framework. In this paper, we describe a hypothesis evaluation strategy that is predicated on a generalized likelihood function framework and allows for incomplete or inaccurate descriptions of the observed unknown target. The target hypothesis evaluation procedure we have developed begins with a structural analysis by means of parametric modeling of the several radar scattering centers. The energy, location, dispersion, and shape of all measured target scattering centers are parametrized. The resulting structural description is used to represent each target and, subsequently, to evaluate the hypotheses for each of the targets in the candidate set.
Individual, team, and coach predictors of players' likelihood to aggress in youth soccer.
Chow, Graig M; Murray, Kristen E; Feltz, Deborah L
2009-08-01
The purpose of this study was to examine personal and socioenvironmental factors of players' likelihood to aggress. Participants were youth soccer players (N = 258) and their coaches (N = 23) from high school and club teams. Players completed the Judgments About Moral Behavior in Youth Sports Questionnaire (JAMBYSQ; Stephens, Bredemeier, & Shields, 1997), which assessed athletes' stage of moral development, team norm for aggression, and self-described likelihood to aggress against an opponent. Coaches were administered the Coaching Efficacy Scale (CES; Feltz, Chase, Moritz, & Sullivan, 1999). Using multilevel modeling, results demonstrated that the team norm for aggression at the athlete and team level were significant predictors of athletes' self likelihood to aggress scores. Further, coaches' game strategy efficacy emerged as a positive predictor of their players' self-described likelihood to aggress. The findings contribute to previous research examining the socioenvironmental predictors of athletic aggression in youth sport by demonstrating the importance of coaching efficacy beliefs.
A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews.
Chen, Yong; Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao
2017-04-01
Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood (CL) method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios, and diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the nonconvergence problem, which is nontrivial when the number of studies is relatively small, the computational simplicity, and some robustness to model misspecifications. Simulation studies show that the CL method maintains high relative efficiency compared to that of the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma.
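A minimal sketch of the composite-likelihood idea: replace the joint bivariate likelihood with the sum of the two marginal binomial log-likelihoods, ignoring the within-study sensitivity/specificity correlation. This is a fixed-effect toy with invented counts, not the paper's random-effects formulation:

```python
import numpy as np

# Per-study 2x2 counts from a hypothetical diagnostic review:
# (true positives, false negatives) and (true negatives, false positives).
tp = np.array([45, 30, 60, 22]); fn = np.array([5, 10, 8, 3])
tn = np.array([80, 55, 90, 40]); fp = np.array([20, 15, 10, 12])

# Composite log-likelihood: sum of the two marginal binomial log-likelihoods.
def cl(se, sp):
    return (np.sum(tp * np.log(se) + fn * np.log(1 - se))
            + np.sum(tn * np.log(sp) + fp * np.log(1 - sp)))

# In this fixed-effect toy the maximizers are simply the pooled proportions:
se_hat = tp.sum() / (tp.sum() + fn.sum())
sp_hat = tn.sum() / (tn.sum() + fp.sum())

# Sanity-check by grid search over (sensitivity, specificity).
grid = np.linspace(0.01, 0.99, 99)
best = max((cl(a, b), a, b) for a in grid for b in grid)
print(se_hat, sp_hat, best[1], best[2])
```

Because the composite criterion separates in its two arguments, no bivariate normal integral or joint optimization is needed, which is the source of the robustness to non-convergence mentioned above.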
Gauging the likelihood of stable cavitation from ultrasound contrast agents
Bader, Kenneth B; Holland, Christy K
2015-01-01
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of a UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single-cycle or steady-state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs. PMID:23221109
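The two indices can be contrasted in a couple of lines (the example pressure and frequency are hypothetical):

```python
import math

def mechanical_index(pr_mpa, f_mhz):
    """MI = Pr / sqrt(f): gauges the likelihood of inertial cavitation."""
    return pr_mpa / math.sqrt(f_mhz)

def icav(pr_mpa, f_mhz):
    """ICAV = Pr / f: proposed gauge for UCA-nucleated stable cavitation."""
    return pr_mpa / f_mhz

print(mechanical_index(1.2, 4.0), icav(1.2, 4.0))
```

The linear (rather than square-root) frequency dependence of ICAV mirrors the linear frequency dependence of the subharmonic and rupture thresholds reported above.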
Directory of Open Access Journals (Sweden)
Silvia Pianigiani
2015-12-01
The large deformation of the human breast threatens proper nodule tracking when the subject's mammograms are used as pre-planning data for biopsy. However, techniques capable of accurately supporting surgeons during biopsy are missing. Finite Element (FE) models are at the basis of currently investigated methodologies to track nodule displacement. Nonetheless, the impact of breast material modeling on the mechanical response of its tissues (e.g. tumors) is not clear. This study proposes a subject-specific FE model of the breast, obtained from anthropometric measurements, to predict breast large deformation. A healthy-breast subject-specific FE parametric model was developed and validated against cranio-caudal (CC) and medio-lateral oblique (MLO) mammograms. The model was successively modified to include nodules, and utilized to investigate the effect of nodule size, typology and material modeling on nodule shift under CC, MLO and gravity loads. Results show that a Mooney-Rivlin material model can estimate healthy breast large deformation. For a pathological breast under CC compression, the nodule displacement is very close to zero when a linear elastic material model is used. Finally, when nodules are modeled including tumor material properties, under CC, MLO or gravity loads, nodule shift shows a ∼15% average relative difference.
A Predictive Model of the Prevalence of Delirium in Elderly Subjects Admitted to Nursing Homes.
Perez-Ros, Pilar; Martinez-Arnau, Francisco Miguel; Baixauli-Alacreu, Susana; Garcia-Gollarte, Jose Fermin; Tarazona-Santabalbina, Francisco
2017-11-20
Delirium is common in geriatric patients admitted to nursing homes, with an incidence of 22-79% among long-term residents. To establish a predictive model of the risk of delirium episodes in a sample of elderly people living in nursing homes. A retrospective, cross-sectional case-control study covering a period of 12 consecutive months (April 2014 - March 2015) was carried out. The included cases had suffered at least one episode of delirium during the study period. Sociodemographic and clinical variables as well as risk factors predisposing to or triggering episodes of delirium were recorded. A total of 193 cases and 123 controls were recruited. The mean age of the cases was 89.6 years (SD 6.9), and 75.1% were women. The mean age of the controls was 84.7 years (SD 7.42), and 75.6% were women. The prevalence of delirium was 60.7%. The presence of infections (with the exception of urinary tract infections) was the variable offering the best predictive capacity (OR=7.08; 95%CI: 3.30-15.02; p<0.001). Other predictors of delirium were also identified, such as a previous diagnosis of dementia (OR=3.14; 95%CI: 1.81-5.45; p<0.001), the use of anticholinergic drugs (OR=2.98;95%CI: 1.34-6.60; p=0.007), a diagnosis of depression (OR=1.92; 95%CI: 1.03-3.56; p=0.039), and urinary incontinence (OR=1.73; 95%CI: 0.97-3.08; p=0.065). The area under the curve (AUC) was 0.794 (95%CI: 0.74-0.84; p<0.001). The prevalence of delirium among elderly subjects admitted to nursing homes was 60.7%. Infections (with the exception of urinary tract infections), dementia, anticholinergic drug use, depression and urinary incontinence were predictive of the presence of delirium.
Gaussian likelihood-based inference for non-invertible MA(1) processes with S alpha S noise
Davis, RA; Mikosch, T
1998-01-01
A limit theory was developed in the papers of Davis and Dunsmuir (1996) and Davis et al. (1995) for the maximum likelihood estimator, based on a Gaussian likelihood, of the moving average parameter theta in an MA(1) model when theta is equal to or close to 1. Using the local parameterization, beta =
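A hedged numpy sketch of the Gaussian likelihood of an MA(1) series, evaluated exactly through its banded covariance matrix and maximized by grid search over theta; near theta = 1 this likelihood develops the nonstandard ("pile-up") behavior the paper analyzes. The data are simulated and sigma is fixed at 1:

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta_true = 300, 0.5
e = rng.normal(size=n + 1)
x = e[1:] + theta_true * e[:-1]          # MA(1): x_t = e_t + theta * e_{t-1}

def gauss_loglik(theta, x):
    # Exact Gaussian likelihood via the banded MA(1) covariance (sigma^2 = 1):
    # gamma_0 = 1 + theta^2 on the diagonal, gamma_1 = theta off the diagonal.
    m = len(x)
    K = ((1 + theta**2) * np.eye(m)
         + theta * (np.eye(m, k=1) + np.eye(m, k=-1)))
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (x @ np.linalg.solve(K, x) + logdet + m * np.log(2 * np.pi))

grid = np.linspace(-0.95, 0.95, 39)      # step 0.05, restricted to |theta| < 1
theta_hat = grid[np.argmax([gauss_loglik(t, x) for t in grid])]
print(theta_hat)
```

Restricting the grid to |theta| < 1 picks the invertible representation; the paper's interest is precisely the boundary regime where theta approaches 1 and this standard machinery breaks down.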
Stochastic Maximum Likelihood (SML) parametric estimation of overlapped Doppler echoes
Directory of Open Access Journals (Sweden)
E. Boyer
2004-11-01
This paper investigates the area of overlapped echo data processing. In such cases, classical methods, such as Fourier-like techniques or pulse-pair methods, fail to estimate the first three spectral moments of the echoes because of their lack of resolution. A promising method, based on a modelization of the covariance matrix of the time series and on a Stochastic Maximum Likelihood (SML) estimation of the parameters of interest, has recently been introduced in the literature. This method has been tested on simulations and on a few spectra from actual data, but no exhaustive investigation of the SML algorithm has been conducted on actual data: this paper fills this gap. The radar data came from the thunderstorm campaign that took place at the National Astronomy and Ionospheric Center (NAIC) in Arecibo, Puerto Rico, in 1998.
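The "first three spectral moments" that both the classical and SML methods target, echo power, mean Doppler velocity, and spectral width, can be computed directly for a clean, well-resolved spectrum; the overlapped-echo case is exactly where this direct computation fails. A synthetic Gaussian spectrum with hypothetical parameters:

```python
import numpy as np

v = np.linspace(-50.0, 50.0, 1001)                # velocity axis (m/s)
true_power, true_mean, true_width = 2.0, 7.5, 3.0
S = (true_power / (np.sqrt(2 * np.pi) * true_width)
     * np.exp(-0.5 * ((v - true_mean) / true_width) ** 2))

dv = v[1] - v[0]
power = np.sum(S) * dv                            # 0th moment: echo power
mean_v = np.sum(v * S) * dv / power               # 1st moment: mean velocity
width = np.sqrt(np.sum((v - mean_v) ** 2 * S) * dv / power)  # 2nd: width
print(power, mean_v, width)
```

When two echoes overlap, these integrals mix the two populations, which is why a parametric covariance-matrix model is needed instead.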
Average Likelihood Methods for Code Division Multiple Access (CDMA)
2014-05-01
Final technical report, May 2014; reporting period October 2011 - October 2013. Approved for public release; distribution unlimited. … the precision parameter from the joint probability of the code matrix. For a fully loaded CDMA signal, the average likelihood depends exclusively on feature …
Likelihood ratios: Clinical application in day-to-day practice
Directory of Open Access Journals (Sweden)
Parikh Rajul
2009-01-01
In this article we provide an introduction to the use of likelihood ratios in clinical ophthalmology. Likelihood ratios permit the best use of clinical test results to establish diagnoses for the individual patient. Examples and step-by-step calculations demonstrate the estimation of pretest probability, pretest odds, and calculation of posttest odds and posttest probability using likelihood ratios. The benefits and limitations of this approach are discussed.
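The step-by-step arithmetic the article describes is short enough to state exactly; the example below uses a hypothetical 30% pretest probability and a positive test with LR+ = 8:

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    # probability -> odds, multiply by LR, odds -> probability
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

p = posttest_probability(0.30, 8.0)
print(p)
```

A likelihood ratio of 1 leaves the probability unchanged, which is the sense in which such a test is uninformative.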
Stepwise iterative maximum likelihood clustering approach.
Sharma, Alok; Shigemizu, Daichi; Boroevich, Keith A; López, Yosvany; Kamatani, Yoichiro; Kubo, Michiaki; Tsunoda, Tatsuhiko
2016-08-24
Biological/genetic data is a complex mix of various forms and topologies, which makes it difficult to analyze. The abundance of such data in the modern era requires sophisticated statistical methods that can analyze it in a reasonable amount of time. In many biological/genetic analyses, such as genome-wide association study (GWAS) analysis or multi-omics data analysis, the data must be clustered into sub-categories to understand the subtypes of populations, cancers or other diseases. Traditionally, the k-means clustering algorithm has been the dominant clustering method, owing to its simplicity and reasonable accuracy. Many other clustering methods, including support vector clustering, have been developed, but do not perform well with biological data, either for computational reasons or because they fail to identify clusters. The proposed SIML clustering algorithm has been tested on microarray and SNP datasets and compared with a number of clustering algorithms. On the MLL dataset, SIML achieved the highest clustering accuracy and Rand score in 4/9 cases; on the SRBCT dataset, in 3/5 cases; on the ALL subtype dataset, it achieved the highest clustering accuracy in 5/7 cases and the highest Rand score in 4/7 cases. In addition, SIML's overall clustering accuracy on a 3-cluster problem using SNP data was 97.3, 94.7 and 100%, respectively, for the three clusters. In this paper, considering the nature of biological data, we propose a maximum likelihood clustering approach using a stepwise iterative procedure. The advantage of this method is that it uses not only distance information but also variance information for clustering, and it can cluster data that appear in overlapping and complex forms. The experimental results illustrate its performance and usefulness relative to other clustering methods.
A Matlab package of this method (SIML) is provided at the web-link http://www.riken.jp/en/research/labs/ims/med_sci_math/ .
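The core idea the abstract highlights, that cluster assignment should use variance as well as distance, can be illustrated with a toy 1-D maximum-likelihood clusterer. This is a simplified sketch of the idea, not the authors' SIML algorithm; the function name and quantile-based initialization are assumptions:

```python
import math

def ml_cluster_1d(xs, k, iters=20):
    """Toy 1-D maximum-likelihood clustering: each point goes to the
    cluster whose Gaussian (mean AND variance) gives it the highest
    log-likelihood, then cluster parameters are re-estimated.
    Unlike k-means, the variance term enters the assignment rule."""
    srt = sorted(xs)
    means = [srt[(2 * c + 1) * len(xs) // (2 * k)] for c in range(k)]
    vars_ = [1.0] * k
    labels = [0] * len(xs)
    for _ in range(iters):
        for i, x in enumerate(xs):
            labels[i] = max(range(k), key=lambda c:
                            -0.5 * math.log(2 * math.pi * vars_[c])
                            - (x - means[c]) ** 2 / (2 * vars_[c]))
        for c in range(k):
            pts = [x for x, lab in zip(xs, labels) if lab == c]
            if pts:
                means[c] = sum(pts) / len(pts)
                vars_[c] = max(sum((p - means[c]) ** 2 for p in pts) / len(pts), 1e-6)
    return labels, means, vars_
```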
Hierarchical Linear Modeling Meta-Analysis of Single-Subject Design Research
Gage, Nicholas A.; Lewis, Timothy J.
2014-01-01
The identification of evidence-based practices continues to provoke issues of disagreement across multiple fields. One area of contention is the role of single-subject design (SSD) research in providing scientific evidence. The debate about SSD's utility centers on three issues: sample size, effect size, and serial dependence. One potential…
The Synthesis of Single-Subject Experimental Data: Extensions of the Basic Multilevel Model
Van den Noortgate, Wim; Moeyaert, Mariola; Ugille, Maaike; Beretvas, Tasha; Ferron, John
2014-01-01
Due to an increasing interest in the use of single-subject experimental designs (SSEDs), appropriate techniques are needed to analyze this type of data. The purpose of this paper proposal is to present four studies (Beretvas, Hembry, Van den Noortgate, & Ferron, 2013; Bunuan, Hembry & Beretvas, 2013; Moeyaert, Ugille, Ferron, Beretvas,…
An experimentally validated fatigue model for wood subjected to tension perpendicular to the grain
DEFF Research Database (Denmark)
Clorius, Christian Odin; Pedersen, Martin Uhre; Hoffmeyer, Preben
2009-01-01
This study presents an experimental investigation of fatigue in wood subjected to tension perpendicular to the grain. The study has been designed with special reference to the influence of the frequency of loading. The investigation reveals an interaction between number of load oscillations...
The Relationship of Coping, Self-Worth, and Subjective Well-Being: A Structural Equation Model
Smedema, Susan Miller; Catalano, Denise; Ebener, Deborah J.
2010-01-01
The purpose of this study was to determine the relationship between various coping-related variables and the evaluation of self-worth and subjective well-being among persons with spinal cord injury. Positive coping variables included hope, proactive coping style, and sense of humor, whereas negative coping variables included perceptions of stress,…
DEFF Research Database (Denmark)
Mantel, Claire; Bech, Søren; Korhonen, Jari
2015-01-01
signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight...
Shaker, Matineh; Erdogmus, Deniz; Dy, Jennifer; Bouix, Sylvain
2017-04-01
We present a method to estimate a multivariate Gaussian distribution of diffusion tensor features in a set of brain regions based on a small sample of healthy individuals, and use this distribution to identify imaging abnormalities in subjects with mild traumatic brain injury. The multivariate model receives a priori knowledge in the form of a neighborhood graph imposed on the precision matrix, which models brain region interactions, and an additional L1 sparsity constraint. The model is then estimated using the graphical LASSO algorithm and the Mahalanobis distance of healthy and TBI subjects to the distribution mean is used to evaluate the discriminatory power of the model. Our experiments show that the addition of the a priori neighborhood graph results in significant improvements in classification performance compared to a model which does not take into account the brain region interactions or one which uses a fully connected prior graph. In addition, we describe a method, using our model, to detect the regions that contribute the most to the overall abnormality of the DTI profile of a subject's brain. Copyright © 2017 Elsevier B.V. All rights reserved.
Variant of Wodarz and Nowak's virus dynamics model subject to switching input
Mendoza Meza, Magno Enrique
2013-10-01
This paper deals with the application of a switching input called hybrid on-off control (HOOC) to the mathematical model developed in Landi et al. [1], which is a variant of the mathematical model developed in Wodarz and Nowak [2] and studied in Wodarz and Nowak [3]; this model is called the variant of Wodarz and Nowak's virus dynamics model (hereafter denoted VWN). The HOOC proved effective in maintaining the concentration of healthy CD4+ cells close to the chosen threshold level for the dynamics model of long-term non-progressive patients (hereafter denoted the LTNP model) under strong and weak therapies, and for the dynamics model of fast-progressor patients (hereafter denoted the FP model) under strong therapy. For the FP model under weak therapy, neither HOOC nor any continuous therapy is effective in maintaining the concentration of healthy CD4+ cells close to the chosen threshold level.
Park, Gwansik; Kim, Taewung; Forman, Jason; Panzer, Matthew B; Crandall, Jeff R
2017-08-01
The goal of this study was to predict the structural response of the femoral shaft under dynamic loading conditions using subject-specific finite element (SS-FE) models and to evaluate the prediction accuracy of the models in relation to model complexity. In total, SS-FE models of 31 femur specimens were developed. Using those models, dynamic three-point bending and combined loading tests (bending with four different levels of axial compression) of bare femurs were simulated, and the prediction capabilities of five different levels of model complexity were evaluated based on the impact force time histories: baseline, mass-based scaled, structure-based scaled, geometric SS-FE, and heterogenized SS-FE models. Among the five levels of model complexity, the geometric SS-FE and the heterogenized SS-FE models showed statistically significant improvement in response prediction capability compared to the other model formulations, whereas the difference between the two SS-FE models was negligible. This result indicates that the geometric SS-FE models, containing detailed geometric information from CT images with homogeneous linear isotropic elastic material properties, would be an optimal level of model complexity for prediction of the structural response of femoral shafts under dynamic loading conditions. The average and standard deviation of the RMS errors of the geometric SS-FE models across all 31 cases were 0.46 kN and 0.66 kN, respectively. This study highlights the contribution of geometric variability to the structural response variation of femoral shafts subjected to dynamic loading conditions and the potential of geometric SS-FE models to capture that variation.
Directory of Open Access Journals (Sweden)
Daniel L. Rabosky
2006-01-01
Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time to alternative models where rates have remained constant over time. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
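LASER's rate-shift tests contrast a constant-rate likelihood with rate-variable alternatives. As a simplified analogue (not LASER's birth-death fits to phylogenetic branching times), one can treat inter-speciation waiting times as exponential and compare a one-rate model with a one-shift model by AIC; the function names and the parameter count for the shift model are assumptions:

```python
import math

def exp_loglik(waits):
    """Maximized log-likelihood of i.i.d. exponential waiting times,
    with the rate at its MLE, lambda_hat = n / sum(waits)."""
    n, s = len(waits), sum(waits)
    lam = n / s
    return n * math.log(lam) - lam * s

def rate_shift_aic(waits):
    """Compare a constant-rate model with a one-shift model (different
    rates before and after a breakpoint, chosen by maximum likelihood)
    via AIC; lower AIC wins."""
    aic_const = 2 * 1 - 2 * exp_loglik(waits)          # one rate
    ll_shift = max(exp_loglik(waits[:i]) + exp_loglik(waits[i:])
                   for i in range(1, len(waits)))
    aic_shift = 2 * 3 - 2 * ll_shift                   # two rates + breakpoint
    return aic_const, aic_shift
```

With homogeneous waiting times the constant-rate model is preferred; a clear speed-up in speciation flips the preference to the shift model.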
Audio-visual Classification and Fusion of Spontaneous Affect Data in Likelihood Space
Nicolaou, Mihalis A.; Gunes, Hatice; Pantic, Maja
2010-01-01
This paper focuses on audio-visual (using facial expression, shoulder and audio cues) classification of spontaneous affect, utilising generative models for classification (i) in terms of Maximum Likelihood Classification with the assumption that the generative model structure in the classifier is
de Witte, K.; Verschelde, M.
2010-01-01
To study education as a complex production process in a noisy and heterogeneous setting, this paper suggests using a stochastic frontier model estimated by a local maximum likelihood approach (LMLSF). The LMLSF smoothly combines the virtues of the non-parametric Data Envelopment Analysis model
Stan, A.; Munteanu, M.
1974-01-01
Expressions are derived for the displacements of a many-storied building subjected to the action of classical weaving looms located at different levels of the building. The building is regarded as a vertical fixed beam with a uniformly distributed mass as well as concentrated masses at each level. The calculation relations are obtained under the assumption of harmonic variation of the forces acting at each level, as well as the assumption of narrow-band stationary random excitation forces.
Uchiyama, Takanori; Uchida, Ryusei
The purpose of this study is to develop a new modeling technique for quantitative evaluation of spasticity in the upper limbs of hemiplegic patients. Each subject lay on a bed, and his forearm was supported with a jig to measure the elbow joint angle. The subject was instructed to relax and not to resist the step-like load which was applied to extend the elbow joint. The elbow joint angle and electromyogram (EMG) of the biceps muscle, triceps muscle and brachioradialis muscle were measured. First, the step-like response was approximated with a proposed mathematical model based on musculoskeletal and physiological characteristics by the least-squares method. The proposed model involved an elastic component depending on both muscle activities and elbow joint angle. The responses were approximated well with the proposed model. Next, the torque generated by the elastic component was estimated. The normalized elastic torque was approximated with a damped sinusoid by the least-squares method. The reciprocal of the time constant and the natural frequency of the normalized elastic torque were calculated, and they varied depending on the grades of the modified Ashworth scale of the subjects. It was suggested that the proposed modeling technique would provide a good quantitative index of spasticity, as shown in the relationship between the reciprocal of the time constant and the natural frequency.
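The damped-sinusoid least-squares step can be sketched as follows. The model is linear in the two amplitudes once the decay rate and frequency are fixed, so those are solved in closed form inside a grid search over (sigma, omega); the function name, the grid-search strategy, and the parameterization are illustrative assumptions, not the paper's implementation:

```python
import math

def fit_damped_sinusoid(ts, ys, sigmas, omegas):
    """Least-squares fit of y(t) ~ exp(-sigma*t)*(A*cos(omega*t) + B*sin(omega*t)).
    Grid search over (sigma, omega); A and B solved via 2x2 normal equations.
    Returns (sse, sigma, omega, A, B) for the best grid point."""
    best = None
    for sig in sigmas:
        for om in omegas:
            c = [math.exp(-sig * t) * math.cos(om * t) for t in ts]
            s = [math.exp(-sig * t) * math.sin(om * t) for t in ts]
            scc = sum(x * x for x in c)
            sss = sum(x * x for x in s)
            scs = sum(a * b for a, b in zip(c, s))
            syc = sum(y * x for y, x in zip(ys, c))
            sys_ = sum(y * x for y, x in zip(ys, s))
            det = scc * sss - scs * scs
            if abs(det) < 1e-12:
                continue
            A = (syc * sss - sys_ * scs) / det
            B = (sys_ * scc - syc * scs) / det
            sse = sum((y - A * ci - B * si) ** 2
                      for y, ci, si in zip(ys, c, s))
            if best is None or sse < best[0]:
                best = (sse, sig, om, A, B)
    return best
```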
Conlin, Sarah E; Douglass, Richard P; Ouch, Staci
2017-10-26
The present study examined the link between discrimination and Diener's (1984) three components of subjective well-being (positive and negative affect and life satisfaction) among a cisgender sample of lesbian, gay, and bisexual (LGB) adults. Specifically, we investigated internalized homonegativity and expectations of rejection as potential mediators of the links between discrimination and subjective well-being among a sample of 215 participants. Results from our structural equation model demonstrated a strong, positive direct link between discrimination and negative affect. Discrimination also had small, negative indirect effects on life satisfaction through our two mediators. Interestingly, neither discrimination nor our two mediators were related with positive affect, demonstrating the need for future research to uncover potential buffers of this link. Finally, our model evidenced configural, metric, and scalar invariance suggesting that our model applies well for both women and men. Practical implications and future directions for research are discussed.
Kim, S.; Riazi, H.; Shin, C.; Seo, D.
2013-12-01
Due to the large dimensionality of the state vector and sparsity of observations, the initial conditions (IC) of water quality models are subject to large uncertainties. To reduce the IC uncertainties in operational water quality forecasting, an ensemble data assimilation (DA) procedure for the Hydrologic Simulation Program - Fortran (HSPF) model has been developed and evaluated for the Kumho River Subcatchment of the Nakdong River Basin in Korea. The procedure, referred to herein as MLEF-HSPF, uses the maximum likelihood ensemble filter (MLEF), which combines strengths of variational assimilation (VAR) and the ensemble Kalman filter (EnKF). The control variables involved in the DA procedure include the bias correction factors for mean areal precipitation and mean areal potential evaporation, the hydrologic state variables, and the water quality state variables such as water temperature, dissolved oxygen (DO), biochemical oxygen demand (BOD), ammonium (NH4), nitrate (NO3), phosphate (PO4) and chlorophyll a (CHL-a). Due to the very large dimensionality of the inverse problem, accurately specifying the parameters for the DA procedure is a challenge. Systematic sensitivity analysis is carried out to identify the optimal parameter settings. To evaluate the robustness of MLEF-HSPF, we use multiple subcatchments of the Nakdong River Basin. In evaluation, we focus on the performance of MLEF-HSPF in predicting extreme water quality events.
Piccorelli, Annalisa V; Schluchter, Mark D
2012-12-20
Numerous methods for joint analysis of longitudinal measures of a continuous outcome y and a time to event outcome T have recently been developed either to focus on the longitudinal data y while correcting for nonignorable dropout, to predict the survival outcome T using the longitudinal data y, or to examine the relationship between y and T. The motivating problem for our work is the joint modeling of serial measurements of pulmonary function (FEV1% predicted) and survival in cystic fibrosis (CF) patients using registry data. Within the CF registry data, an additional complexity is that not all patients have been followed from birth; therefore, some patients have delayed entry into the study while others may have been missed completely, giving rise to a left truncated distribution. This paper shows that, in joint modeling situations where y and T are not independent, it is necessary to account for this left truncation to obtain valid parameter estimates for both survival and the longitudinal marker. We assume a linear random effects model for FEV1% predicted, where the random intercept and slope of FEV1% predicted, along with a specified transformation of the age at death, follow a trivariate normal distribution. We develop an expectation-maximization algorithm for maximum likelihood estimation of parameters, which takes left truncation and right censoring of survival times into account. The methods are illustrated using simulation studies and using data from CF patients in a registry followed at Rainbow Babies and Children's Hospital, Cleveland, OH. Copyright © 2012 John Wiley & Sons, Ltd.
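The left-truncation correction the abstract stresses can be shown with a deliberately minimal example. The paper uses a trivariate normal model fit by EM; here, purely for illustration, survival is taken as exponential, so each subject's likelihood contribution becomes f(t_i)/S(a_i) and the MLE has a closed form (function names are assumptions):

```python
def exp_rate_left_truncated(entries, deaths):
    """MLE of an exponential hazard under left truncation: subject i is
    observed only from entry age a_i, so its likelihood contribution is
    f(t_i)/S(a_i) = lam * exp(-lam * (t_i - a_i)). The MLE is events
    divided by observed (post-entry) exposure."""
    exposure = sum(t - a for a, t in zip(entries, deaths))
    return len(deaths) / exposure

def exp_rate_naive(deaths):
    """Ignoring truncation counts unobserved pre-entry time as exposure,
    biasing the estimated hazard downward."""
    return len(deaths) / sum(deaths)
```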
Likelihood ratio tests in rare variant detection for continuous phenotypes.
Zeng, Ping; Zhao, Yang; Liu, Jin; Liu, Liya; Zhang, Liwei; Wang, Ting; Huang, Shuiping; Chen, Feng
2014-09-01
It is believed that rare variants play an important role in human phenotypes; however, the detection of rare variants is extremely challenging due to their very low minor allele frequency. In this paper, the likelihood ratio test (LRT) and restricted likelihood ratio test (ReLRT) are proposed to test the association of rare variants based on the linear mixed effects model, where a group of rare variants are treated as random effects. Like the sequence kernel association test (SKAT), a state-of-the-art method for rare variant detection, LRT and ReLRT can effectively overcome the problem of directionality of effect inherent in the burden test in practice. By taking full advantage of the spectral decomposition, exact finite sample null distributions for LRT and ReLRT are obtained by simulation. We perform extensive numerical studies to evaluate the performance of LRT and ReLRT, and compare them to the burden test, SKAT and SKAT-O. The simulations show that LRT and ReLRT correctly control the type I error, and the control is robust to the weights chosen and the number of rare variants under study. LRT and ReLRT behave similarly to the burden test when all the causal rare variants share the same direction of effect, and outperform SKAT across various situations. When both positive and negative effects exist, LRT and ReLRT suffer only minor power reductions compared to the other two competing methods; in this case, an additional finding from our simulations is that SKAT-O is no longer the optimal test, and its power is even lower than that of SKAT. The exome sequencing SNP data from Genetic Analysis Workshop 17 were employed to illustrate the proposed methods, and interesting results are described. © 2014 John Wiley & Sons Ltd/University College London.
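For readers unfamiliar with the LRT mechanics underlying the abstract, the generic recipe is: refit under the null, take twice the log-likelihood gap, and compare with a chi-square reference. Note that the paper's variance-component setting requires a simulated null distribution rather than the standard chi-square; the exponential example and function names below are illustrative only:

```python
import math

def exp_loglik(data, lam):
    """Exponential log-likelihood at a given rate."""
    return len(data) * math.log(lam) - lam * sum(data)

def lrt_exponential(data, lam0):
    """Plain likelihood ratio test of H0: rate == lam0.
    The statistic 2*(ll at the MLE - ll at lam0) is compared with
    3.841, the 5% critical value of chi-square with 1 df."""
    lam_hat = len(data) / sum(data)
    stat = 2.0 * (exp_loglik(data, lam_hat) - exp_loglik(data, lam0))
    return stat, stat > 3.841
```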
van Drunen, Erwin J; Chiew, Yeong Shiong; Chase, J Geoffrey; Lambermont, Bernard; Janssen, Nathalie; Desaive, Thomas
2013-01-01
Modelling the respiratory mechanics of mechanically ventilated (MV) patients can provide useful information to guide MV therapy. Two model-based methods were evaluated based on data from three piglets with experimentally induced acute respiratory distress syndrome (ARDS) and validated against values available from ventilators. A single compartment lung model with integral-based parameter identification was found to be effective in capturing fundamental respiratory mechanics during inspiration. The trends matched clinical expectation and provided better resolution than clinically derived linear model metrics. An expiration time constant model also captured the same trend in respiratory elastance. However, the assumption of constant resistance and a slightly higher fitting error yield less insight than the single compartment model. Further research is required to confirm its application in titrating to optimal MV settings.
Rodríguez Martínez, José Antonio
2010-01-01
In this doctoral thesis, the thermo-viscoplastic behaviour of metallic alloys used for structural protection purposes has been analyzed. The study includes the proposition of advanced constitutive relations and their integration into numerical models. These numerical models are validated for impact problems within the low-to-intermediate range of impact velocities (up to 85 m/s). The advanced constitutive relations derived are based on the Rusinek-Klepaczko model, whose validity is extended to met...
Modelling of E. coli distribution in coastal areas subjected to combined sewer overflows.
De Marchis, Mauro; Freni, Gabriele; Napoli, Enrico
2013-01-01
Rivers, lakes and the sea were for a long time the natural receivers of raw urban waste and storm waters, but the low sustainability of this practice, population growth and a renewed environmental sensibility have increased research interest in the analysis and mitigation of the impact of urban waters on receiving water bodies (RWB). In Europe, the integrated modelling of drainage systems and RWB has been promoted as a promising approach for implementing the Water Framework Directive. Particular interest is given to the fate of pathogens, especially Escherichia coli, in all cases in which an interaction between the population and the RWB is foreseen. The present paper proposes an integrated water quality model involving the analysis of several sewer systems (SS) discharging their polluting overflows on the coast in a sensitive marine environment. From a modelling point of view, the proposed application integrates one-dimensional drainage system models with a complex three-dimensional model analysing the propagation in space and time of E. coli in the coastal marine area. The integrated approach was tested in a real case study (the Acicastello bay in Italy), where data were available both for SS model calibration and for RWB propagation model calibration. The analysis shows good agreement between the model and monitored data. The integrated model was demonstrated to be a valuable tool for investigating pollutant propagation and highlighting the most impacted areas.
Planck 2013 results. XV. CMB power spectra and likelihood
DEFF Research Database (Denmark)
Tauber, Jan; Bartlett, J.G.; Bucher, M.
2014-01-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...
Planck intermediate results: XVI. Profile likelihoods for cosmological parameters
DEFF Research Database (Denmark)
Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.
2014-01-01
We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...
A Survey of the Likelihood Approach to Bioequivalence Trials
Choi, Leena; Caffo, Brian; Rohde, Charles
2009-01-01
Bioequivalence trials are abbreviated clinical trials whereby a generic drug or new formulation is evaluated to determine if it is "equivalent" to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area, which we believe are indicative of systemic defects in the frequentist approach, and which the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence. We discuss how the likelihood approach is useful to present the evidence for both average and population bioequivalence within a unified framework. We also examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods offer a viable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. PMID:18618422
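The profile-likelihood construction advocated here can be sketched on the simplest possible case, a normal mean with the variance profiled out; the 1/8 cutoff for the likelihood interval is a conventional choice in pure-likelihood inference, and the function names and grid limits are assumptions (the paper itself profiles bioequivalence parameters, not a plain mean):

```python
import math

def profile_loglik_mu(xs, mu):
    """Profile log-likelihood of a normal mean: the nuisance variance is
    maximized out in closed form, sigma2_hat(mu) = mean((x - mu)^2)."""
    n = len(xs)
    s2 = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

def likelihood_interval(xs, cutoff=0.125, steps=4001):
    """Collect all mu on a grid whose normalized profile likelihood
    exceeds the cutoff (a conventional 1/8 likelihood interval)."""
    n = len(xs)
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / n)
    lmax = profile_loglik_mu(xs, xbar)
    lo, hi = xbar, xbar
    for i in range(steps):
        mu = xbar - 6 * s + 12 * s * i / (steps - 1)
        if math.exp(profile_loglik_mu(xs, mu) - lmax) >= cutoff:
            lo, hi = min(lo, mu), max(hi, mu)
    return lo, hi
```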
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate inertial, physical, and dynamic parameters of a human body vibratory model consistent with the physical structure of the human body that also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the developed human body model. Also, reasonable agreement between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work attempts to develop an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
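The transmissibility ratio that drives the parameter fitting above has a closed form for a single base-excited damped oscillator (the 13-DOF model stacks such elements and needs full modal analysis, so this is only the standard single-DOF building block; the function name is an assumption):

```python
import math

def transmissibility(freq, nat_freq, zeta):
    """Platform-to-mass transmissibility of a base-excited single-DOF
    damped oscillator, TR = sqrt((1 + (2*zeta*r)^2) /
    ((1 - r^2)^2 + (2*zeta*r)^2)) with frequency ratio r = freq/nat_freq."""
    r = freq / nat_freq
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)
```

TR equals 1 at zero frequency and again at r = sqrt(2) for any damping, with amplification in between, which is why the dominant TR peaks carry the damping information used in the paper.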
Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio
2016-01-01
Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied
Accurate Finite Element Modelling of Chipboard Single-Stud Floor Panels subjected to Dynamic Loads
DEFF Research Database (Denmark)
Sjöström, A.; Flodén, O.; Persson, K.
2012-01-01
In multi-storey buildings, the use of lightweight material has many advantages. The low weight, the low energy consumption and the sustainability of the material are some attractive benefits from using lightweight materials. Compared with heavier structures, i.e. concrete, the challenge... in lightweight buildings subjected to different types of loads.
A Lumped-Parameter Subject-Specific Model of Blood Volume Response to Fluid Infusion
Directory of Open Access Journals (Sweden)
Ramin Bighamian
2016-08-01
Full Text Available This paper presents a lumped-parameter model that can reproduce blood volume response to fluid infusion. The model represents the fluid shift between the intravascular and interstitial compartments as the output of a hypothetical feedback controller that regulates the ratio between the volume changes in the intravascular and interstitial fluid at a target value (called target volume ratio). The model is characterized by only three parameters: the target volume ratio, feedback gain (specifying the speed of fluid shift), and initial blood volume. This model can obviate the need to incorporate complex mechanisms involved in the fluid shift in reproducing blood volume response to fluid infusion. The ability of the model to reproduce real-world blood volume response to fluid infusion was evaluated by fitting it to a series of data reported in the literature. The model reproduced the data accurately with average error and root-mean-squared error (RMSE) of 0.6 % and 9.5 % across crystalloid and colloid fluids when normalized by the underlying responses. Further, the parameters derived for the model showed physiologically plausible behaviors. It was concluded that this simple model may accurately reproduce a variety of blood volume responses to fluid infusion throughout different physiological states by fitting three parameters to a given dataset. This offers a tool that can quantify the fluid shift in a dataset given the measured fractional blood volumes.
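One plausible reading of the two-compartment feedback structure can be simulated in a few lines. The proportional-controller form below is an illustrative assumption, not the paper's exact formulation, and the initial-blood-volume parameter is omitted because only volume changes are simulated:

```python
def simulate_volume_response(alpha, gain, infusion_rate, t_infuse, t_end, dt=0.01):
    """Euler simulation of a hedged two-compartment sketch: infused fluid
    enters the intravascular compartment, and a proportional controller
    shifts fluid to the interstitium until the interstitial/intravascular
    volume changes reach the target ratio alpha."""
    dv_iv = dv_is = 0.0
    t = 0.0
    while t < t_end:
        u = infusion_rate if t < t_infuse else 0.0
        shift = gain * (alpha * dv_iv - dv_is)   # intravascular -> interstitial
        dv_iv += (u - shift) * dt
        dv_is += shift * dt
        t += dt
    return dv_iv, dv_is
```

At steady state the infused volume is conserved and splits so that the interstitial change is alpha times the intravascular change.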
Nitroglycerin provocation in normal subjects is not a useful human migraine model?
DEFF Research Database (Denmark)
Tvedskov, J F; Iversen, Helle Klingenberg; Olesen, J
2010-01-01
Provoking delayed migraine with nitroglycerin in migraine sufferers is a cumbersome model. Patients are difficult to recruit, migraine comes on late and variably and only 50-80% of patients develop an attack. A model using normal volunteers would be much more useful, but it should be validated by...
Representing time-varying cyclic dynamics using multiple-subject state-space models
Chow, Sy-Miin; Hamaker, E.L.; Fujita, Frank; Boker, Steven M.
2009-01-01
Over the last few decades, researchers have become increasingly aware of the need to consider intraindividual variability in the form of cyclic processes. In this paper, we review two contemporary cyclic state-space models: Young and colleagues' dynamic harmonic regression model and Harvey and
Subjective Values of Quality of Life Dimensions in Elderly People. A SEM Preference Model Approach
Elosua, Paula
2011-01-01
This article proposes a Thurstonian model in the framework of Structural Equation Modelling (SEM) to assess preferences among quality of life dimensions for the elderly. Data were gathered by a paired comparison design in a sample comprised of 323 people aged from 65 to 94 years old. Five dimensions of quality of life were evaluated: Health,…
Maori Cultural Efficacy and Subjective Wellbeing: A Psychological Model and Research Agenda
Houkamau, Carla A.; Sibley, Chris G.
2011-01-01
Maori, the indigenous peoples of New Zealand, experience a range of negative outcomes. Psychological models and interventions aiming to improve outcomes for Maori tend to be founded on a "culture-as-cure" model. This view promotes cultural efficacy as a critical resilience factor that should improve outcomes for Maori. This is a founding…
Likelihood transform: making optimization and parameter estimation easier
Wang, Yan
2014-01-01
Parameterized optimization and parameter estimation is of great importance in almost every branch of modern science, technology and engineering. A practical issue in the problem is that when the parameter space is large and the available data is noisy, the geometry of the likelihood surface in the parameter space will be complicated. This makes searching and optimization algorithms computationally expensive, sometimes even beyond reach. In this paper, we define a likelihood transform which can make the structure of the likelihood surface much simpler, hence reducing the intrinsic complexity and easing optimization significantly. We demonstrate the properties of the likelihood transform by applying it to a simplified gravitational wave chirp signal search. For a signal with a signal-to-noise ratio of 20, the likelihood transform has made a deterministic template-based search possible for the first time, which turns out to be 1000 times more efficient than an exhaustive grid-based search. The method in principle can be a...
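The problem the abstract describes, a rugged likelihood surface that defeats local search, can be illustrated on a 1-D toy surface. The "transform" below is plain local smoothing, a generic stand-in chosen for illustration; it is not the paper's definition of the likelihood transform:

```python
import numpy as np

# Oscillatory 1-D log-likelihood surface: a toy stand-in for the rugged
# surfaces that arise in chirp-signal searches.
theta = np.linspace(-5.0, 5.0, 1001)
loglike = -0.5 * theta**2 + 2.0 * np.cos(8 * np.pi * theta)

# Illustrative smoothing "transform": averaging out the ripples leaves a
# single broad basin that simple hill-climbing can follow.
kernel = np.ones(51) / 51.0
smoothed = np.convolve(loglike, kernel, mode="same")

def count_local_maxima(a):
    return int(np.sum((a[1:-1] > a[:-2]) & (a[1:-1] > a[2:])))

rugged_argmax = float(theta[np.argmax(loglike)])
smooth_argmax = float(theta[np.argmax(smoothed)])
```

The smoothed surface has far fewer local maxima while keeping its global peak in essentially the same place, which is the property that makes a deterministic search tractable.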
Robust Quasi–LPV Model Reference FTC of a Quadrotor Uav Subject to Actuator Faults
Directory of Open Access Journals (Sweden)
Rotondo Damiano
2015-03-01
Full Text Available A solution for fault tolerant control (FTC) of a quadrotor unmanned aerial vehicle (UAV) is proposed. It relies on model reference-based control, where a reference model generates the desired trajectory. Depending on the type of reference model used for generating the reference trajectory, and on the assumptions about the availability and uncertainty of fault estimation, different error models are obtained. These error models are suitable for passive FTC, active FTC and hybrid FTC, the latter being able to merge the benefits of active and passive FTC while reducing their respective drawbacks. The controller is generated using results from the robust linear parameter varying (LPV) polytopic framework, where the vector of varying parameters is used to schedule between uncertain linear time invariant (LTI) systems. The design procedure relies on solving a set of linear matrix inequalities (LMIs) in order to achieve regional pole placement and H∞ norm bounding constraints. Simulation results are used to compare the different FTC strategies.
Modeling of delamination damage evolution in laminated composites subjected to low velocity impact
Lo, David C.; Allen, David H.
1994-01-01
This study examines the delamination evolution, under quasi-static conditions, of laminated polymeric composites with mechanically nonlinear resin-rich interfaces. The constitutive behavior of the interface is represented by two models developed by Needleman and Tvergaard. These models assume that the interfacial tractions, which are functions of the interfacial displacement alone, behave similarly to the interatomic forces generated during interatomic separation. The interface material's parameters control the load at which the delamination growth initiates and the final delamination size. A wide range of damage accumulation responses has been obtained by varying the model parameters. These results show that Tvergaard's model is the better suited of the two for predicting damage evolution for the configurations examined.
Directory of Open Access Journals (Sweden)
Katherine M O'Donnell
Full Text Available Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given that an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and
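The binomial mixture structure described above can be sketched as a likelihood evaluation for one site: abundance N is latent, and a count at a visit is binomial in N with an effective detection probability that is the product of availability and conditional detection. The parameter values are invented for illustration, and this sketch is not the authors' code:

```python
import numpy as np
from scipy.stats import binom, poisson

def site_loglike(counts, lam, phi, p, n_max=200):
    """Marginal log-likelihood of repeated counts at one site.

    N ~ Poisson(lam); count | N ~ Binomial(N, phi * p), where phi is
    availability (animal on the surface) and p is conditional detection.
    """
    n = np.arange(n_max + 1)
    prior = poisson.pmf(n, lam)                 # P(N = n)
    like = np.ones_like(prior)
    for c in counts:
        like = like * binom.pmf(c, n, phi * p)  # P(count | N = n)
    return float(np.log((prior * like).sum()))

ll = site_loglike([12, 9, 15], lam=40.0, phi=0.5, p=0.6)
```

Note that with a single visit only the product `phi * p` enters the likelihood, which is exactly the confounding of availability and conditional detection that the abstract warns about; separating them requires the richer sampling design the authors use.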
Modelling of Freight Trains Classification Using Queueing System Subject to Breakdowns
Directory of Open Access Journals (Sweden)
Michal Dorda
2013-01-01
Full Text Available The paper presents a mathematical model and a simulation model of the freight trains classification process. We model the process as a queueing system with a server which is represented by a hump at a marshalling yard. We distinguish two types of shunting over the hump; primary shunting represents the classification of inbound freight trains over the hump (it is the primary function of marshalling yards), and secondary shunting is, for example, represented by the classification of trains of wagons entering the yard via industrial sidings. Inbound freight trains are considered to be customers in the system, and all needs of secondary shunting are failures of the hump, because performing secondary shunting occupies the hump and therefore inbound freight trains cannot be sorted. All random variables of the model are considered to be exponentially distributed, with the exception of customer service times, which are Erlang distributed. The mathematical model was created using the method of stages and can be solved numerically employing a suitable software tool. The simulation model was created using coloured Petri nets. Both models are tested in conditions of a marshalling yard.
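The breakdown mechanism above (the hump alternating between serving inbound trains and being occupied by secondary shunting, with exponential times) can be sketched as an alternating renewal simulation. The rates are invented for illustration, and this is a stand-in for, not a reproduction of, the paper's queueing model:

```python
import random

random.seed(7)

FAIL_RATE = 0.1    # rate of secondary-shunting requests (per hour, assumed)
REPAIR_RATE = 2.0  # rate of completing secondary shunting (per hour, assumed)

def availability(horizon=200_000.0):
    """Fraction of time the hump is free for primary shunting.

    Up-times (until a secondary-shunting request) and down-times (its
    duration) are both exponential, as in the abstract's model.
    """
    t, up_time = 0.0, 0.0
    while t < horizon:
        up = random.expovariate(FAIL_RATE)
        down = random.expovariate(REPAIR_RATE)
        up_time += up
        t += up + down
    return up_time / t

a_hat = availability()
a_theory = REPAIR_RATE / (REPAIR_RATE + FAIL_RATE)  # steady-state availability
```

The simulated fraction converges to the closed-form availability R/(R+F) of an alternating renewal process, the quantity that determines how much hump capacity remains for classifying inbound trains.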
A numerical model on PVB laminated windshield subjected to headform low-speed impact
Xu, X. Q.; Liu, B. H.; Wang, Y.; Li, Y. B.; Xu, J.
2013-07-01
Polyvinyl butyral (PVB) laminated windshield is one of the most important components in automobiles for protecting vulnerable road users. First, a windshield finite element (FE) model is set up using a piece of interlayer (PVB) sandwiched by two glass layers. Four parameters which have a critical impact on the simulation results, i.e. glass Young's modulus, glass plastic failure strain, the PVB stress-strain curve and the boundary condition, are examined to measure their influence on the windshield model. Each windshield model is impacted by a standard headform impactor at a speed of 8 m/s on the LS-DYNA platform, and the results are compared with dynamic experiments of PVB laminated windshields under headform impact to find the most accurate FE model. Furthermore, the most accurate FE windshield model is impacted by the standard headform impactor at various impact velocities (6.6 m/s-11.2 m/s) and angles (60°-90°) and compared with the parametric dynamic experiments of PVB laminated windshields to verify the windshield finite element model. This paper provides a useful finite element model of windshields for further systematic numerical studies based on the finite element method to explore the energy absorption capability and safety design of PVB laminated windshields.
Modeling thermal responses in human subjects following extended exposure to radiofrequency energy
Directory of Open Access Journals (Sweden)
Foster Kenneth R
2004-02-01
Full Text Available Abstract Background This study examines the use of a simple thermoregulatory model for the human body exposed to extended (45-minute) exposures to radiofrequency/microwave (RF/MW) energy at different frequencies (100, 450, 2450 MHz) and under different environmental conditions. The exposure levels were comparable to or above present limits for human exposure to RF energy. Methods We adapted a compartmental model for the human thermoregulatory system developed by Hardy and Stolwijk, adding power to the torso skin, fat, and muscle compartments to simulate exposure to RF energy. The model uses values for parameters for "standard man" that were originally determined by Hardy and Stolwijk, with no additional adjustment. The model predicts changes in core and skin temperatures, sweat rate, and changes in skin blood flow as a result of RF energy exposure. Results The model yielded remarkably good quantitative agreement between predicted and measured changes in skin and core temperatures, and qualitative agreement between predicted and measured changes in skin blood flow. The model considerably underpredicted the measured sweat rates. Conclusions The model, with previously determined parameter values, was successful in predicting major aspects of human thermoregulatory response to RF energy exposure over a wide frequency range, and at different environmental temperatures. The model was most successful in predicting changes in skin temperature, and it provides insights into the mechanisms by which the heat added to the body by RF energy is dissipated to the environment. Several factors are discussed that may have contributed to the failure to account properly for sweat rate. Some features of the data, in particular heating of the legs and ankles during exposure at 100 MHz, would require a more complex model than that considered here.
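The compartmental idea above, heat deposited by RF energy into a tissue node and exchanged with neighbouring nodes and the environment, can be sketched as a minimal two-node (core/skin) model. All coefficients are invented toy values, not Hardy-Stolwijk parameters, and the sweating and blood-flow terms of the real model are omitted:

```python
# Minimal two-node core/skin thermal model with RF power deposited in the
# skin node. Illustrative units and coefficients only.
C_CORE, C_SKIN = 3000.0, 400.0   # heat capacities (assumed)
K_CS = 30.0                      # core-to-skin conductance (assumed)
K_ENV = 20.0                     # skin-to-environment conductance (assumed)
T_ENV = 28.0                     # ambient temperature, degC
RF_POWER = 50.0                  # RF power absorbed in the skin node (assumed)

dt = 0.01
t_end = 45.0 / 60.0              # 45-minute exposure, in hours
tc, ts = 37.0, 33.0              # initial core and skin temperatures
for _ in range(round(t_end / dt)):
    q_cs = K_CS * (tc - ts)      # heat flow core -> skin
    q_env = K_ENV * (ts - T_ENV) # heat flow skin -> environment
    tc += (-q_cs) / C_CORE * dt  # metabolic heat production omitted
    ts += (q_cs - q_env + RF_POWER) / C_SKIN * dt
skin_rise = ts - 33.0
```

Even this toy version reproduces the qualitative finding of the abstract: skin temperature rises noticeably during the exposure while the core temperature barely moves.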
Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs
Energy Technology Data Exchange (ETDEWEB)
Nix, D.A.; Hogden, J.E.
1998-12-01
The authors describe Maximum-Likelihood Continuity Mapping (MALCOM) as an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automata architecture, MALCOM has a continuous hidden space (a continuity map) that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a far more realistic model of the speech production process. The authors support this claim by generating continuity maps for three speakers and using the resulting MALCOM paths to predict measured speech articulator data. The correlations between the MALCOM paths (obtained from only the speech acoustics) and the actual articulator movements average 0.77 on an independent test set used to train neither MALCOM nor the predictor. On average, this unsupervised model achieves 92% of the performance obtained using the corresponding supervised method.
Empirical likelihood for cumulative hazard ratio estimation with covariate adjustment.
Dong, Bin; Matthews, David E
2012-06-01
In medical studies, it is often of scientific interest to evaluate the treatment effect via the ratio of cumulative hazards, especially when those hazards may be nonproportional. To deal with nonproportionality in the Cox regression model, investigators usually assume that the treatment effect has some functional form. However, to do so may create a model misspecification problem because it is generally difficult to justify the specific parametric form chosen for the treatment effect. In this article, we employ empirical likelihood (EL) to develop a nonparametric estimator of the cumulative hazard ratio with covariate adjustment under two nonproportional hazard models, one that is stratified, as well as a less restrictive framework involving group-specific treatment adjustment. The asymptotic properties of the EL ratio statistic are derived in each situation and the finite-sample properties of EL-based estimators are assessed via simulation studies. Simultaneous confidence bands for all values of the adjusted cumulative hazard ratio in a fixed interval of interest are also developed. The proposed methods are illustrated using two different datasets concerning the survival experience of patients with non-Hodgkin's lymphoma or ovarian cancer. © 2011, The International Biometric Society.
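The quantity at the centre of this abstract, a ratio of cumulative hazards between two groups, can be illustrated with a plain Nelson-Aalen estimator on simulated data. This is a simpler nonparametric stand-in for the empirical-likelihood machinery the paper develops, shown only to make the target quantity concrete:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard (no tied times assumed)."""
    order = np.argsort(times)
    t_sorted, d_sorted = times[order], events[order]
    at_risk = np.arange(len(times), 0, -1)   # subjects still at risk
    return t_sorted, np.cumsum(d_sorted / at_risk)

rng = np.random.default_rng(8)
n = 4000
t1 = rng.exponential(1.0, n)    # group 1: constant hazard 1
t2 = rng.exponential(0.5, n)    # group 2: constant hazard 2
e = np.ones(n)                  # no censoring in this sketch

g1_t, g1_H = nelson_aalen(t1, e)
g2_t, g2_H = nelson_aalen(t2, e)

# Cumulative hazard ratio at t = 0.5; the true value is 2 here.
H1 = g1_H[np.searchsorted(g1_t, 0.5)]
H2 = g2_H[np.searchsorted(g2_t, 0.5)]
ratio = float(H2 / H1)
```

Under these exponential hazards the ratio is constant in time; the paper's contribution is inference for this ratio when the hazards are nonproportional and covariate adjustment is needed.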
Xie, Bing; Wang, Jianliu
2015-01-01
The purpose of this study was to develop three-dimensional finite element models of the whole pelvic support systems of subjects with and without pelvic organ prolapse (POP) that can be used to simulate anterior and posterior wall prolapses. Magnetic resonance imaging was performed in one healthy female volunteer (55 years old, para 2) and one patient (56 years old, para 1) with anterior vaginal wall prolapse. Contours of the pelvic structures were traced by a trained gynecologist. Smoothing of the models was conducted and attachments among structures were established. Finite element models of the pelvic support system with anatomic details were established for both the healthy subject and the POP patient. The models include the uterus, vagina with cavity, cardinal and uterosacral ligaments, levator ani muscle, rectum, bladder, perineal body, pelvis, obturator internus, and coccygeal muscle. Major improvements were provided in the modeling of the supporting ligaments and the vagina with high anatomic precision. These anatomically accurate models can be expected to allow study of the mechanism of POP in more realistic physiological conditions. The resulting knowledge may provide theoretical help for clinical prevention and treatment of POP. PMID:25710033
Directory of Open Access Journals (Sweden)
Shuang Ren
2015-01-01
Full Text Available The purpose of this study was to develop three-dimensional finite element models of the whole pelvic support systems of subjects with and without pelvic organ prolapse (POP) that can be used to simulate anterior and posterior wall prolapses. Magnetic resonance imaging was performed in one healthy female volunteer (55 years old, para 2) and one patient (56 years old, para 1) with anterior vaginal wall prolapse. Contours of the pelvic structures were traced by a trained gynecologist. Smoothing of the models was conducted and attachments among structures were established. Finite element models of the pelvic support system with anatomic details were established for both the healthy subject and the POP patient. The models include the uterus, vagina with cavity, cardinal and uterosacral ligaments, levator ani muscle, rectum, bladder, perineal body, pelvis, obturator internus, and coccygeal muscle. Major improvements were provided in the modeling of the supporting ligaments and the vagina with high anatomic precision. These anatomically accurate models can be expected to allow study of the mechanism of POP in more realistic physiological conditions. The resulting knowledge may provide theoretical help for clinical prevention and treatment of POP.
Directory of Open Access Journals (Sweden)
James O Lloyd-Smith
2007-02-01
Full Text Available The negative binomial distribution is used commonly throughout biology as a model for overdispersed count data, with attention focused on the negative binomial dispersion parameter, k. A substantial literature exists on the estimation of k, but most attention has focused on datasets that are not highly overdispersed (i.e., those with k ≥ 1), and the accuracy of confidence intervals estimated for k is typically not explored. This article presents a simulation study exploring the bias, precision, and confidence interval coverage of maximum-likelihood estimates of k from highly overdispersed distributions. In addition to exploring small-sample bias on negative binomial estimates, the study addresses estimation from datasets influenced by two types of event under-counting, and from disease transmission data subject to selection bias for successful outbreaks. Results show that maximum likelihood estimates of k can be biased upward by small sample size or under-reporting of zero-class events, but are not biased downward by any of the factors considered. Confidence intervals estimated from the asymptotic sampling variance tend to exhibit coverage below the nominal level, with overestimates of k comprising the great majority of coverage errors. Estimation from outbreak datasets does not increase the bias of k estimates, but can add significant upward bias to estimates of the mean. Because k varies inversely with the degree of overdispersion, these findings show that overestimation of the degree of overdispersion is very rare for these datasets.
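Maximum-likelihood estimation of the dispersion parameter k from a highly overdispersed sample (k < 1) can be sketched as follows; the true parameter values are invented, and the mean is profiled out using its MLE (the sample mean), a standard simplification rather than the article's exact simulation design:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import nbinom

rng = np.random.default_rng(0)

mu_true, k_true = 2.0, 0.3                  # highly overdispersed: k < 1
p_true = k_true / (k_true + mu_true)        # scipy/numpy parameterisation
data = rng.negative_binomial(k_true, p_true, size=2000)

def neg_loglike(log_k):
    """Profile negative log-likelihood in log(k); mu is set to its MLE."""
    k = np.exp(log_k)
    p = k / (k + data.mean())
    return -nbinom.logpmf(data, k, p).sum()

res = minimize_scalar(neg_loglike, bounds=(-5.0, 5.0), method="bounded")
k_hat = float(np.exp(res.x))
```

Optimising in log(k) keeps the estimate positive; repeating this over many simulated datasets (and over under-counted or outbreak-selected variants) is how the biases reported in the abstract are measured.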
Directory of Open Access Journals (Sweden)
Coenraad J Hattingh
2013-01-01
Full Text Available Background: Social anxiety disorder (SAD) is characterised by abnormal fear and anxiety in social situations. Functional magnetic resonance imaging (fMRI) is a brain imaging technique that can be used to illustrate neural activation to emotionally salient stimuli. However, no attempt has yet been made to statistically collate fMRI studies of brain activation, using the activation likelihood-estimate technique, in response to emotion recognition tasks in individuals with social anxiety disorder. Methods: A systematic search of fMRI studies of neural responses to socially emotive cues in SAD and GSP was undertaken. Activation likelihood-estimate (ALE) meta-analysis, a voxel-based meta-analytic technique, was used to estimate the most significant activations during emotional recognition. Results: 7 studies were eligible for inclusion in the meta-analysis, constituting a total of 91 subjects with SAD or GSP, and 93 healthy controls. The most significant areas of activation during emotional recognition versus neutral stimuli in individuals with social anxiety disorder compared to controls were: bilateral amygdala, left medial temporal lobe encompassing the entorhinal cortex, left medial aspect of the inferior temporal lobe encompassing perirhinal cortex and parahippocampus, right anterior cingulate, right globus pallidus, and the distal tip of the right postcentral gyrus. Conclusion: The results are consistent with neuroanatomic models of the role of the amygdala in fear conditioning, and the importance of the limbic circuitry in mediating anxiety symptoms.
A coupled hygro-thermo-mechanical model for concrete subjected to variable environmental conditions
National Research Council Canada - National Science Library
Gasch, Tobias; Malm, Richard; Ansell, Anders
2016-01-01
.... Variations of these fields must therefore be included implicitly in an analysis. This paper presents a coupled hygro-thermo-mechanical model for hardened concrete based on the framework of the Microprestress-Solidification theory...
A time-varying subjective quality model for mobile streaming videos with stalling events
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.
2015-09-01
Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
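The recency/hysteresis effect described above, a stall depressing instantaneous QoE which then recovers over time, can be sketched as a toy trace model. The exponential-recovery form, the 1-5 quality scale, and all constants are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def qoe_trace(duration, stalls, drop=2.0, tau=15.0, base=5.0):
    """Toy instantaneous-QoE trace: each stall at time s (seconds) causes a
    drop that decays exponentially with time constant tau (hysteresis)."""
    t = np.arange(duration)
    q = np.full(duration, base, dtype=float)
    for s in stalls:
        after = t >= s
        q[after] -= drop * np.exp(-(t[after] - s) / tau)
    return np.clip(q, 1.0, 5.0)   # keep scores on a 1-5 opinion scale

few = qoe_trace(120, [10])
many = qoe_trace(120, [10, 40, 70, 100])
```

In this toy form, frequency and position of stalls both matter: four spread-out stalls depress the time-averaged score well below a single early stall, mirroring the interacting factors the abstract lists.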
Subject-specific left ventricular dysfunction modeling using composite material mechanics approach
Haddad, Seyed Mohammad Hassan; Karami, Elham; Samani, Abbas
2017-03-01
Diverse cardiac conditions such as myocardial infarction and hypertension can lead to diastolic dysfunction as a prevalent cardiac condition. Diastolic dysfunctions can be diagnosed through different adverse mechanisms such as abnormal left ventricle (LV) relaxation, filling, and diastolic stiffness. This paper is geared towards evaluating diastolic stiffness and measuring the LV blood pressure non-invasively. Diastolic stiffness is an important parameter which can be exploited for more accurate diagnosis of diastolic dysfunction. For this purpose, a finite element (FE) LV mechanical model, which works based on a novel composite material model of the cardiac tissue, was utilized. Here, this model was tested for inversion-based applications where it was applied for estimating the cardiac tissue passive stiffness mechanical properties as well as diastolic LV blood pressure. To this end, the model was applied to simulate diastolic inflation of the human LV. The start-diastolic LV geometry was obtained from MR image data segmentation of a healthy human volunteer. The obtained LV geometry was discretized into a FE mesh before FE simulation was conducted. The LV tissue stiffness and diastolic LV blood pressure were adjusted through optimization to achieve the best match between the calculated LV geometry and the one obtained from imaging data. The performance of the LV mechanical simulations using the optimal values of tissue stiffness and blood pressure was validated by comparing the geometrical parameters of the dilated LV model as well as the stress and strain distributions through the LV model with available measurements reported on the LV dilation.
Maximum Likelihood Learning of Conditional MTE Distributions
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables...
Directory of Open Access Journals (Sweden)
U. Schneider
2009-01-01
Full Text Available The paper presents the structural application of a new thermal induced strain model for concrete – the TIS-Model. An advanced transient concrete model (ATCM) is applied with the material model of the TIS-Model. The non-linear model comprises thermal strain, elastic strain, plastic strain and transient temperature strains, as well as load history modelling of restraint concrete structures subjected to fire. The calculations by finite element analysis (FEA) were done using the SAFIR structural code. The FEA software was substantially new with respect to the material modelling, derived to use the new TIS-Model (a transient model considering thermal induced strain). The equations of the ATCM offer several capabilities, especially for considering irreversible effects of temperature on some material properties. By considering the load history during heating up, increased load bearing capacity may be obtained due to the higher stiffness of the concrete. With this model, it is possible to apply the thermal-physical behaviour of material laws to the calculation of structures under extreme temperature conditions. A tunnel cross section designed and built by the cut-and-cover method is calculated with a tunnel fire curve. The results are compared with the results of a calculation with the model of Eurocode 2 (the EC2-Model). The effect of load history in highly loaded structures under fire load is investigated. A comparison of this model with the ordinary calculation system of Eurocode 2 (EC2) shows that a better evaluation of the safety level was achieved with the new model. This opens space for optimizing concrete structure design under transient temperature conditions up to 1000 °C.
Utterance Verification Using State-Level Log-Likelihood Ratio with Frame and State Selection
Kwon, Suk-Bong; Kim, Hoirin
This paper suggests an utterance verification system using a state-level log-likelihood ratio with frame and state selection. We use hidden Markov models for speech recognition and utterance verification as acoustic models and anti-phone models. The hidden Markov models have three states, and each state represents different characteristics of a phone. Thus we propose an algorithm to compute the state-level log-likelihood ratio and assign weights to states to obtain a more reliable confidence measure for recognized phones. Additionally, we propose a frame selection algorithm to compute the confidence measure on frames containing proper speech in the input speech. In general, phone segmentation information obtained from a speaker-independent speech recognition system is not accurate, because triphone-based acoustic models are difficult to train effectively to cover diverse pronunciation and coarticulation effects. So it is more difficult to find the right matched states when obtaining state segmentation information. A state selection algorithm is suggested for finding valid states. The proposed method using the state-level log-likelihood ratio with frame and state selection achieves a relative reduction in equal error rate of 18.1% compared to the baseline system using simple phone-level log-likelihood ratios.
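The confidence-scoring pipeline above, frame selection, per-state log-likelihood ratios, and state weighting, can be sketched on synthetic scores. The Gaussian stand-ins for HMM frame log-likelihoods, the energy-based frame filter, and the state weights are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic frame log-likelihoods for a 3-state phone model and its
# anti-phone model (stand-ins for real HMM scores).
n_frames = 60
states = np.repeat([0, 1, 2], n_frames // 3)         # forced-alignment states
ll_model = rng.normal(-40.0, 2.0, n_frames)
ll_anti = ll_model - rng.normal(1.5, 1.0, n_frames)  # model scores higher on average

# Frame selection: keep only frames that look like proper speech.
energy = rng.uniform(0.0, 1.0, n_frames)
keep = energy > 0.2

# State-level LLR: average frame LLRs within each state, then weight states
# (centre state assumed most reliable in this sketch).
weights = np.array([0.2, 0.6, 0.2])
state_llr = np.array([
    (ll_model - ll_anti)[keep & (states == s)].mean() for s in range(3)
])
confidence = float(weights @ state_llr)
```

A recognized phone would be accepted when `confidence` exceeds a threshold tuned on development data; weighting states rather than pooling all frames is what distinguishes the state-level score from a plain phone-level LLR.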
Carbone, V; Fluit, R; Pellikaan, P; van der Krogt, M M; Janssen, D; Damsgaard, M; Vigneron, L; Feilkas, T; Koopman, H F J M; Verdonschot, N
2015-03-18
When analyzing complex biomechanical problems such as predicting the effects of orthopedic surgery, subject-specific musculoskeletal models are essential to achieve reliable predictions. The aim of this paper is to present the Twente Lower Extremity Model 2.0, a new comprehensive dataset of the musculoskeletal geometry of the lower extremity, which is based on medical imaging data and dissection performed on the right lower extremity of a fresh male cadaver. Bone, muscle and subcutaneous fat (including skin) volumes were segmented from computed tomography and magnetic resonance images scans. Inertial parameters were estimated from the image-based segmented volumes. A complete cadaver dissection was performed, in which bony landmarks, attachments sites and lines-of-action of 55 muscle actuators and 12 ligaments, bony wrapping surfaces, and joint geometry were measured. The obtained musculoskeletal geometry dataset was finally implemented in the AnyBody Modeling System (AnyBody Technology A/S, Aalborg, Denmark), resulting in a model consisting of 12 segments, 11 joints and 21 degrees of freedom, and including 166 muscle-tendon elements for each leg. The new TLEM 2.0 dataset was purposely built to be easily combined with novel image-based scaling techniques, such as bone surface morphing, muscle volume registration and muscle-tendon path identification, in order to obtain subject-specific musculoskeletal models in a quick and accurate way. The complete dataset, including CT and MRI scans and segmented volume and surfaces, is made available at http://www.utwente.nl/ctw/bw/research/projects/TLEMsafe for the biomechanical community, in order to accelerate the development and adoption of subject-specific models on a large scale. TLEM 2.0 is freely shared for non-commercial use only, under acceptance of the TLEMsafe Research License Agreement. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sal Y Rosas, Victor G; Hughes, James P
2010-04-21
In this article, we present nonparametric and semiparametric methods to analyze current status data subject to outcome misclassification. Our methods use nonparametric maximum likelihood estimation (NPMLE) to estimate the distribution function of the failure time when sensitivity and specificity are known and may vary among subgroups. A nonparametric test is proposed for the two sample hypothesis testing. In regression analysis, we apply the Cox proportional hazard model and likelihood ratio based confidence intervals for the regression coefficients are proposed. Our methods are motivated and demonstrated by data collected from an infectious disease study in Seattle, WA.
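The key identity behind correcting current status data for misclassification, P(observed positive at t) = se·F(t) + (1−sp)·(1−F(t)), can be checked on simulated data with a simple moment-style inversion. This is an illustration of the identity the NPMLE exploits, not the authors' estimator, and all distributions are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
se, sp = 0.9, 0.95                   # assumed known sensitivity/specificity

# Current status data: one monitoring time C per subject, true status T <= C.
n = 20_000
T = rng.exponential(1.0, n)
C = rng.uniform(0.0, 3.0, n)
status = (T <= C).astype(float)

# Misclassify: a true positive is observed positive with prob se,
# a true negative with prob 1 - sp.
u = rng.uniform(size=n)
observed = np.where(status == 1, u < se, u < 1 - sp).astype(float)

# Invert the identity near a fixed monitoring time t0.
t0 = 1.0
window = np.abs(C - t0) < 0.1
p_obs = observed[window].mean()
F_hat = (p_obs - (1 - sp)) / (se + sp - 1)
F_true = 1 - np.exp(-t0)
```

The correction recovers the failure-time distribution F(t0) despite the noisy diagnostic, provided se + sp > 1; the paper's NPMLE does this coherently across all monitoring times and allows se/sp to vary among subgroups.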
Subject-specific liver motion modeling in MRI: a feasibility study on spatiotemporal prediction
Noorda, Yolanda H.; Bartels, Lambertus W.; Viergever, Max A.; Pluim, Josien P. W.
2017-04-01
A liver motion model based on registration of dynamic MRI data, as previously proposed by the authors, was extended with temporal prediction and respiratory signal data. The potential improvements of these extensions with respect to the original model were investigated. Additional evaluations were performed to investigate the limitations of the model regarding temporal prediction and extreme breathing motion. Data were acquired from four volunteers, with breathing instructions and a respiratory belt. The model was built from these data using spatial prediction only and using temporal forward prediction of 300 ms to 1200 ms, using the extended Kalman filter. From temporal prediction of 0 ms to 1200 ms ahead, the Dice coefficient of liver overlap decreased by 0.85%, the median liver surface distance increased by 20.6% and the vessel misalignment increased by 20%. The mean vessel misalignment was 2.9 mm for the original method, 3.42 mm for spatial prediction with a respiratory signal and 4.01 mm for prediction 1200 ms ahead with a respiratory signal. Although the extension of the model to temporal prediction yields decreased prediction accuracy, the results are still acceptable. The use of the breathing signal as input to the model is feasible. Sudden changes in the breathing pattern can yield large errors. However, these errors only persist during a short time interval, after which they can be corrected automatically. Therefore, this model could be useful in a clinical setting.
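The finding above, that Kalman-filter look-ahead degrades gracefully from 300 ms to 1200 ms, can be illustrated on a synthetic breathing trace. A constant-velocity linear Kalman filter is used as a simpler stand-in for the extended Kalman filter, and the signal and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

dt = 0.3                                 # 300 ms sampling interval
t = np.arange(0.0, 30.0, dt)
signal = np.sin(2 * np.pi * t / 4.0)     # idealised 4 s breathing cycle
obs = signal + rng.normal(0.0, 0.05, t.size)

# Constant-velocity Kalman filter (state: position, velocity).
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-2])
R = np.array([[0.05 ** 2]])
x, P = np.zeros(2), np.eye(2)

preds_1, preds_4 = [], []                # 300 ms and 1200 ms look-ahead
for z in obs:
    # measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    # forward-predict without new data
    preds_1.append((F @ x)[0])
    preds_4.append((np.linalg.matrix_power(F, 4) @ x)[0])
    # time update to the next sample
    x = F @ x
    P = F @ P @ F.T + Q

err_300 = float(np.sqrt(np.mean((np.array(preds_1)[:-1] - signal[1:]) ** 2)))
err_1200 = float(np.sqrt(np.mean((np.array(preds_4)[:-4] - signal[4:]) ** 2)))
```

As in the abstract, the longer horizon costs accuracy but the short-horizon predictions track the quasi-periodic signal closely; a richer state model (or the respiratory-belt input) would narrow the gap at 1200 ms.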
A two-part model for reference curve estimation subject to a limit of detection.
Zhang, Z; Addo, O Y; Himes, J H; Hediger, M L; Albert, P S; Gollenberg, A L; Lee, P A; Louis, G M Buck
2011-05-30
Reference curves are commonly used to identify individuals with extreme values of clinically relevant variables or stages of progression which depend naturally on age or maturation. Estimation of reference curves can be complicated by a technical limit of detection (LOD) that censors the measurement from the left, as is the case in our study of reproductive hormone levels in boys around the time of the onset of puberty. We discuss issues with common approaches to the LOD problem in the context of our pubertal hormone study, and propose a two-part model that addresses these issues. One part of the proposed model specifies the probability of a measurement exceeding the LOD as a function of age. The other part of the model specifies the conditional distribution of a measurement given that it exceeds the LOD, again as a function of age. Information from the two parts can be combined to estimate the identifiable portion (i.e. above the LOD) of a reference curve and to calculate the relative standing of a given measurement above the LOD. Unlike some common approaches to LOD problems, the two-part model is free of untestable assumptions involving unobservable quantities, flexible for modeling the observable data, and easy to implement with existing software. The method is illustrated with hormone data from the Third National Health and Nutrition Examination Survey. This article is a U.S. Government work and is in the public domain in the U.S.A. Published in 2011 by John Wiley & Sons, Ltd.
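The two parts of the model translate directly into code: a logistic regression for the probability that a measurement exceeds the LOD as a function of age, and a conditional model (here, a Gaussian for the log measurement) for values above the LOD. A sketch on simulated data, where the distributional choices and all parameter values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lod = 2000, 0.5
age = rng.uniform(8, 14, n)
# hypothetical hormone levels rising with age, lognormal noise
x = np.exp(-4.0 + 0.3 * age + 0.5 * rng.standard_normal(n))
above = x > lod

# Part 1: logistic regression of P(X > LOD) on age, fitted by Newton-Raphson.
A = np.column_stack([np.ones(n), age])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-A @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(A.T @ (A * W[:, None]), A.T @ (above - p))

# Part 2: Gaussian model for log(X) given X > LOD, mean linear in age.
Aa = A[above]
coef, *_ = np.linalg.lstsq(Aa, np.log(x[above]), rcond=None)
resid_sd = np.std(np.log(x[above]) - Aa @ coef)

def prob_above_lod(a):
    """Part-1 estimate of the probability of detection at age a."""
    return 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * a)))
```

Combining the two parts gives the identifiable (above-LOD) portion of the reference curve without any assumption about the unobservable left tail.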
Modelling critical degrees of saturation of porous building materials subjected to freezing
DEFF Research Database (Denmark)
Hansen, Ernst Jan De Place
1996-01-01
Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR, and the actual degree of saturation, SACT. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction o... involved will be unnecessary, making the model more useful in practice. The model has been tested on various concretes without air-entrainment and on brick tiles with different porosities. Results agree qualitatively with values of the critical degree of saturation determined by measuring resonance frequencies and length change of sealed specimens during freezing... Keywords: Brick tile, concrete, critical degree of saturation, eigenstrain, fracture mechanics, frost resistance, pore size distribution, pore structure, stress development, theoretical model.
Bond-Slip Models for FRP-Concrete Interfaces Subjected to Moisture Conditions
Directory of Open Access Journals (Sweden)
Justin Shrestha
2017-01-01
Environment-related durability issues have been of great concern in structures strengthened with fiber reinforced polymers (FRPs). In marine environments, moisture is one of the dominant factors that adversely affect the material properties and the bond interfaces. Several short-term and long-term laboratory experimental investigations have been conducted to study such behaviors, but there are still insufficient constitutive bond models that incorporate moisture exposure conditions. This paper proposes a very simple approach to determining nonlinear bond-slip models for the FRP-concrete interface considering the effect of moisture conditions. The proposed models are based on the strain results of the experimental investigation conducted by the authors using 6 different commercial FRP systems exposed to moisture conditions for a maximum period of 18 months. The exposure effect under moisture conditions appears to depend strongly on the FRP system. Based on the contrasting differences in the results under moisture conditions, separate bond-slip models have been proposed for the wet-layup FRP and prefabricated FRP systems. To verify the proposed model under moisture conditions, the predicted pull-out load was compared with the experimental pull-out load. The results showed good agreement for all the FRP systems under investigation.
National Research Council Canada - National Science Library
Kosakovsky Pond, Sergei L; Poon, Art F.Y; Leigh Brown, Andrew J; Frost, Simon D.W
2008-01-01
We develop a model-based phylogenetic maximum likelihood test for evidence of preferential substitution toward a given residue at individual positions of a protein alignment-directional evolution of protein sequences (DEPS...
Gentz, Steven J.; Ordway, David O.; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approximately 9 inches from the source) dominated by direct wave propagation, mid-field environment (approximately 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This document contains appendices to the Volume I report.
Shen, Jun; Xiao, Jim; Pickthorn, Karen; Huang, Saling; Bell, Gregory; Vick, Andrew; Chen, Ping
2014-10-01
AMG 416 is a novel peptide agonist of the calcium-sensing receptor. In support of the clinical development program, a pharmacokinetic (PK)/pharmacodynamic (PD) model was developed to describe the relationship between plasma AMG 416 levels and serum intact parathyroid hormone (iPTH) concentrations in healthy male subjects. AMG 416 plasma concentrations were characterized by a three-compartment linear PK model, while serum iPTH levels were described by an indirect response model with drug effect on the production of iPTH characterized with an inhibitory Emax model. The production of iPTH was modeled by a circadian rhythm function. The systemic clearance of plasma AMG 416 was estimated to be 6.94 L/h. Two sine functions best described iPTH circadian rhythm with an amplitude estimated to be 0.15 and 0.08, respectively. The maximum response Emax and the potency parameter EC50 were estimated to be 0.69 and 21.0 ng/mL, respectively. This work improved our understanding of the interaction between AMG 416 PK and iPTH concentrations in healthy adult male subjects. Data suggest additional PK/PD studies with AMG 416 are warranted in the hemodialysis population. © 2014, The American College of Clinical Pharmacology.
Directory of Open Access Journals (Sweden)
Han Tantri Hardini
2016-12-01
This research aims to determine the influence of the problem-based learning model on students' activities and achievement in the Financial Management subject for undergraduate students of Accounting Education. It was a quantitative study that used a true experimental design. The samples were undergraduate Accounting Education students of the 2014 cohort; class A was the control class and class B the experimental class. Data were analyzed using a t-test to determine the differences in learning outcomes between the control and experimental classes. Questionnaires were then distributed to gather information on students' activities under each learning model. The findings show that the Problem Based Learning model influences students' activities and learning outcomes in the Financial Management subject, since t-count ≥ t-table (6.120 ≥ 1.9904). Students' learning activities with the Problem Based Learning model are better than those of students taught with a conventional learning model.
Rademaker, Rosanne L; Tredway, Caroline H; Tong, Frank
2012-12-21
Working memory serves as an essential workspace for the mind, allowing for the active maintenance of information to support short-term cognitive goals. Although people can readily report the contents of working memory, it is unknown whether they might have reliable metacognitive knowledge regarding the accuracy of their own memories. We investigated this question to better understand the core properties of the visual working memory system. Observers were briefly presented with displays of three or six oriented gratings, after which they were cued to report the orientation of a specific grating from memory as well as their subjective confidence in their memory. We used a mixed-model approach to obtain separate estimates of the probability of successful memory maintenance and the precision of memory for successfully remembered items. Confidence ratings strongly predicted the likelihood that the cued grating was successfully maintained, and furthermore revealed trial-to-trial variations in the visual precision of memory itself. Our findings provide novel evidence indicating that the precision of visual working memory is variable in nature. These results inform an ongoing debate regarding whether this working memory system relies on discrete slots with fixed visual resolution or on representations with variable precision, as might arise from variability in the amount of resources assigned to individual items on each trial.
Directory of Open Access Journals (Sweden)
Mosbeh R. Kaloop
2016-10-01
The present study investigates the prediction efficiency of nonlinear system-identification models in assessing the behavior of a coupled structure-passive vibration controller. Two system-identification models, Nonlinear AutoRegressive with eXogenous inputs (NARX) and the adaptive neuro-fuzzy inference system (ANFIS), are used to model the behavior of an experimentally scaled three-story building incorporating a tuned mass damper (TMD) subjected to seismic loads. The experimental study is performed to generate the input and output data sets for training and testing the designed models. Root-mean-squared error, mean absolute error and determination coefficient statistics are used to compare the performance of the aforementioned models. A TMD controller system works efficiently to mitigate the structural vibration. The results revealed that the NARX and ANFIS models can be used to identify the response of a controlled structure. Two time-delay parameters, of the structure response and of the seismic load, proved to be effective tools in identifying the performance of the models. A comparison based on the parametric evaluation of the two methods showed that the NARX model outperforms the ANFIS model in identifying the structure's response.
Assessing Compatibility of Direct Detection Data: Halo-Independent Global Likelihood Analyses
Gelmini, Graciela B.
2016-10-18
We present two different halo-independent methods utilizing a global maximum likelihood that can assess the compatibility of dark matter direct detection data given a particular dark matter model. The global likelihood we use is composed of at least one extended likelihood and an arbitrary number of Poisson or Gaussian likelihoods. In the first method we find the global best-fit halo function and construct a two-sided pointwise confidence band, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose $p$-value we then use to define a "plausibility region" (e.g. where $p \geq 10\%$). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. $p < 10 \%$). As an example we apply these methods to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic s...
Ahmed, A.
2014-01-01
In order to arrive at safe and reliable design of composite structures, understanding of the mechanisms and mechanics of damage growth in these materials is of paramount significance. Numerical models, if designed, implemented and used carefully, can be helpful not only to understand the mechanisms
Mullins, Benjamin J; Braddock, Roger D; Agranovski, Igor E; Cropp, Roger A
2006-08-15
Extensive experimental investigation of the wetting processes of fibre/liquid systems during air filtration (when drag and gravitational forces are acting) has shown many important features, including droplet extension, oscillatory motion, and detachment or flow of drops from fibres as airflow velocity increases. A detailed experimental study of the aforementioned processes was conducted using glass filter fibres and H2O aerosol, which coalesce on the fibre to form barrel droplets with small contact angles. The droplets were predominantly observed in the Reynolds transition (or unsteady laminar) flow region. The droplet oscillation appears to be induced by the onset of vortices in the flow field around the droplet as the increasing droplet size increases the Reynolds number. Flow in this region is usually modelled using the classical two-dimensional Karman vortex street, for which no 3D equivalent exists. Therefore, to model such oscillation it was necessary to create a new conceptual model accounting for the forces both inducing and inhibiting such oscillation. The agreement between the model and experimental results is acceptable for both the radial and transverse oscillations.
Modelling motion sickness and subjective vertical mismatch detailed for vertical motions
Bos, J. E.; van der Bles, W.
1998-01-01
In an attempt to predict the amount of motion sickness given any kind of motion stimulus, we describe a model using explicit knowledge of the vestibular system. First, the generally accepted conflict theory is restated in terms of a conflict between a vertical as perceived by the sense organs like
Youth Development as Subjectified Subjectivity – a Dialectical-Ecological Model of Analysis
DEFF Research Database (Denmark)
Pedersen, Sofie; Bang, Jytte
2016-01-01
school, we carry out an analysis of a 16-year old high school student and how her approach to beer, to beer drinking as a part of Danish high school life-style, and to herself changes over time. We suggest a dialectical-ecological model to analyze the dialectical and synthetic movements over time...
Capasso, Roberto; Zurlo, Maria Clelia; Smith, Andrew P
2018-02-01
This study integrates different aspects of ethnicity and work-related stress dimensions (based on the Demands-Resources-Individual-Effects model, DRIVE [Mark, G. M., and A. P. Smith. 2008. "Stress Models: A Review and Suggested New Direction." In Occupational Health Psychology, edited by J. Houdmont and S. Leka, 111-144. Nottingham: Nottingham University Press]) and aims to test a multi-dimensional model that combines individual differences, ethnicity dimensions, work characteristics, and perceived job satisfaction/stress as independent variables in the prediction of subjective reports of health by workers differing in ethnicity. A questionnaire consisting of the following sections was submitted to 900 workers in Southern Italy: for individual and cultural characteristics, coping strategies, personality behaviours, and acculturation strategies; for work characteristics, perceived job demands and job resources/rewards; for appraisals, perceived job stress/satisfaction and racial discrimination; for subjective reports of health, psychological disorders and general health. Factor analyses were conducted to test the reliability and construct validity of the extracted factors for all dimensions involved in the proposed model, and logistic regression analyses to evaluate the main effects of the independent variables on the health outcomes. Principal component analysis (PCA) yielded seven factors for individual and cultural characteristics (emotional/relational coping, objective coping, Type A behaviour, negative affectivity, social inhibition, affirmation/maintenance culture, and search identity/adoption of the host culture); three factors for work characteristics (work demands, intrinsic/extrinsic rewards, and work resources); three factors for appraisals (perceived job satisfaction, perceived job stress, perceived racial discrimination) and three factors for subjective reports of health (interpersonal disorders, anxious-depressive disorders, and general health). Logistic
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports both modes, step-by-step analysis and auto-computing process, respectively for preliminary evaluation and real time computation. The proposed model was evaluated by recomputing prior studies on the epidemiological measurement of diseases that were caused by either heavy metal exposures in the environment or clinical complications in hospital. The simulation validity was confirmed with commercial statistics software. The model was installed in a stand-alone computer and in a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
Wong, Y Joel; Tsai, Pei-Chun; Liu, Tao; Zhu, Qingqing; Wei, Meifen
2014-10-01
This study examined male Asian international college students' perceptions of racial discrimination, subjective masculinity stress, centrality of masculine identity, and psychological distress by testing a moderated mediation model. Participants were 160 male Asian international college students from 2 large public universities. Participants' perceived racial discrimination was positively related to their subjective masculinity stress only at high (but not low) levels of masculine identity centrality. Additionally, subjective masculinity stress was positively related to psychological distress, although this association was stronger among those who reported high levels of masculine identity centrality. The authors also detected a moderated mediation effect in which subjective masculinity stress mediated the relationship between perceived racial discrimination and psychological distress only at high (but not low) levels of masculine identity centrality. These findings contribute to the counseling psychology literature by highlighting the connections between race- and gender-related stressors as well as the relevance of masculine identity to an understanding of men's mental health. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Estimating likelihood of future crashes for crash-prone drivers
Directory of Open Access Journals (Sweden)
Subasish Das
2015-06-01
At-fault crash-prone drivers are usually considered the high-risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research conducted an exploratory data analysis based on driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the at-fault drivers. The logistic regression method is used, employing eight years of traffic crash data (2004–2011) in Louisiana. Crash predictors such as the driver's crash involvement, crash and road characteristics, human factors, collision type, and environmental factors are considered in the model. The at-fault and not-at-fault status of the crashes is used as the response variable. The developed model has identified a few important variables, and is used to correctly classify at-fault crashes up to 62.40% with a specificity of 77.25%. This model can identify as many as 62.40% of the crash incidence of at-fault drivers in the upcoming year. Traffic agencies can use the model for monitoring the performance of at-fault crash-prone drivers and making roadway improvements meant to reduce crash proneness. From the findings, it is recommended that crash-prone drivers should be targeted for special safety programs regularly through education and regulations.
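The reported figures (62.40% of at-fault crashes correctly classified, 77.25% specificity) come from thresholding the fitted model's predicted probabilities; the bookkeeping can be sketched on synthetic scores (the score distributions and cutoff below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic predicted probabilities: at-fault crashes (label 1) score higher
labels = np.concatenate([np.ones(500), np.zeros(500)])
scores = np.concatenate([rng.normal(0.6, 0.2, 500),
                         rng.normal(0.4, 0.2, 500)])

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a probability cutoff."""
    pred = scores >= cutoff
    sens = np.mean(pred[labels == 1])    # at-fault crashes correctly flagged
    spec = np.mean(~pred[labels == 0])   # not-at-fault crashes correctly cleared
    return sens, spec

sens, spec = sens_spec(scores, labels, cutoff=0.5)
```

Moving the cutoff trades sensitivity against specificity, which is how an agency would tune the model toward flagging more (or fewer) crash-prone drivers.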
Evaluation of likelihood functions for data analysis on Graphics Processing Units
Jarp, Sverre; Leduc, J; Nowak, A; Pantaleo, F
2010-01-01
Data analysis techniques based on likelihood function calculation play a crucial role in many High Energy Physics measurements. Depending on the complexity of the models used in the analyses, with several free parameters, many independent variables, large data samples, and complex functions, the calculation of the likelihood functions can require a long CPU execution time. In the past, the continuous gain in performance for each single CPU core kept pace with the increase in the complexity of the analyses, keeping the execution time of the sequential software applications reasonable. Nowadays, the performance of single cores is not increasing as in the past, while the complexity of the analyses has grown significantly in the Large Hadron Collider era. In this context a breakthrough is represented by the increase in the number of computational cores per computational node, which makes it possible to speed up the execution of the applications by redesigning them with parallelization paradigms. The likelihood function ...
Parallelization of maximum likelihood fits with OpenMP and CUDA
Jarp, S; Leduc, J; Nowak, A; Pantaleo, F
2011-01-01
Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches for the minimum using gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time consuming in the case of complex functions, with several free parameters, many independent variables and large data samples. Therefore, it becomes particularly important to speed up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...
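The computational pattern being parallelized is a large independent sum of per-event log-likelihood terms. The same structure can be seen compactly with a vectorized evaluation standing in for the OpenMP/CUDA reduction described in the paper (the Gaussian model is only an illustration):

```python
import math
import numpy as np

def nll_gaussian(params, data):
    """Negative log-likelihood of an i.i.d. Gaussian sample, evaluated
    over the whole sample at once -- the parallelizable reduction."""
    mu, sigma = params
    z = (data - mu) / sigma
    return np.sum(0.5 * z**2 + np.log(sigma) + 0.5 * np.log(2 * np.pi))

def nll_loop(params, data):
    """Event-by-event version: identical value, far slower in pure Python."""
    mu, sigma = params
    return sum(0.5 * ((x - mu) / sigma) ** 2
               + math.log(sigma) + 0.5 * math.log(2 * math.pi)
               for x in data)

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.5, 100_000)
```

Each per-event term depends only on its own datum, so the sum can be split across cores (OpenMP threads or CUDA blocks) and reduced, which is exactly what makes the minimization loop amenable to parallelization.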
Nordgaard, Anders; Höglund, Tobias
2011-03-01
A reported likelihood ratio for the value of evidence is very often a point estimate based on various types of reference data. When presented in court, such a frequentist likelihood ratio gains scientific value if it is accompanied by an error bound. This becomes particularly important when the magnitude of the likelihood ratio is modest and thus gives less support for the forwarded proposition. Here, we investigate methods for error bound estimation for the specific case of digital camera identification. The underlying probability distributions are continuous and previously proposed models for them are used, but the derived methodology is otherwise general. Both asymptotic and resampling distributions are applied in combination with different types of point estimators. The results show that resampling is preferable to assessment based on asymptotic distributions. Further, assessment of parametric estimators is superior to evaluation of kernel estimators when background data are limited. © 2011 American Academy of Forensic Sciences.
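A resampling error bound of the kind studied here can be sketched as a nonparametric bootstrap of a plug-in likelihood-ratio estimate; the Gaussian score models and all numbers below are illustrative stand-ins for the camera-identification distributions:

```python
import numpy as np

rng = np.random.default_rng(3)
# reference scores under each proposition (illustrative)
same_cam = rng.normal(2.0, 1.0, 200)   # H_p reference data
diff_cam = rng.normal(0.0, 1.0, 200)   # H_d reference data
evidence_score = 1.5

def log_lr(hp, hd, x):
    """Plug-in log10 likelihood ratio, Gaussian models fitted to each set."""
    def logpdf(x, m, s):
        return -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    return (logpdf(x, hp.mean(), hp.std())
            - logpdf(x, hd.mean(), hd.std())) / np.log(10)

point = log_lr(same_cam, diff_cam, evidence_score)

# bootstrap: refit on resampled reference data, collect the LR each time
boot = np.array([log_lr(rng.choice(same_cam, 200),
                        rng.choice(diff_cam, 200),
                        evidence_score) for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Reporting the interval (lo, hi) alongside the point estimate conveys how much the reported LR depends on the particular reference sample, which is the error bound the abstract argues for.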
Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems
Directory of Open Access Journals (Sweden)
Hakan A. Çırpan
2002-05-01
Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization scores should be able to locate the same unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
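For a Gaussian predictive distribution both the log-likelihood and the CRPS have closed forms, so a toy version of the synthetic comparison can be reproduced in a few lines (a crude grid-search optimizer and invented data, not the study's actual post-processing setup):

```python
import math
import numpy as np

erf = np.vectorize(math.erf)

def crps_gauss(mu, sigma, y):
    """Closed-form CRPS of N(mu, sigma^2) summed over observations y."""
    z = (y - mu) / sigma
    Phi = 0.5 * (1 + erf(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return np.sum(sigma * (z * (2 * Phi - 1) + 2 * phi - 1 / math.sqrt(math.pi)))

def nll_gauss(mu, sigma, y):
    """Gaussian negative log-likelihood (constants dropped)."""
    return np.sum(0.5 * ((y - mu) / sigma) ** 2 + math.log(sigma))

rng = np.random.default_rng(4)
obs = rng.normal(10.0, 2.0, 500)   # a correctly specified Gaussian sample

grid = [(m, s)
        for m in np.arange(9.0, 11.01, 0.1)
        for s in np.arange(1.0, 3.01, 0.1)]
mu_ml, sd_ml = min(grid, key=lambda p: nll_gauss(p[0], p[1], obs))
mu_crps, sd_crps = min(grid, key=lambda p: crps_gauss(p[0], p[1], obs))
```

When the distributional assumption matches the data, both criteria land on nearly the same coefficients, which is the synthetic-case conclusion of the abstract; discrepancies only appear under misspecification.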
Simple simulation of diffusion bridges with application to likelihood inference for diffusions
DEFF Research Database (Denmark)
Bladt, Mogens; Sørensen, Michael
2014-01-01
With a view to statistical inference for discretely observed diffusion models, we propose simple methods of simulating diffusion bridges, approximately and exactly. Diffusion bridge simulation plays a fundamental role in likelihood and Bayesian inference for diffusion processes. First a simple me...
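For Brownian motion a bridge can be simulated exactly by a linear correction of an unconditioned path; this elementary construction is the natural starting point for the simple diffusion-bridge proposals discussed here (illustrative, not the authors' algorithm):

```python
import numpy as np

def brownian_bridge(a, b, T, n, rng):
    """Exact Brownian bridge from a at time 0 to b at time T on n+1 grid points."""
    t = np.linspace(0.0, T, n + 1)
    dW = rng.standard_normal(n) * np.sqrt(T / n)
    W = np.concatenate([[0.0], np.cumsum(dW)])   # unconditioned path from 0
    # subtract a linear function of t so the path is pinned to b at time T
    return t, a + W - (t / T) * (W[-1] - (b - a))

rng = np.random.default_rng(5)
t, X = brownian_bridge(a=0.0, b=1.0, T=1.0, n=1000, rng=rng)
```

For a general diffusion the bridge is not available in closed form, which is why approximate constructions built on simple processes like this one matter for likelihood and Bayesian inference from discretely observed paths.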
Cox regression with missing covariate data using a modified partial likelihood method
DEFF Research Database (Denmark)
Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.
2016-01-01
Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...
Modelling of stiffness degradation due to cracking in laminates subjected to multi-axial loading.
Kashtalyan, M; Soutis, C
2016-07-13
The paper presents an analytical approach to predicting the effect of intra- and interlaminar cracking on residual stiffness properties of the laminate, which can be used in the post-initial failure analysis, taking full account of damage mode interaction. The approach is based on a two-dimensional shear lag stress analysis and the equivalent constraint model of the laminate with multiple damaged plies. The application of the approach to predicting degraded stiffness properties of multidirectional laminates under multi-axial loading is demonstrated on cross-ply glass/epoxy and carbon/epoxy laminates with transverse and longitudinal matrix cracks and crack-induced transverse and longitudinal delaminations. This article is part of the themed issue 'Multiscale modelling of the structural integrity of composite materials'. © 2016 The Author(s).
An evaluation of the hemiplegic subject based on the Bobath approach. Part I: The model.
Guarna, F; Corriveau, H; Chamberland, J; Arsenault, A B; Dutil, E; Drouin, G
1988-01-01
An evaluation based on the Bobath approach to treatment has been developed. A model substantiating this evaluation is presented. In this model, the three stages of motor recovery presented by Bobath have been extended to six, to better follow the progression of the patient. Six parameters have also been identified. These are the elements to be quantified so that the progress of the patient through the stages of motor recovery can be followed. Four of these parameters are borrowed from the Bobath approach: postural reaction, muscle tone, reflex activity and active movement. Two have been added: sensorium and pain. An accompanying paper presents the evaluation protocol along with the operational definition of each of these parameters.
Centrifuge modelling of large diameter pile in sand subject to lateral loading
DEFF Research Database (Denmark)
Leth, Caspar Thrane
…regulations are developed for long flexible piles with diameters up to approximately 2.0 m and are based on a very limited number of tests. Hence, the method has not been validated for rigid piles with diameters of 4 to 6 m. The primary issues concerning the validity of the standard p-y curves for large diameter rigid piles are: the initial stiffness of the curves and the description of the static behaviour, including ultimate bearing capacity; and the behaviour with respect to cyclic loading, both stiffness degradation and ultimate bearing capacity. The aim of the present research is to investigate the static and cyclic behaviour of large diameter rigid piles in dry sand by use of physical modelling. The physical modelling has been carried out at the Department of Civil Engineering at the Danish Technical University (DTU.BYG) in the period from 2005 to 2009. The main centrifuge facilities, and especially…
de Guzman, Allan B.; Lagdaan, Lovely France M.; Lagoy, Marie Lauren V.
2015-01-01
Subjective memory complaints are one of the major concerns of the elderly and remain a challenging area in gerontology. Previous studies have identified different factors affecting subjective memory complaints. However, an extended model that relates life-space to subjective memory complaints remains a blank spot. The objective of this…
2017-11-01
…simulations of structural response. In the experiments, stereo-digital image correlation (SDIC) was used to record the shock response of a V-hull structure… and the associated fast Fourier transforms corroborate this statement. Therefore, the first mode is considered the most important metric for… Organisation, Aeronautical and Maritime Research Laboratory; 2001 June. Report No.: DSTO-TR-1168. Cummins C. Modeling brown clayey sand in LS-DYNA
Multi-finger coordination in healthy subjects and stroke patients: a mathematical modelling approach
Ferrarin, Maurizio; Jonsdottir, Johanna; Carpinella, Ilaria
2011-01-01
Background: Approximately 60% of stroke survivors experience hand dysfunction limiting execution of daily activities. Several methods have been proposed to objectively quantify the fingers' joint range of motion (ROM), while few studies exist on multi-finger coordination during hand movements. The present work analysed this aspect by providing a complete characterization of the spatial and temporal aspects of hand movement, through the mathematical modelling of multi-joint finger motion ...
Huntjens, Dymphy R; Liefaard, Lia C; Nandy, Partha; Drenth, Henk-Jan; Vermeulen, An
2016-03-01
Tapentadol is a centrally acting analgesic with two mechanisms of action, µ-opioid receptor agonism and noradrenaline reuptake inhibition. The objectives were to describe the pharmacokinetic behavior of tapentadol after oral administration of an extended-release (ER) formulation in healthy subjects and patients with chronic pain, and to evaluate covariate effects. Data were obtained from 2276 subjects enrolled in five phase I and nine phase II and III studies. Nonlinear mixed-effects modeling was conducted using NONMEM. The population estimates of apparent oral clearance and apparent central volume of distribution were 257 L/h and 1870 L, respectively. The complex absorption was described with a transit compartment for the first input. The second input function comprises saturable "binding" in the "absorption compartment" and a time-varying rate constant. Covariate evaluation demonstrated that age, aspartate aminotransferase, and health status (painful diabetic neuropathy or not) had a statistically significant effect on apparent clearance, and bioavailability appeared to be dependent on body weight. The prediction-corrected visual predictive check (pcVPC) indicated that the model provided a robust and unbiased fit to the data. A one-compartment disposition model with two input functions and first-order elimination adequately described the pharmacokinetics of tapentadol ER. The dose-dependency in the pharmacokinetics of tapentadol ER is adequately described by the absorption model. None of the covariates were considered clinically relevant factors that warrant dose adjustments.
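The disposition structure described above (transit-compartment absorption feeding a one-compartment model with first-order elimination) can be sketched numerically. This is a deliberately simplified single-input version using the reported CL/F and V/F; the transit rate constant, number of transit compartments, and dose are illustrative assumptions, not values from the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Reported population estimates from the abstract: CL/F = 257 L/h, V/F = 1870 L.
# KTR, N_TRANSIT, and DOSE are illustrative assumptions only.
CL, V = 257.0, 1870.0       # apparent clearance (L/h) and central volume (L)
KTR, N_TRANSIT = 2.0, 3     # transit rate constant (1/h), chain length (assumed)
DOSE = 100.0                # mg (assumed)

def rhs(t, a):
    """Transit-chain absorption feeding one-compartment disposition."""
    da = np.empty_like(a)
    da[0] = -KTR * a[0]                                  # first transit compartment
    for i in range(1, N_TRANSIT):
        da[i] = KTR * (a[i - 1] - a[i])                  # downstream transit compartments
    da[-1] = KTR * a[N_TRANSIT - 1] - (CL / V) * a[-1]   # central compartment
    return da

t = np.linspace(0.0, 96.0, 1921)          # 0.05 h grid over 96 h
a0 = np.zeros(N_TRANSIT + 1)
a0[0] = DOSE                              # dose placed in the first transit compartment
sol = solve_ivp(rhs, (t[0], t[-1]), a0, t_eval=t, rtol=1e-8, atol=1e-10)
conc = sol.y[-1] / V                      # plasma concentration (mg/L)
```

For a linear model the area under the concentration-time curve should equal DOSE/(CL/F), which is a convenient sanity check on such a simulation.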
Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.
2011-12-01
A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the source of the N170 component in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.
Li, Xiaochun; Li, Huilin; Jin, Man; D Goldberg, Judith
2016-09-10
We consider the non-inferiority (or equivalence) test of the odds ratio (OR) in a crossover study with binary outcomes to evaluate the treatment effects of two drugs. To solve this problem, Lui and Chang (2011) proposed both an asymptotic method and a conditional method based on a random effects logit model. Kenward and Jones (1987) proposed a likelihood ratio test (LRT_M) based on a log linear model. These existing methods are all subject to model misspecification. In this paper, we propose a likelihood ratio test (LRT) and a score test that are independent of model specification. Monte Carlo simulation studies show that, in the scenarios considered in this paper, both the LRT and the score test have higher power than the asymptotic and conditional methods for the non-inferiority test; the LRT, score, and asymptotic methods have similar power, and they all have higher power than the conditional method for the equivalence test. When data can be well described by a log linear model, the LRT_M has the highest power among all five methods (LRT_M, LRT, score, asymptotic, and conditional) for both non-inferiority and equivalence tests. However, in scenarios for which a log linear model does not describe the data well, the LRT_M has the lowest power for the non-inferiority test and has inflated type I error rates for the equivalence test. We provide an example from a clinical trial that illustrates our methods. Copyright © 2016 John Wiley & Sons, Ltd.
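As a minimal illustration of a likelihood ratio test in this paired-binary setting (not the authors' model-free LRT or score test, and testing OR = 1 rather than a non-inferiority margin), a conditional LRT can be built on the discordant pairs alone: under H0, each discordant pair is equally likely to favour either treatment. The counts used below are hypothetical.

```python
from math import log
from scipy.stats import chi2

def discordant_lrt(n10, n01):
    """Conditional LRT of OR = 1 based on discordant-pair counts.

    Under H0 the count n10 is Binomial(n10 + n01, 1/2); under H1 the
    success probability is free. Returns (statistic, p-value), with the
    statistic referred to a chi-squared distribution on 1 df.
    """
    n = n10 + n01
    stat = 0.0
    for k in (n10, n01):
        if k > 0:
            stat += 2.0 * k * log(2.0 * k / n)   # 2*[logL(phat) - logL(1/2)]
    return stat, chi2.sf(stat, df=1)

stat, p = discordant_lrt(30, 10)   # hypothetical discordant counts
```

With balanced discordant counts the statistic is exactly zero, as expected under the null.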
The roll damping assessment via decay model testing (new ideas about an old subject)
Fernandes, Antonio C.; Oliveira, Allan C.
2009-06-01
The methodology for obtaining the non-linear roll damping from decay tests is very old. It was proposed by Froude in the 19th century and has been used ever since. Behind it there is a quadratic damping model, with a term proportional to θ̇|θ̇|, and a subsequent equivalent linearization. Probably every model basin in the world follows this approach to assess the damping from a decay test. This is well documented, and so are the methods to obtain the p1 and p2 coefficients. The approach is general in the sense that, in principle, it could be applied to any kind of hull. However, it has become clear that for hulls with a flat bottom, such as a very large crude carrier (VLCC), this approach may lead to confusing results, such as a negative p2. Faced with this, the work presents a completely new idea. Avoiding the polynomial approximation, the basic attitude is to identify two regions in the decay test response. The first, called the large amplitude response region, yields a larger damping, probably due to the large bilge keel vortices that are attracted to the hull flat bottom. The second is the small amplitude response region, where the vortices are not attracted to the bottom but travel approximately 45° sidewise. These observations have led to a new approach, called the bi-linear approach, as discussed in the work after analyzing several (many) model test results. In fact, a modified bi-linear approach is ultimately proposed after the understanding of a transition region instead of a transition angle.
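The classical decrement analysis that the authors move away from can be sketched as follows: simulate a free decay with linear-plus-quadratic damping, extract the successive peak amplitudes, and regress the per-half-cycle decrement ratio on the mean amplitude (the intercept reflects the linear part, the slope the quadratic part). All parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def decay_peaks(p1, p2, omega=1.0, theta0=0.3, t_end=120.0):
    """Simulate a free roll decay with linear-plus-quadratic damping and
    return the successive peak amplitudes (one per half cycle)."""
    def rhs(t, y):
        theta, dtheta = y
        return [dtheta, -omega**2 * theta - p1 * dtheta - p2 * dtheta * abs(dtheta)]
    sol = solve_ivp(rhs, (0.0, t_end), [theta0, 0.0],
                    max_step=0.01, rtol=1e-8, atol=1e-10)
    amp = np.abs(sol.y[0])
    idx, _ = find_peaks(amp)          # extrema of theta = peaks of |theta|
    return amp[idx]

def fit_decrement(peaks):
    """Classical fit d/m = a + b*m, where d = theta_n - theta_{n+1} and
    m is the mean amplitude of consecutive peaks. Intercept a reflects
    linear damping; slope b reflects quadratic damping."""
    d = peaks[:-1] - peaks[1:]
    m = 0.5 * (peaks[:-1] + peaks[1:])
    b, a = np.polyfit(m, d / m, 1)
    return a, b
```

With purely linear damping the decrement ratio is amplitude-independent, so the fitted slope is near zero; a quadratic term shows up as a positive slope.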
The Limits of Subjective Rights through a Model of Basic Legal Positions
Directory of Open Access Journals (Sweden)
María Claudia Mercio Cachapuz
2017-12-01
The aim of this paper is to study the problem of the limits of constitutional rights, both at an institutional level and in a concrete situation of conflict of interests. The paper proposes a comparison of Häberle's and Alexy's points of view regarding the adoption of internal and external theories of restrictions on rights. It also proposes, through Hohfeld's analytical model of basic legal positions, the correct interpretation for the analysis of hard cases.
Directory of Open Access Journals (Sweden)
Katrin Hanken
2014-12-01
In multiple sclerosis (MS) patients, fatigue is rated as one of the most common and disabling symptoms. However, the pathophysiology underlying this fatigue is not yet clear. Several lines of evidence suggest that immunological factors, such as elevated levels of proinflammatory cytokines, may contribute to subjective fatigue in MS patients. Proinflammatory cytokines represent primary mediators of immune-to-brain communication, modulating changes in the neurophysiology of the central nervous system. Recently, we proposed a model arguing that fatigue in MS patients is a subjective feeling which is related to inflammation. Moreover, it implies that fatigue can be measured behaviorally only by applying specific cognitive tasks related to alertness and vigilance. In the present review we focus on the subjective feeling of MS-related fatigue. We examine the hypothesis that the subjective feeling of MS-related fatigue may be a variant of inflammation-induced sickness behavior, resulting from cytokine-mediated activity changes within brain areas involved in interoception and homeostasis, including the insula, the anterior cingulate and the hypothalamus. We first present studies demonstrating a relationship between proinflammatory cytokines and subjective fatigue in healthy individuals, in people with inflammatory disorders, and particularly in MS patients. Subsequently, we discuss studies analyzing the impact of anti-inflammatory treatment on fatigue. In the next part of this review we present studies on the transmission and neural representation of inflammatory signals, with a special focus on possible neural concomitants of inflammation-induced fatigue. We also present two of our own studies on the relationship between local gray and white matter atrophy and fatigue in MS patients. Finally, we discuss some implications of our findings and future perspectives.
Posterior distributions for likelihood ratios in forensic science.
van den Hout, Ardo; Alberink, Ivo
2016-09-01
Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
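The core idea above, retaining parameter uncertainty in the likelihood ratio rather than integrating it out, can be sketched in a toy normal model. Nothing here reproduces the paper's body-height data: the reference samples, the prior, and the known within-population standard deviation are all illustrative assumptions. One likelihood ratio is computed per posterior draw, and the resulting distribution is summarised by its median and a credible interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference samples under the prosecution (Hp) and defence (Hd)
# hypotheses, with a known within-population sd (all assumed values).
SIGMA = 5.0
hp_data = rng.normal(180.0, SIGMA, size=30)
hd_data = rng.normal(172.0, SIGMA, size=30)
x = 179.0   # the measurement to be evaluated

def posterior_mu_draws(data, prior_mean=175.0, prior_sd=20.0, n=10_000):
    """Draws from the conjugate normal posterior of the population mean
    (normal likelihood with known SIGMA, normal prior)."""
    prec = 1.0 / prior_sd**2 + len(data) / SIGMA**2
    post_mean = (prior_mean / prior_sd**2 + data.sum() / SIGMA**2) / prec
    return rng.normal(post_mean, np.sqrt(1.0 / prec), size=n)

def normal_pdf(v, mu, sd):
    return np.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# One likelihood ratio per posterior draw: the LR itself gets a distribution.
lr = normal_pdf(x, posterior_mu_draws(hp_data), SIGMA) / normal_pdf(x, posterior_mu_draws(hd_data), SIGMA)
median_lr = np.median(lr)
ci = np.percentile(lr, [2.5, 97.5])
```

Reporting the median and a credible interval, rather than a single plug-in LR, makes the parameter uncertainty visible to the court, in line with the approach discussed above.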