Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.
Heesacker, Martin
1986-01-01
Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…
Kinnear, John; Jackson, Ruth
2017-07-01
Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect accuracy of judgements is that of representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians made judgements of statistical probability, respectively. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ2 (1, n=254)=54.45, p<0.001). One of the causes for this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited.
Review of Elaboration Likelihood Model of persuasion
藤原, 武弘; 神山, 貴弥
1989-01-01
This article mainly introduces the Elaboration Likelihood Model (ELM), proposed by Petty & Cacioppo, that is, a general attitude change theory. ELM postulates two routes to persuasion: the central and the peripheral route. Attitude change by the central route is viewed as resulting from a diligent consideration of the issue-relevant information presented. On the other hand, attitude change by the peripheral route is viewed as resulting from peripheral cues in the persuasion context. Secondly we compare these tw...
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta) exp[-((1/lambda)+(1/theta))t] with t>0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
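The plug-in maximum likelihood estimator described above is easy to sketch, since the MLE of an exponential mean is the sample mean. The following Python fragment is an illustration only (function and variable names are mine, not the paper's):

```python
import math

def availability_mle(failures, repairs, t):
    """Plug-in MLE of instantaneous availability A(t) under exponential
    time-to-failure (mean lambda) and time-to-repair (mean theta) models,
    following the closed-form expressions in the abstract above."""
    lam = sum(failures) / len(failures)   # MLE of mean time-to-failure
    theta = sum(repairs) / len(repairs)   # MLE of mean time-to-repair
    a_inf = lam / (lam + theta)           # steady-state availability A(infinity)
    a_t = a_inf + (theta / (lam + theta)) * math.exp(-(1.0 / lam + 1.0 / theta) * t)
    return a_t, a_inf
```

At t = 0 the estimate is 1 (the plant starts operational), and it decays toward the steady-state value lambda/(lambda + theta).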
Likelihood analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ^0_1, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m_{χ^0_1}
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes. In addition, finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has greatly drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. Results show that there is a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
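As a hedged sketch of this kind of fit (the paper's data and exact settings are not reproduced here), a two-component univariate normal mixture can be estimated with a plain EM loop; the quartile-based initialization is an arbitrary choice for illustration:

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_normal(data, iters=300):
    """EM for a two-component univariate normal mixture.
    Returns (weight, mean, sd) for each component."""
    xs = sorted(data)
    n = len(xs)
    mu1, mu2 = xs[n // 4], xs[3 * n // 4]          # quartile initialization
    s1 = s2 = max((xs[-1] - xs[0]) / 4.0, 1e-3)
    w = 0.5                                        # mixing weight of component 1
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = w * norm_pdf(x, mu1, s1)
            p2 = (1.0 - w) * norm_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means, sds, and mixing weight
        r1 = sum(r)
        r2 = n - r1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / r1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / r2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / r1) or 1e-3
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / r2) or 1e-3
        w = r1 / n
    return (w, mu1, s1), (1.0 - w, mu2, s2)
```

On well-separated data the component means converge to the cluster centers; for poorly separated or degenerate data a real implementation would add restarts and a variance floor.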
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap data are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
Tapered composite likelihood for spatial max-stable models
Sang, Huiyan
2014-05-01
Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
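The taper-weighting idea can be sketched generically: pairs of sites beyond a chosen taper range receive weight zero, so only nearby pairs contribute to the composite likelihood sum. The helper below is illustrative only; the simple indicator weight and all names are assumptions, not the authors' code:

```python
import math

def taper_weights(coords, taper_range):
    """Indicator taper: weight 1 for site pairs within taper_range, else 0.
    Returns a list of (i, j, weight) for all pairs i < j."""
    out = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            out.append((i, j, 1.0 if d <= taper_range else 0.0))
    return out

def tapered_composite_loglik(coords, taper_range, pair_loglik):
    """Weighted sum of bivariate log-likelihood contributions."""
    return sum(w * pair_loglik(i, j)
               for i, j, w in taper_weights(coords, taper_range))
```

In practice the pair_loglik callback would evaluate the known bivariate max-stable density for sites i and j, and the taper range would be tuned against a Godambe-information criterion as the abstract describes.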
Tapered composite likelihood for spatial max-stable models
Sang, Huiyan; Genton, Marc G.
2014-01-01
Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
Owen, Art B
2001-01-01
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
Likelihood ratio sequential sampling models of recognition memory.
Osth, Adam F; Dennis, Simon; Heathcote, Andrew
2017-02-01
The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to build a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of the choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.
McNeill, Brian W.; Stoltenberg, Cal D.
1989-01-01
Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
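The construction is general, not specific to IRT. As a minimal single-parameter illustration (an exponential rate rather than an IRT model), a 95% profile-likelihood CI collects all parameter values whose deviance relative to the MLE stays below the chi-square critical value 3.841:

```python
import math

def exp_loglik(rate, data):
    """Log-likelihood of an exponential(rate) model."""
    return len(data) * math.log(rate) - rate * sum(data)

def profile_ci(data, crit=3.841):
    """95% profile-likelihood CI for an exponential rate: the two roots of
    2*(loglik(mle) - loglik(rate)) = crit, found by bisection."""
    mle = len(data) / sum(data)
    lmax = exp_loglik(mle, data)

    def dev(rate):
        return 2.0 * (lmax - exp_loglik(rate, data)) - crit

    def bisect(f, lo, hi):
        # assumes f(lo) and f(hi) have opposite signs
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if (f(lo) < 0) == (f(mid) < 0):
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lower = bisect(dev, mle * 1e-6, mle)   # dev > 0 far below the MLE
    upper = bisect(dev, mle, mle * 100.0)  # dev > 0 far above the MLE
    return lower, upper
```

Because the deviance is asymmetric around the MLE, the resulting interval is asymmetric too, which is exactly the advantage over a symmetric Wald interval discussed above.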
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
LENUS (Irish Health Repository)
Weisse, Andrea Y
2010-10-28
Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
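For a one-dimensional ODE x' = f(x), the density along a trajectory obeys d(log rho)/dt = -f'(x), so augmenting the ODE with one extra state propagates the density value directly, as the abstract describes. A minimal sketch with Euler integration, using the linear case f(x) = -a*x for which the exact answer rho(t) = rho0*exp(a*t) is known:

```python
import math

def propagate_density(x0, rho0, a, t, steps=100000):
    """Integrate x' = -a*x together with the log-density along the
    characteristic: d(log rho)/dt = -f'(x) = a (explicit Euler scheme).
    Returns the state and density value at time t."""
    dt = t / steps
    x, logrho = x0, math.log(rho0)
    for _ in range(steps):
        x += dt * (-a * x)   # the original ODE
        logrho += dt * a     # the single extra dimension for the density
    return x, math.exp(logrho)
```

For nonlinear f the density update would use -f'(x) evaluated along the trajectory; the linear case is chosen here only so the result can be checked against the exact solution x(t) = x0*exp(-a*t), rho(t) = rho0*exp(a*t).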
Likelihood inference for a nonstationary fractional autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2010-01-01
This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_1,...,X_T given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...
Penggunaan Elaboration Likelihood Model dalam Menganalisis Penerimaan Teknologi Informasi
vitrian, vitrian2
2010-01-01
This article discusses some technology acceptance models in an organization. Thorough analysis of how technology becomes acceptable helps managers plan the implementation of new technology and make sure that the new technology could enhance the organization's performance. The Elaboration Likelihood Model (ELM) is the one which sheds light on some behavioral factors in the acceptance of information technology. The basic tenet of ELM states that human behavior in principle can be influenced through central r...
Gaussian copula as a likelihood function for environmental models
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
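For the bivariate case, the Gaussian copula density has a closed form in the normal scores z = Phi^{-1}(u), which Python's standard library exposes via statistics.NormalDist. The function below is the generic textbook formula, not the authors' implementation:

```python
import math
from statistics import NormalDist

_PHI_INV = NormalDist().inv_cdf  # standard normal quantile function

def gaussian_copula_logpdf(u, v, rho):
    """Log-density of the bivariate Gaussian copula at (u, v) in (0,1)^2
    with correlation rho in (-1, 1)."""
    z1, z2 = _PHI_INV(u), _PHI_INV(v)
    r2 = rho * rho
    return (-0.5 * math.log(1.0 - r2)
            - (r2 * (z1 * z1 + z2 * z2) - 2.0 * rho * z1 * z2)
              / (2.0 * (1.0 - r2)))
```

At rho = 0 the copula density is identically 1 (log-density 0); with positive rho, concordant pairs such as (0.9, 0.9) receive higher density than discordant pairs such as (0.9, 0.1), which is how the copula encodes error autocorrelation once marginals are mapped to uniforms.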
Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika
Yudi Perbawaningsih
2012-01-01
Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series aimed at persuading students in choosing their concentration of study. The result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of ...
Menyoal Elaboration Likelihood Model (ELM) Dan Teori Retorika
Perbawaningsih, Yudi
2012-01-01
Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series aimed at persuading students in choosing their concentration of study. The result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source. The quality of the source is determined by the quality of the mess...
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...
Likelihood inference for a fractionally cointegrated vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2012-01-01
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy b0 ≥ 1/2 and d0-b0... We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of (ß...
Calibration of two complex ecosystem models with different likelihood functions
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model
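A minimal example of the central ingredient, a goodness-of-fit score expressed as a log-likelihood, assuming i.i.d. Gaussian errors with a fixed standard deviation (one of many possible likelihood formulations of the kind the study compares; names are mine, not from the paper):

```python
import math

def gaussian_loglik(measured, simulated, sigma):
    """Goodness-of-fit of simulated vs. measured values as a Gaussian
    log-likelihood with fixed error standard deviation sigma."""
    n = len(measured)
    sse = sum((m - s) ** 2 for m, s in zip(measured, simulated))
    return -0.5 * n * math.log(2.0 * math.pi * sigma * sigma) - sse / (2.0 * sigma * sigma)
```

In a Bayesian calibration loop, candidate parameter sets are ranked by this value (plus the prior); swapping in a different likelihood formulation, e.g. heteroscedastic errors, changes the ranking, which is precisely the effect the study examines.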
Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika [Questioning the Elaboration Likelihood Model (ELM) and the Theory of Rhetoric]
Directory of Open Access Journals (Sweden)
Yudi Perbawaningsih
2012-06-01
Full Text Available Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a public lecture series used to persuade students in choosing their concentration of study, based on how they process information. Using a survey method, the study finds that, in terms of persuasion effectiveness, it is not quite relevant to separate the message from its source: the two merge, meaning that the quality of the source is determined by the quality of the message it delivers, and vice versa. Separating the persuasion process into the two routes described in ELM theory would therefore not be relevant.
Molenaar, P.C.M.; Nesselroade, J.R.
1998-01-01
The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM),
Music genre classification via likelihood fusion from multiple feature models
Shiu, Yu; Kuo, C.-C. J.
2005-01-01
Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e., content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g., GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated into a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
The elaboration likelihood model and communication about food risks.
Frewer, L J; Howard, C; Hedderley, D; Shepherd, R
1997-12-01
Factors such as hazard type and source credibility have been identified as important in the establishment of effective strategies for risk communication. The elaboration likelihood model was adapted to investigate the potential impact of hazard type, information source, and persuasive content of information on individual engagement in elaborative, or thoughtful, cognitions about risk messages. One hundred sixty respondents were allocated to one of eight experimental groups, and the effects of source credibility, persuasive content of information and hazard type were systematically varied. The impact of the different factors on beliefs about the information and on elaborative processing was examined. Low credibility was particularly important in reducing risk perceptions, although persuasive content and hazard type were also influential in determining whether elaborative processing occurred.
Likelihood ratio model for classification of forensic evidence
Energy Technology Data Exchange (ETDEWEB)
Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)
2009-05-29
One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g., does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer (SEM-EDX) and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e., between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution, and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The analysis showed that the best likelihood model was the one that included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
Likelihood ratio model for classification of forensic evidence
International Nuclear Information System (INIS)
Zadora, G.; Neocleous, T.
2009-01-01
One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g., does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer (SEM-EDX) and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e., between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution, and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The analysis showed that the best likelihood model was the one that included information about both between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
Estimation of Financial Agent-Based Models with Simulated Maximum Likelihood
Czech Academy of Sciences Publication Activity Database
Kukačka, Jiří; Baruník, Jozef
2017-01-01
Vol. 85, No. 1 (2017), pp. 21-45. ISSN 0165-1889. R&D Projects: GA ČR (CZ) GBP402/12/G097. Institutional support: RVO:67985556. Keywords: heterogeneous agent model; simulated maximum likelihood; switching. Subject RIV: AH - Economics. OECD field: Finance. Impact factor: 1.000 (2016). http://library.utia.cas.cz/separaty/2017/E/kukacka-0478481.pdf
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), so this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and that BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood have significantly less variability than those of the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
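The contrast between the arithmetic mean and harmonic mean estimators evaluated above can be illustrated on a toy model where the exact marginal likelihood is known in closed form. This is a hypothetical sketch (a one-parameter conjugate normal model, not the groundwater model of the study): the AME averages likelihoods over prior draws, while the HME harmonically averages them over posterior draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1), so the
# marginal likelihood is available exactly and the estimators can be
# checked against it.
y = rng.normal(0.5, 1.0, size=20)
n = len(y)

def log_lik(theta):
    # Log-likelihood of the data for an array of theta values.
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1))

# Arithmetic mean estimator (AME): average the likelihood over prior draws.
prior_draws = rng.normal(0.0, 1.0, size=100_000)
ll_prior = log_lik(prior_draws)
log_ml_ame = np.logaddexp.reduce(ll_prior) - np.log(len(prior_draws))

# Harmonic mean estimator (HME): harmonic mean of the likelihood over
# posterior draws (known to be unstable in general).
post_var = 1.0 / (n + 1.0)
post_mean = post_var * np.sum(y)
post_draws = rng.normal(post_mean, np.sqrt(post_var), size=100_000)
ll_post = log_lik(post_draws)
log_ml_hme = -(np.logaddexp.reduce(-ll_post) - np.log(len(post_draws)))

# Exact log marginal likelihood: y ~ N(0, I + ones(n, n)).
cov = np.eye(n) + np.ones((n, n))
sign, logdet = np.linalg.slogdet(cov)
log_ml_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * logdet
                - 0.5 * y @ np.linalg.solve(cov, y))

print(log_ml_ame, log_ml_hme, log_ml_exact)
```

On this easy one-dimensional problem the AME lands close to the exact value; the HME's heavier-tailed behaviour is exactly why stabilized variants and thermodynamic integration are preferred in the study above.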
Maximum likelihood approach for several stochastic volatility models
International Nuclear Information System (INIS)
Camprodon, Jordi; Perelló, Josep
2012-01-01
Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process in which volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, OU and Heston stochastic volatility models and study their performance in terms of the log-price probability, the volatility probability, and its mean first-passage time. The approach has some predictive power on the amplitude of future returns from knowledge of the current volatility alone. The assumed models do not consider long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices and obtain good performance in all cases. (paper)
Race of source effects in the elaboration likelihood model.
White, P H; Harkins, S G
1994-11-01
In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.
International Nuclear Information System (INIS)
Manglos, S.H.
1992-01-01
Transverse image truncation can be a serious problem for human imaging using cone-beam transmission CT (CB-CT) implemented on a conventional rotating gamma camera. This paper presents a reconstruction method to reduce or eliminate the artifacts resulting from the truncation. The method uses a previously published transmission maximum likelihood EM algorithm, adapted to the cone-beam geometry. The reconstruction method is evaluated qualitatively using three human subjects of various dimensions and various degrees of truncation. (author)
Performances of the likelihood-ratio classifier based on different data modelings
Chen, C.; Veldhuis, Raymond N.J.
2008-01-01
The classical likelihood ratio classifier easily collapses in many biometric applications especially with independent training-test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from
Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.
2016-01-01
This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
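For matrix-variate Gaussian data, the maximum likelihood estimator of a Kronecker-structured covariance has no explicit expression (as the abstract notes) and is typically computed iteratively. The sketch below shows the standard "flip-flop" iteration under stated assumptions: the matrix-normal data, dimensions, and variable names are ours for illustration, and since A ⊗ B is identified only up to a scalar, a trace normalization is applied at the end.

```python
import numpy as np

rng = np.random.default_rng(1)

def flip_flop(X, iters=50):
    """MLE of the Kronecker factors for matrix-normal data.

    X has shape (n, p, q); the covariance of vec(X_i) is assumed to be
    A kron B with A (q x q, column covariance) and B (p x p, row
    covariance).  Alternates the two conditional ML updates.
    """
    n, p, q = X.shape
    A = np.eye(q)
    for _ in range(iters):
        Ainv = np.linalg.inv(A)
        B = sum(Xi @ Ainv @ Xi.T for Xi in X) / (n * q)
        Binv = np.linalg.inv(B)
        A = sum(Xi.T @ Binv @ Xi for Xi in X) / (n * p)
    return A, B

# Simulate matrix-normal data with known row covariance and independent
# columns (A_true = identity).
p, q, n = 4, 3, 500
B_true = np.diag([1.0, 2.0, 3.0, 4.0])
L_B = np.linalg.cholesky(B_true)
X = np.einsum('ij,njk->nik', L_B, rng.normal(size=(n, p, q)))

A_hat, B_hat = flip_flop(X)
# Fix the scalar indeterminacy: normalize A_hat to trace q.
c = np.trace(A_hat) / q
A_hat, B_hat = A_hat / c, B_hat * c
```

After normalization, B_hat recovers the simulated row covariance and A_hat is close to the identity, which is the usual sanity check for this iteration.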
Lammers, H B
2000-04-01
From an Elaboration Likelihood Model perspective, it was hypothesized that postexposure awareness of deceptive packaging claims would have a greater negative effect on purchase-intention scores for consumers with low rather than high product involvement (n = 40). Undergraduates classified as either highly or lowly involved with M&Ms (ns = 20 and 20) examined either a deceptive or non-deceptive package design for M&Ms candy, were subsequently informed of the deception employed in the packaging, and finally rated their intention to purchase. As anticipated, highly deceived subjects who were low in involvement rated intention to purchase lower than their highly involved peers. Overall, the results attest to the robustness of the model and suggest that it has implications beyond advertising effects and into packaging effects.
DEFF Research Database (Denmark)
Nielsen, Jan; Parner, Erik
2010-01-01
In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma...
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of them are "true zeros", indicating that the drug-adverse event pair cannot occur; these are distinguished from the modeled zero counts, which simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the ZIP model parameters are obtained using the expectation-maximization (EM) algorithm. The test is also modified to handle stratified analyses for binary and categorical covariates (e.g., gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP-based likelihood ratio test performs similarly to the Poisson-based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
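The zero-inflated Poisson machinery underlying the test above can be sketched in a few lines: with probability pi a cell is a structural zero, otherwise the count is Poisson. A minimal EM fit on simulated counts follows (the data, mixing proportion, and rate are illustrative assumptions, not values from the Adverse Event Reporting System).

```python
import numpy as np

def fit_zip_em(x, iters=200):
    """EM for a zero-inflated Poisson: with probability pi the count is
    a structural zero, otherwise it is Poisson(lam)."""
    x = np.asarray(x, dtype=float)
    pi, lam = 0.5, max(x.mean(), 0.1)      # crude starting values
    for _ in range(iters):
        # E-step: posterior probability that an observed zero is structural.
        p0 = pi / (pi + (1 - pi) * np.exp(-lam))
        z = np.where(x == 0, p0, 0.0)
        # M-step: update the mixing proportion and the Poisson rate.
        pi = z.mean()
        lam = ((1 - z) * x).sum() / (1 - z).sum()
    return pi, lam

rng = np.random.default_rng(2)
n = 20_000
structural = rng.random(n) < 0.3                            # true pi = 0.3
counts = np.where(structural, 0, rng.poisson(2.0, size=n))  # true lam = 2.0

pi_hat, lam_hat = fit_zip_em(counts)
```

A likelihood ratio test as in the paper would then compare the maximized ZIP likelihood under the null and alternative reporting rates; the EM fit above is the building block for both.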
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of machine learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
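The classifier-based approximation sketched in this talk abstract rests on a simple identity: a probabilistic classifier trained on balanced samples from two hypotheses approximates s(x) = p1(x)/(p0(x)+p1(x)), so s/(1-s) approximates the likelihood ratio p1/p0. A minimal one-dimensional illustration, assuming two Gaussian hypotheses (this is our toy example, not an LHC analysis; the logistic fit is done by Newton's method to stay dependency-free):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypotheses for a 1-D observable:
# H0 (background): x ~ N(0, 1);  H1 (signal): x ~ N(1, 1).
n = 50_000
x = np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(1.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression s(x) = sigmoid(w0*x + w1), fitted by Newton steps.
Phi = np.column_stack([x, np.ones_like(x)])
w = np.zeros(2)
for _ in range(20):
    s = 1.0 / (1.0 + np.exp(-Phi @ w))
    grad = Phi.T @ (s - y)
    H = (Phi * (s * (1 - s))[:, None]).T @ Phi
    w -= np.linalg.solve(H, grad)

def lr_hat(xq):
    # Likelihood-ratio estimate s/(1-s) at a query point.
    s = 1.0 / (1.0 + np.exp(-(w[0] * xq + w[1])))
    return s / (1 - s)

# Exact ratio here is p1/p0 = exp(x - 0.5), so the model is well
# specified and w should approach (1, -0.5).
print(lr_hat(0.5), np.exp(0.5 - 0.5))
```

Parameterizing the classifier by masses, couplings, or nuisance parameters, as the abstract proposes, amounts to adding those quantities as extra inputs so one network covers a family of hypotheses.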
Statistical modelling of survival data with random effects h-likelihood approach
Ha, Il Do; Lee, Youngjo
2017-01-01
This book provides a groundbreaking introduction to the likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with an R package, “frailtyHL” in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...
How to Maximize the Likelihood Function for a DSGE Model
DEFF Research Database (Denmark)
Andreasen, Martin Møller
This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003...
Finite mixture model: A maximum likelihood estimation approach on time series data
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, suggesting that maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber price and exchange rate for all selected countries.
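A two-component mixture model of the kind described above is usually fitted by maximum likelihood via the EM algorithm. A minimal sketch on simulated data follows (the Gaussian components and their parameters are illustrative assumptions, not the rubber-price data of the paper):

```python
import numpy as np

def fit_mixture_em(x, iters=300):
    """Maximum likelihood fit of a two-component Gaussian mixture by EM."""
    x = np.asarray(x, float)
    # Initialize the two means at the data extremes to break symmetry.
    w, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        d0 = np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        d1 = np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = w * d1 / ((1 - w) * d0 + w * d1)
        # M-step: weighted updates of mixing weight, means, and sds.
        w = r.mean()
        mu = np.array([((1 - r) * x).sum() / (1 - r).sum(),
                       (r * x).sum() / r.sum()])
        sd = np.sqrt(np.array([
            ((1 - r) * (x - mu[0]) ** 2).sum() / (1 - r).sum(),
            (r * (x - mu[1]) ** 2).sum() / r.sum(),
        ]))
    return w, mu, sd

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2.0, 1.0, 6000), rng.normal(3.0, 0.5, 4000)])
w_hat, mu_hat, sd_hat = fit_mixture_em(x)
```

Each EM iteration increases the likelihood, and with well-separated components the recovered means, spreads, and mixing weight match the simulation settings closely.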
The fine-tuning cost of the likelihood in SUSY models
Ghilencea, D M
2013-01-01
In SUSY models, the fine tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, that can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...
Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging
Directory of Open Access Journals (Sweden)
Naoya Sueishi
2013-07-01
Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...
Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM
Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman
2012-01-01
This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…
The fine-tuning cost of the likelihood in SUSY models
International Nuclear Information System (INIS)
Ghilencea, D.M.; Ross, G.G.
2013-01-01
In SUSY models, the fine-tuning of the electroweak (EW) scale with respect to their parameters γ_i = {m_0, m_1/2, μ_0, A_0, B_0, …} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Δ of the usual likelihood L and the traditional fine-tuning measure Δ of the EW scale. A similar result is obtained for the likelihood integrated over the set {γ_i}, which can be written as a surface integral of the ratio L/Δ, with the surface in γ_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Δ or, equivalently, a small χ²_new = χ²_old + 2 ln Δ. This shows the fine-tuning cost to the likelihood (χ²_new) of the EW scale stability enforced by SUSY, which is ignored in data fits. A good χ²_new/d.o.f. ≈ 1 thus demands that SUSY models have a fine-tuning amount Δ ≪ exp(d.o.f./2), which provides a model-independent criterion for acceptable fine-tuning. If this criterion is not met, one can thus rule out SUSY models without a further χ²/d.o.f. analysis. Numerical methods to fit the data can easily be adapted to account for this effect.
Block Empirical Likelihood for Longitudinal Single-Index Varying-Coefficient Model
Directory of Open Access Journals (Sweden)
Yunquan Song
2013-01-01
Full Text Available In this paper, we consider a single-index varying-coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to longitudinal single-index varying-coefficient model, and prove a nonparametric version of Wilks’ theorem which can be used to construct the block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.
Rivera, Diego; Rivas, Yessica; Godoy, Alex
2015-02-01
Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
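The GLUE procedure described above can be sketched in a few lines: sample parameter sets, score each simulation with a likelihood measure, retain the "behavioural" sets above a threshold, and derive uncertainty bounds from that ensemble. The one-parameter reservoir model, synthetic data, and Nash-Sutcliffe threshold below are illustrative assumptions, not the Chillan River model of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(precip, k):
    """Hypothetical one-parameter linear-reservoir runoff model."""
    q = np.empty_like(precip)
    store = 0.0
    for t, p in enumerate(precip):
        store += p
        q[t] = k * store       # outflow is a fraction k of storage
        store -= q[t]
    return q

# Synthetic 'observations' generated with a known parameter plus noise.
precip = rng.gamma(2.0, 5.0, size=200)
q_obs = model(precip, 0.3) + rng.normal(0.0, 0.5, size=200)

# GLUE: sample parameter sets and score each simulation with a
# likelihood measure (Nash-Sutcliffe efficiency here).
ks = rng.uniform(0.01, 0.99, size=2000)
sims = np.array([model(precip, k) for k in ks])
nse = 1.0 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()

# Keep 'behavioural' parameter sets and form prediction bounds.
# (Formal GLUE weights the ensemble by likelihood; plain quantiles are
# used here for brevity.)
behavioural = nse > 0.7
lower = np.quantile(sims[behavioural], 0.05, axis=0)
upper = np.quantile(sims[behavioural], 0.95, axis=0)
```

Narrowing the prior range of a dominant parameter, as done in the paper with areal precipitation, shrinks the behavioural ensemble and hence the width of these bounds.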
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…
Siero, F.W.; Doosje, B.J.
1993-01-01
An experiment was conducted to examine the influence of the perceived extremity of a message, and of the motivation to elaborate, upon the process of persuasion. The first goal was to test a model of attitude change relating Social Judgment Theory to the Elaboration Likelihood Model. The second objective was
Petty, Richard E.; And Others
1987-01-01
Answers James Stiff's criticism of the Elaboration Likelihood Model (ELM) of persuasion. Corrects certain misperceptions of the ELM and criticizes Stiff's meta-analysis that compares ELM predictions with those derived from Kahneman's elastic capacity model. Argues that Stiff's presentation of the ELM and the conclusions he draws based on the data…
On-line validation of linear process models using generalized likelihood ratios
International Nuclear Information System (INIS)
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
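A minimal sketch of this kind of likelihood-ratio monitoring, assuming Gaussian residuals with known variance and a simple mean-shift alternative; the process, noise level, window size, and threshold below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: residuals between a nonlinear process and its
# nominal linear model. Early on the linear model is valid (zero-mean
# residuals); later the operating point drifts and a bias appears.
sigma = 0.5
residuals = np.concatenate([
    rng.normal(0.0, sigma, 200),          # linear model still valid
    rng.normal(1.0, sigma, 200),          # process drifted away
])

def glr_mean_shift(window, sigma):
    """Log generalized likelihood ratio for 'residual mean != 0'
    versus 'residual mean == 0', Gaussian noise with known sigma."""
    n = len(window)
    return n * window.mean() ** 2 / (2.0 * sigma ** 2)

# Sliding-window monitoring: flag when the GLR exceeds a threshold,
# signalling that a new linearization is justified.
window_size, threshold = 50, 10.0
alarms = [
    t for t in range(window_size, len(residuals))
    if glr_mean_shift(residuals[t - window_size:t], sigma) > threshold
]
first_alarm = alarms[0]
```

The alarm fires shortly after the simulated drift begins, which is the behavior the paper exploits to trigger generation of a new linear model.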
Elaboration Likelihood Model and an Analysis of the Contexts of Its Application
Aslıhan Kıymalıoğlu
2014-01-01
The Elaboration Likelihood Model (ELM), which posits two routes to persuasion (central and peripheral), has been one of the major models of persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered a contribution toward filling this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept...
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas Harder; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used…
Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood
Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.
2011-01-01
Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are
Penfield, Randall D.; Bergeron, Jennifer M.
2005-01-01
This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…
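The Newton-Raphson machinery referred to above can be sketched for the simpler Rasch model; the WML estimator additionally adds a bias-correction weight to the score function, which is omitted here. Item difficulties and the response pattern are hypothetical:

```python
import numpy as np

# Sketch: Newton-Raphson maximum likelihood ability estimation for the
# Rasch model (a simpler relative of the GPCM discussed above).
difficulties = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
responses = np.array([1, 1, 1, 0, 0])      # scored 0/1, not all-correct

def rasch_ability(responses, difficulties, iters=50):
    theta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
        grad = np.sum(responses - p)        # d log-likelihood / d theta
        hess = -np.sum(p * (1.0 - p))       # second derivative (< 0)
        theta -= grad / hess                # Newton-Raphson step
    return theta

theta_hat = rasch_ability(responses, difficulties)
```

The log-likelihood is concave in theta, so the iteration converges from any finite start; the WML weight matters mainly for extreme scores, where the plain MLE is biased or undefined.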
Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.
2011-01-01
Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…
The Elaboration Likelihood Model: Implications for the Practice of School Psychology.
Petty, Richard E.; Heesacker, Martin; Hughes, Jan N.
1997-01-01
Reviews a contemporary theory of attitude change, the Elaboration Likelihood Model (ELM) of persuasion, and addresses its relevance to school psychology. Claims that a key postulate of ELM is that attitude change results from thoughtful (central route) or nonthoughtful (peripheral route) processes. Illustrations of ELM's utility for school…
Heppner, Mary J.; And Others
1995-01-01
Intervention sought to improve first-year college students' attitudes about rape. Used the Elaboration Likelihood Model to examine men's and women's attitude change process. Found numerous sex differences in ways men and women experienced and changed during and after intervention. Women's attitude showed more lasting change while men's was more…
Application of the Elaboration Likelihood Model of Attitude Change to Assertion Training.
Ernst, John M.; Heesacker, Martin
1993-01-01
College students (n=113) participated in study comparing effects of elaboration likelihood model (ELM) based assertion workshop with those of typical assertion workshop. ELM-based workshop was significantly better at producing favorable attitude change, greater intention to act assertively, and more favorable evaluations of workshop content.…
Eaves, Michael
This paper provides a literature review of the elaboration likelihood model (ELM) as applied in persuasion. Specifically, the paper addresses distraction with regard to effects on persuasion. In addition, the application of proxemic violations as peripheral cues in message processing is discussed. Finally, the paper proposes to shed new light on…
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
A maximum pseudo-likelihood approach for estimating species trees under the coalescent model
Directory of Open Access Journals (Sweden)
Edwards Scott V
2010-10-01
Background: Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results: We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M⁻¹). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions: MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the…
Directory of Open Access Journals (Sweden)
Maja Olsbjerg
2015-10-01
Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.
Maximum likelihood pixel labeling using a spatially variant finite mixture model
International Nuclear Information System (INIS)
Gopal, S.S.; Hebert, T.J.
1996-01-01
We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain
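The E- and M-steps of such a mixture-based labeling scheme can be sketched for an ordinary (spatially invariant) two-class Gaussian mixture; the per-pixel mixing weights that make the paper's model spatially variant are omitted, and the "pixel" intensities are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D "image": two intensity classes, well separated.
pixels = np.concatenate([rng.normal(1.0, 0.3, 300),   # class A
                         rng.normal(3.0, 0.3, 300)])  # class B

w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 4.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of each class for each pixel.
    dens = w * np.exp(-0.5 * ((pixels[:, None] - mu) / sd) ** 2) / sd
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, standard deviations.
    nk = resp.sum(axis=0)
    w = nk / len(pixels)
    mu = (resp * pixels[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (pixels[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = resp.argmax(axis=1)   # unambiguous ML pixel labels
```

As the abstract notes, the labels fall out of the EM iteration directly, with no separate Bayes classification step.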
Rice, John D; Tsodikov, Alex
2017-05-30
Continuous outcome data with a proportion of observations equal to zero (often referred to as semicontinuous data) arise frequently in biomedical studies. Typical approaches involve two-part models, with one part a logistic model for the probability of observing a zero and some parametric continuous distribution for modeling the positive part of the data. We propose a semiparametric model based on a biological system with competing damage manifestation and resistance processes. This allows us to derive a closed-form profile likelihood based on the retro-hazard function, leading to a flexible procedure for modeling continuous data with a point mass at zero. A simulation study is presented to examine the properties of the method in finite samples. We apply the method to a data set consisting of pulmonary capillary hemorrhage area in lab rats subjected to diagnostic ultrasound.
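The classic two-part formulation this abstract contrasts with can be sketched directly; an intercept-only version has closed-form maximum likelihood estimates. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic semicontinuous outcome: a point mass at zero plus a
# lognormal positive part (stand-in for "hemorrhage area").
n = 500
is_zero = rng.random(n) < 0.4
area = np.where(is_zero, 0.0, rng.lognormal(mean=1.0, sigma=0.5, size=n))

# Two-part ML fit (intercept-only, so both parts are closed-form):
positive = area[area > 0]
p_zero_hat = float(np.mean(area == 0))       # Bernoulli part
mu_hat = float(np.log(positive).mean())      # lognormal part, location
sigma_hat = float(np.log(positive).std())    # lognormal part, scale

# Overall mean under the fitted two-part model:
mean_hat = (1 - p_zero_hat) * np.exp(mu_hat + 0.5 * sigma_hat ** 2)
```

With covariates, the Bernoulli part becomes a logistic regression and the positive part a parametric regression, which is exactly the structure the proposed semiparametric model replaces.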
A review of studies on persuasion from the viewpoint of the Elaboration Likelihood Model (1)
Fukada, Hiromi; Kimura, Kenichi; Makino, Koshi; Higuchi, Masataka
2000-01-01
The purpose of this paper was to review studies on persuasion from the viewpoint of the Elaboration Likelihood Model based on Petty & Wegener (1998). The paper consists of the following four parts. 1. Introduction. 2. Multiple roles for persuasion variables. 3. Source variables: (1) credibility (expertise, trustworthiness), (2) attractiveness/likableness, (3) power, (4) additional source factors related to credibility, liking and power (speed of speech, demographic variables, majority/minorit...
Efficient simulation and likelihood methods for non-neutral multi-allele models.
Joyce, Paul; Genz, Alan; Buzbas, Erkan Ozge
2012-06-01
Throughout the 1980s, Simon Tavaré made numerous significant contributions to population genetics theory. As genetic data, in particular DNA sequence, became more readily available, a need to connect population-genetic models to data became the central issue. The seminal work of Griffiths and Tavaré (1994a, 1994b, 1994c) was among the first to develop a likelihood method to estimate the population-genetic parameters using full DNA sequences. Now, we are in the genomics era where methods need to scale up to handle massive data sets, and Tavaré has led the way to new approaches. However, performing statistical inference under non-neutral models has proved elusive. In tribute to Simon Tavaré, we present an article in the spirit of his work that provides a computationally tractable method for simulating and analyzing data under a class of non-neutral population-genetic models. Computational methods for approximating likelihood functions and generating samples under a class of allele-frequency based non-neutral parent-independent mutation models were proposed by Donnelly, Nordborg, and Joyce (DNJ) (Donnelly et al., 2001). DNJ (2001) simulated samples of allele frequencies from non-neutral models using neutral models as the auxiliary distribution in a rejection algorithm. However, patterns of allele frequencies produced by neutral models are dissimilar to patterns produced by non-neutral models, making the rejection method inefficient. For example, in some cases the methods in DNJ (2001) require 10⁹ rejections before a sample from the non-neutral model is accepted. Our method simulates samples directly from the distribution of non-neutral models, making simulation a practical tool to study the behavior of the likelihood and to perform inference on the strength of selection.
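The rejection-sampling inefficiency described here is easy to reproduce in miniature. The Beta densities below are hypothetical stand-ins for neutral and non-neutral allele-frequency distributions, chosen only to show how a poorly matched proposal drives the acceptance rate down:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

# Target: a "non-neutral-like" density with frequencies pushed toward 1.
A, B = 8.0, 2.0   # Beta(8, 2) target, hypothetical

def target_pdf(x):
    return gamma(A + B) / (gamma(A) * gamma(B)) * x**(A - 1) * (1 - x)**(B - 1)

# Envelope constant over a uniform "neutral" proposal: density at the mode.
M = target_pdf((A - 1) / (A + B - 2))

draws, tries = [], 0
while len(draws) < 1000:
    x = rng.random()                     # uniform "neutral" proposal
    tries += 1
    if rng.random() < target_pdf(x) / M:
        draws.append(x)

acceptance_rate = 1000 / tries           # roughly 1/M for this envelope
```

Here the mismatch is mild; when target and proposal concentrate in different regions, M explodes and rejections dominate, which is the regime the abstract reports for the DNJ method.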
Computation of the Likelihood in Biallelic Diffusion Models Using Orthogonal Polynomials
Directory of Open Access Journals (Sweden)
Claus Vogl
2014-11-01
In population genetics, parameters describing forces such as mutation, migration and drift are generally inferred from molecular data. Lately, approximate methods based on simulations and summary statistics have been widely applied for such inference, even though these methods waste information. In contrast, probabilistic methods of inference can be shown to be optimal, if their assumptions are met. In genomic regions where recombination rates are high relative to mutation rates, polymorphic nucleotide sites can be assumed to evolve independently from each other. The distribution of allele frequencies at a large number of such sites has been called the “allele-frequency spectrum” or “site-frequency spectrum” (SFS). Conditional on the allelic proportions, the likelihoods of such data can be modeled as binomial. A simple model representing the evolution of allelic proportions is the biallelic mutation-drift or mutation-directional-selection-drift diffusion model. With series of orthogonal polynomials, specifically Jacobi and Gegenbauer polynomials, or the related spheroidal wave function, the diffusion equations can be solved efficiently. In the neutral case, the product of the binomial likelihoods with the sum of such polynomials leads to finite series of polynomials, i.e., relatively simple equations, from which the exact likelihoods can be calculated. In this article, the use of orthogonal polynomials for inferring population genetic parameters is investigated.
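The binomial-likelihood-times-stationary-density integral described above can be checked numerically in the neutral case, where the stationary density is Beta-shaped and the integral has a closed beta-binomial form. The sample size and mutation parameters are hypothetical, and simple midpoint quadrature stands in for the orthogonal-polynomial series:

```python
import numpy as np
from math import comb, lgamma, exp

n, k = 20, 5      # hypothetical site: 5 of 20 sampled alleles derived
a, b = 0.8, 0.8   # hypothetical scaled mutation rates

def log_beta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

# Likelihood = integral over allelic proportion p of
#   Binomial(k | n, p) * Beta(p | a, b).
m = 200000
dx = 1.0 / m
p = (np.arange(m) + 0.5) * dx                     # midpoint grid on (0, 1)
binom = comb(n, k) * p**k * (1.0 - p)**(n - k)    # binomial likelihood
prior = p**(a - 1) * (1.0 - p)**(b - 1) / exp(log_beta(a, b))
quadrature = float(np.sum(binom * prior) * dx)

# Closed form for comparison: the beta-binomial pmf.
closed_form = comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))
```

The polynomial-series solution in the article plays the role of this quadrature, but remains tractable for the non-neutral (selection) case where no simple closed form exists.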
Likelihood Inference of Nonlinear Models Based on a Class of Flexible Skewed Distributions
Directory of Open Access Journals (Sweden)
Xuedong Chen
2014-01-01
This paper deals with likelihood inference for nonlinear models with a flexible skew-t-normal (FSTN) distribution, which is proposed within a general framework of flexible skew-symmetric (FSS) distributions by combining it with the skew-t-normal (STN) distribution. In comparison with common skewed distributions such as the skew normal (SN), skew-t (ST), and scale mixtures of skew normal (SMSN), the FSTN distribution can accommodate more flexibility and robustness in the presence of skewed, heavy-tailed, and especially multimodal outcomes. However, for this distribution, the usual approach of maximum likelihood estimation based on the EM algorithm becomes unavailable, and an alternative is to return to the original Newton-Raphson type method. In order to improve the estimation as well as the confidence estimation and hypothesis tests for the parameters of interest, a modified Newton-Raphson iterative algorithm is presented in this paper, based on profile likelihood for nonlinear regression models with the FSTN distribution; the confidence interval and hypothesis test are then also developed. Furthermore, a real example and a simulation are conducted to demonstrate the usefulness and superiority of our approach.
Directory of Open Access Journals (Sweden)
Zhang Zhang
2009-06-01
A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
Elaboration Likelihood Model and an Analysis of the Contexts of Its Application
Directory of Open Access Journals (Sweden)
Aslıhan Kıymalıoğlu
2014-12-01
The Elaboration Likelihood Model (ELM), which posits two routes to persuasion (central and peripheral), has been one of the major models of persuasion. As the number of studies in the Turkish literature on ELM is limited, a detailed explanation of the model together with a comprehensive literature review was considered a contribution toward filling this gap. The findings of the review reveal that the model was mostly used in marketing and advertising research, that the concept most frequently used in the elaboration process was involvement, and that argument quality and endorser credibility were the factors most often employed in measuring their effect on the dependent variables. The review provides valuable insights as it presents a holistic view of the model and the variables used in it.
Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model
International Nuclear Information System (INIS)
Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.
2002-01-01
We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well
Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET
Energy Technology Data Exchange (ETDEWEB)
Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)
2015-01-21
Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.
Assessing Individual Weather Risk-Taking and Its Role in Modeling Likelihood of Hurricane Evacuation
Stewart, A. E.
2017-12-01
This research focuses upon measuring an individual's level of perceived risk of different severe and extreme weather conditions using a new self-report measure, the Weather Risk-Taking Scale (WRTS). For 32 severe and extreme situations in which people could perform an unsafe behavior (e.g., remaining outside with lightning striking close by, driving over roadways covered with water, not evacuating ahead of an approaching hurricane), people rated: 1. their likelihood of performing the behavior, 2. the perceived risk of performing the behavior, 3. the expected benefits of performing the behavior, and 4. whether the behavior has actually been performed in the past. Initial development research with the measure using 246 undergraduate students examined its psychometric properties and found that it was internally consistent (Cronbach's α ranged from .87 to .93 for the four scales) and that the scales possessed good temporal (test-retest) reliability (r's ranged from .84 to .91). A second regression study involving 86 undergraduate students found that taking weather risks was associated with having taken similar risks in one's past and with the personality trait of sensation-seeking. Being more attentive to the weather and perceiving its risks when it became extreme was associated with lower likelihoods of taking weather risks (overall regression model, adjusted R² = 0.60). A third study involving 334 people examined the contributions of weather risk perceptions and risk-taking in modeling the self-reported likelihood of complying with a recommended evacuation ahead of a hurricane. Here, higher perceptions of hurricane risks and lower perceived benefits of risk-taking, along with fear of severe weather and hurricane personal self-efficacy ratings, were all statistically significant contributors to the likelihood of evacuating ahead of a hurricane. Psychological rootedness and attachment to one's home also tend to predict lack of evacuation. This research highlights the
Directory of Open Access Journals (Sweden)
Chang-bae Moon
2011-01-01
Although there has been much research on mobile robot localization, it is still difficult to obtain reliable localization performance in a real environment shared with humans. Reliability of localization is highly dependent upon the developer's experience, because uncertainty arises for a variety of reasons. We have developed a range-sensor-based integrated localization scheme for various indoor service robots. Through this experience, we found that there are several significant experimental issues. In this paper, we provide useful solutions for the following questions, which are frequently faced in practical applications: (1) How to design an observation likelihood model? (2) How to detect localization failure? (3) How to recover from localization failure? We present design guidelines for the observation likelihood model. Localization failure detection and recovery schemes are presented with a focus on abrupt wheel slippage. Experiments were carried out in a typical office building environment. The proposed scheme to identify the localizer status is useful in practical environments. Moreover, semi-global localization is a computationally efficient recovery scheme from localization failure. The results of the experiments and analysis clearly demonstrate the usefulness of the proposed solutions.
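A common design for the observation likelihood model mentioned in question (1) mixes a Gaussian "hit" term with a uniform term for unexplained readings. The parameter values below are hypothetical tuning choices, which is exactly the design issue such guidelines address:

```python
import numpy as np

# Hypothetical range-sensor parameters.
Z_MAX = 10.0              # sensor maximum range [m]
SIGMA = 0.1               # measurement noise [m]
W_HIT, W_RAND = 0.9, 0.1  # mixture weights (sum to 1)

def beam_likelihood(measured, expected):
    """p(z | x): Gaussian around the map-predicted range, mixed with a
    uniform term covering unexplained readings (passers-by, glass, ...)."""
    gauss = np.exp(-0.5 * ((measured - expected) / SIGMA) ** 2) \
            / (SIGMA * np.sqrt(2.0 * np.pi))
    return W_HIT * gauss + W_RAND / Z_MAX

# A reading near the predicted range is far more likely than a stray one,
# but the uniform floor keeps stray readings from zeroing particle weights.
good = beam_likelihood(2.05, 2.0)
stray = beam_likelihood(7.0, 2.0)
```

The uniform floor `W_RAND / Z_MAX` is what keeps the filter from collapsing after events like the abrupt wheel slippage discussed above: a burst of inconsistent readings degrades weights gracefully instead of annihilating them.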
Extended likelihood inference in reliability
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.
1978-10-01
Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist
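The exponential/gamma case described above has closed forms that can be sketched directly: the posterior of the failure rate is gamma, and the predictive distribution of the next failure time is Lomax (Pareto type II). The prior hyperparameters and failure times below are hypothetical:

```python
import numpy as np

# Gamma(a0, rate b0) prior on the exponential failure rate; data are
# observed failure times. Both are hypothetical illustration values.
a0, b0 = 2.0, 100.0
times = np.array([55.0, 180.0, 70.0, 250.0, 95.0])    # failure times [h]

a_n = a0 + len(times)             # posterior shape
b_n = b0 + times.sum()            # posterior rate parameter
posterior_mean_rate = a_n / b_n   # point estimate of the failure rate

def next_failure_survival(t):
    """Predictive P(next failure time > t): Lomax survival function."""
    return (b_n / (b_n + t)) ** a_n

# Closed-form predictive median of the next failure time:
median_next = b_n * (2.0 ** (1.0 / a_n) - 1.0)
```

Inverting `next_failure_survival` at other probability levels gives the kind of predictive intervals for the next failure time that the report derives.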
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Sze, N N; Wong, S C; Lee, C Y
2014-12-01
In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and to facilitate continuous road safety improvement. Studies have evaluated the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, little work has been done to evaluate whether the quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors - including vehicle ownership, fatality rate, and national income, in addition to level of ambition and duration of target - that contribute to a target's success. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that were in progress and had lower levels of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between level of ambition and the likelihood of success are also revealed.
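A binary logistic regression of the kind used in this study can be fitted by Newton-Raphson (equivalently, iteratively reweighted least squares). The covariates below are synthetic stand-ins for level of ambition and target duration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: more ambitious targets are less likely to be achieved.
n = 400
ambition = rng.normal(0.0, 1.0, n)       # higher = more ambitious target
duration = rng.normal(0.0, 1.0, n)
logit = 0.5 - 1.0 * ambition + 0.3 * duration
achieved = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Maximum likelihood fit by Newton-Raphson / IRLS.
X = np.column_stack([np.ones(n), ambition, duration])
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)
    grad = X.T @ (achieved - p)          # score vector
    hess = X.T @ (X * w[:, None])        # observed information
    beta += np.linalg.solve(hess, grad)  # Newton step
```

The fitted negative coefficient on `ambition` mirrors the study's finding that less ambitious targets were more likely to be achieved; interaction effects would enter as additional product columns in `X`.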
Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters
Aguglia, D; Martins, C.D.A.
2014-01-01
This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such kinds of transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account electric fields energies represented by stray capacitance in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the general converter performances. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...
Bayesian Inference using Neural Net Likelihood Models for Protein Secondary Structure Prediction
Directory of Open Access Journals (Sweden)
Seong-Gon Kim
2011-06-01
Full Text Available Several techniques such as Neural Networks, Genetic Algorithms, Decision Trees and other statistical or heuristic methods have been used in the past to approach the complex non-linear task of predicting Alpha-helices, Beta-sheets and Turns of a protein's secondary structure. This project introduces a new machine learning method that uses offline-trained Multilayered Perceptrons (MLPs) as the likelihood models within a Bayesian Inference framework to predict the secondary structures of proteins. Varying window sizes are used to extract neighboring amino acid information, which is passed back and forth between the Neural Net models and the Bayesian Inference process until the posterior secondary structure probability converges.
Maximum likelihood estimation of semiparametric mixture component models for competing risks data.
Choi, Sangbum; Huang, Xuelin
2014-09-01
In the analysis of competing risks data, the cumulative incidence function is a useful quantity for characterizing the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it jointly estimates the model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probabilities of failure from all causes add up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.
Zeilinger, Adam R; Olson, Dawn M; Andow, David A
2014-08-01
Consumer feeding preference among resource choices has critical implications for basic ecological and evolutionary processes, and can be highly relevant to applied problems such as ecological risk assessment and invasion biology. Within consumer choice experiments, also known as feeding preference or cafeteria experiments, measures of relative consumption and measures of consumer movement can provide distinct and complementary insights into the strength, causes, and consequences of preference. Despite the distinct value of inferring preference from measures of consumer movement, rigorous and biologically relevant analytical methods are lacking. We describe a simple, likelihood-based, biostatistical model for analyzing the transient dynamics of consumer movement in a paired-choice experiment. With experimental data consisting of repeated discrete measures of consumer location, the model can be used to estimate constant consumer attraction and leaving rates for two food choices, and differences in choice-specific attraction and leaving rates can be tested using model selection. The model enables calculation of transient and equilibrial probabilities of consumer-resource association, which could be incorporated into larger scale movement models. We explore the effect of experimental design on parameter estimation through stochastic simulation and describe methods to check that data meet model assumptions. Using a dataset of modest sample size, we illustrate the use of the model to draw inferences on consumer preference as well as underlying behavioral mechanisms. Finally, we include a user's guide and computer code scripts in R to facilitate use of the model by other researchers.
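The attraction-rate/leaving-rate structure described above can be sketched as a three-state continuous-time Markov chain (neutral, choice 1, choice 2). The rates, the Euler integration, and the detailed-balance equilibrium below are illustrative assumptions, not the authors' implementation.

```python
def occupancy(a1, l1, a2, l2, t, dt=1e-3):
    """Probability of being neutral, on choice 1, or on choice 2 at time t,
    starting neutral; three-state CTMC integrated with Euler steps."""
    pn, p1, p2 = 1.0, 0.0, 0.0
    for _ in range(int(t / dt)):
        dpn = l1 * p1 + l2 * p2 - (a1 + a2) * pn   # flow back from both choices
        dp1 = a1 * pn - l1 * p1                    # attraction minus leaving
        dp2 = a2 * pn - l2 * p2
        pn, p1, p2 = pn + dt * dpn, p1 + dt * dp1, p2 + dt * dp2
    return pn, p1, p2

def equilibrium(a1, l1, a2, l2):
    """Stationary occupancy from detailed balance: pi_choice ∝ a / l."""
    r1, r2 = a1 / l1, a2 / l2
    z = 1 + r1 + r2
    return 1 / z, r1 / z, r2 / z

# Assumed rates: the consumer is attracted to choice 1 twice as fast.
pn, p1, p2 = occupancy(a1=0.4, l1=0.1, a2=0.2, l2=0.1, t=100)
en, e1, e2 = equilibrium(0.4, 0.1, 0.2, 0.1)
print(abs(p1 - e1) < 1e-3 and abs(p2 - e2) < 1e-3)
```

The transient probabilities are what discrete location observations inform in a likelihood, while the equilibrium probabilities summarize long-run consumer-resource association.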
Chen, Baojiang; Qin, Jing
2014-05-10
In statistical analysis, a regression model is needed if one is interested in the relationship between a response variable and covariates. The response may depend on the covariate through some unknown function. If one has no knowledge of this functional form but expects it to be monotonically increasing or decreasing, then the isotonic regression model is preferable. Estimation of parameters for isotonic regression models is based on the pool-adjacent-violators algorithm (PAVA), in which the monotonicity constraints are built in. With missing data, the augmented estimating method is often employed to improve estimation efficiency by incorporating auxiliary information through a working regression model. However, under the framework of the isotonic regression model, the PAVA does not work because the monotonicity constraints are violated. In this paper, we develop an empirical likelihood-based method for the isotonic regression model that incorporates the auxiliary information. Because the monotonicity constraints still hold, the PAVA can be used for parameter estimation. Simulation studies demonstrate that the proposed method can yield more efficient estimates, and in some situations the efficiency improvement is substantial. We apply this method to a dementia study. Copyright © 2013 John Wiley & Sons, Ltd.
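For readers unfamiliar with it, the pool-adjacent-violators algorithm mentioned above can be sketched in a few lines. This is a generic textbook version of PAVA, not the paper's augmented estimator.

```python
def pava(y, w=None):
    """Pool-adjacent-violators: weighted least-squares fit of y under a
    monotone non-decreasing constraint.  Returns the isotonic fit."""
    w = w or [1.0] * len(y)
    # Each block holds [weighted mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit

print(pava([1.0, 3.0, 2.0]))   # the 3, 2 violation is pooled to 2.5
```

The merging step is exactly where auxiliary-information constraints can break monotonicity, which motivates the empirical-likelihood reformulation in the abstract.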
Directory of Open Access Journals (Sweden)
Shu-Hwa Chen
Full Text Available BACKGROUND: Selecting an appropriate substitution model and deriving a tree topology for a given sequence set are essential in phylogenetic analysis. However, such time-consuming, computationally intensive tasks rely on knowledge of substitution model theories and related expertise to run through all possible combinations of several separate programs. To ensure a thorough and efficient analysis and avert tedious manipulations of various programs, this work presents an intuitive framework, the phylogenetic reconstruction with automatic likelihood model selectors (PALM), with convincing, updated algorithms and a best-fit model selection mechanism for seamless phylogenetic analysis. METHODOLOGY: As an integrated framework of ClustalW, PhyML, MODELTEST, ProtTest, and several in-house programs, PALM evaluates the fitness of 56 substitution models for nucleotide sequences and 112 substitution models for protein sequences with scores in various criteria. The input for PALM can be either sequences in FASTA format or a sequence alignment file in PHYLIP format. To accelerate the computing of maximum likelihood and bootstrapping, this work integrates MPICH2/PhyML, PalmMonitor and the Palm job controller across several machines with multiple processors and adopts a task-parallelism approach. Moreover, an intuitive and interactive web component, PalmTree, is developed for displaying and operating the output tree, with options for tree rooting, branch swapping, viewing branch length values and bootstrapping scores, and removing nodes to restart the analysis iteratively. SIGNIFICANCE: The workflow of PALM is straightforward and coherent. Via a succinct, user-friendly interface, researchers unfamiliar with phylogenetic analysis can easily use this server to submit sequences, retrieve the output, and re-submit a job based on a previous result if some sequences are to be deleted or added for phylogenetic reconstruction. PALM results in an inference of
Optimizing Likelihood Models for Particle Trajectory Segmentation in Multi-State Systems.
Young, Dylan Christopher; Scrimgeour, Jan
2018-06-19
Particle tracking offers significant insight into the molecular mechanics that govern the behavior of living cells. The analysis of molecular trajectories that transition between different motive states, such as diffusive, driven and tethered modes, is of considerable importance, with even single trajectories containing significant amounts of information about a molecule's environment and its interactions with cellular structures. Hidden Markov models (HMMs) have been widely adopted to perform the segmentation of such complex tracks. In this paper, we show that extensive analysis of hidden Markov model outputs using data derived from multi-state Brownian dynamics simulations can be used both for the optimization of the likelihood models used to describe the states of the system and for characterization of the technique's failure mechanisms. This analysis was made possible by the implementation of a parallelized adaptive direct search algorithm on an Nvidia graphics processing unit. This approach provides critical information for the visualization of HMM failure and successful design of particle tracking experiments where trajectories contain multiple mobile states. © 2018 IOP Publishing Ltd.
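A minimal sketch of the HMM segmentation step described above: a standard Viterbi decoder with assumed Gaussian step-length likelihood models for two motive states. The state parameters, transition probabilities, and step data are invented for illustration.

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely state path; obs_loglik[t][s] = log p(observation_t | state s)."""
    n_states = len(log_init)
    delta = [log_init[s] + obs_loglik[0][s] for s in range(n_states)]
    back = []
    for t in range(1, len(obs_loglik)):
        new, ptr = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda r: delta[r] + log_trans[r][s])
            ptr.append(best)
            new.append(delta[best] + log_trans[best][s] + obs_loglik[t][s])
        delta, back = new, back + [ptr]
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):          # trace back-pointers to the start
        path.append(ptr[path[-1]])
    return path[::-1]

def gauss_ll(x, mu, sd):
    """Log-density of a Gaussian step-length likelihood model."""
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

# Two motive states: state 0 "tethered" (small steps), state 1 "driven" (large steps).
steps = [0.1, 0.2, 0.1, 1.1, 0.9, 1.0, 0.2]
oll = [[gauss_ll(x, 0.15, 0.1), gauss_ll(x, 1.0, 0.1)] for x in steps]
lt = [[math.log(0.9), math.log(0.1)], [math.log(0.1), math.log(0.9)]]
path = viterbi(oll, lt, [math.log(0.5)] * 2)
print(path)   # segments the track into tethered and driven phases
```

Optimizing the parameters of `gauss_ll` per state (e.g. by a direct search over the HMM likelihood) is the kind of likelihood-model tuning the paper studies at scale on a GPU.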
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
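The empirical correction principle described above (rescale the statistic so its mean matches the nominal degrees of freedom) can be illustrated in a few lines. The simulated statistics below are toy draws, not SEM fit statistics.

```python
import random

def empirical_bartlett(t_stats, df):
    """Rescale a likelihood-ratio-type statistic so its empirical mean equals
    the degrees of freedom of the reference chi-square distribution."""
    c = df / (sum(t_stats) / len(t_stats))
    return [c * t for t in t_stats]

# Toy illustration: statistics inflated by 20% relative to chi-square(df).
random.seed(0)
df = 10
# Crude chi-square draws as sums of squared standard normals.
t = [1.2 * sum(random.gauss(0, 1) ** 2 for _ in range(df)) for _ in range(2000)]
t_corr = empirical_bartlett(t, df)
mean_corr = sum(t_corr) / len(t_corr)
print(abs(mean_corr - df) < 1e-9)   # corrected mean matches df by construction
```

In the paper the correction factor is derived empirically as a function of N and p rather than from replicates, but the matching-the-mean logic is the same.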
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F
2011-10-04
The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.
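Although the paper's own scoring details are not reproduced here, RELM-style likelihood testing of gridded forecasts is commonly based on independent Poisson cells. The sketch below compares two hypothetical forecasts that way; the cell rates and observed counts are invented.

```python
import math

def poisson_loglik(forecast, observed):
    """Joint log-likelihood of observed earthquake counts per cell under a
    forecast of expected counts, assuming independent Poisson cells."""
    ll = 0.0
    for lam, n in zip(forecast, observed):
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)   # log Poisson pmf
    return ll

# Hypothetical 4-cell forecasts: model A concentrates rate where quakes occurred.
observed = [2, 0, 1, 0]
model_a  = [1.5, 0.2, 0.8, 0.5]
model_b  = [0.5, 1.0, 0.5, 1.0]
print(poisson_loglik(model_a, observed) > poisson_loglik(model_b, observed))
```

Comparing such joint log-likelihoods across submitted forecasts is one simple way to rank which forecast was most "successful" at the observed earthquake locations.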
Khakzad, Nima; Khan, Faisal; Amyotte, Paul
2015-07-01
Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data on major accidents, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures, based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.
Using the Elaboration Likelihood Model to Address Drunkorexia Among College Students.
Glassman, Tavis; Paprzycki, Peter; Castor, Thomas; Wotring, Amy; Wagner-Greene, Victoria; Ritzman, Matthew; Diehr, Aaron J; Kruger, Jessica
2017-12-26
The many consequences related to alcohol consumption among college students are well documented. Drunkorexia, a relatively new term and area of research, is characterized by skipping meals to reduce caloric intake and/or exercising excessively in an attempt to compensate for calories associated with high-volume drinking. The objective of this study was to use the Elaboration Likelihood Model to compare the impact of central and peripheral prevention messages on alcohol consumption and drunkorexic behavior. Researchers employed a quasi-experimental design, collecting pre- and post-test data from 172 college students living in residence halls at a large Midwestern university, to assess the impact of the prevention messages. Participants in the treatment groups received the message in person (flyer), through email, and via a text message in weekly increments. Results showed that participants exposed to the peripherally framed message decreased the frequency of their alcohol consumption over a 30-day period (p = .003), the number of drinks they consumed the last time they drank (p = .029), the frequency with which they had more than five drinks over a 30-day period (p = .019), as well as the maximum number of drinks they had on any occasion in the past 30 days (p = .014). Conclusions/Importance: While more research is needed in this area, the findings from this study indicate that researchers and practitioners should design peripheral (short and succinct), rather than central (complex and detailed), messages to prevent drunkorexia and its associated behaviors.
Withers, Giselle F; Wertheim, Eleanor H
2004-01-01
This study applied principles from the Elaboration Likelihood Model of Persuasion to the prevention of disordered eating. Early adolescent girls watched either a preventive videotape only (n=114) or video plus post-video activity (verbal discussion, written exercises, or control discussion) (n=187); or had no intervention (n=104). Significantly more body image and knowledge improvements occurred at post video and follow-up in the intervention groups compared to no intervention. There were no outcome differences among intervention groups, or between girls with high or low elaboration likelihood. Further research is needed in integrating the videotape into a broader prevention package.
Kelderman, Henk
1991-01-01
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
Kelderman, Henk
1992-01-01
In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
Lee, Woong-Kyu
2012-01-01
The principal objective of this study was to gain insight into attitude changes occurring during IT acceptance from the perspective of the elaboration likelihood model (ELM). In particular, the primary target of this study was the process of IT acceptance through an education program. Although the Internet and computers are now quite ubiquitous, and…
Andrews, Lester W.; Gutkin, Terry B.
1994-01-01
Investigates variables drawn from the Elaboration Likelihood Model (ELM) that might be manipulated to enhance the persuasiveness of a psychoeducational report. Results showed teachers in training were more persuaded by reports with high message quality. Findings are discussed in terms of the ELM and professional school psychology practice. (RJM)
Bergboer, N.H.; Verdult, V.; Verhaegen, M.H.G.
2002-01-01
We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting
Bolck, A.; Ni, H.; Lopatka, M.
2015-01-01
Likelihood ratio (LR) models are moving into the forefront of forensic evidence evaluation as these methods are adopted by a diverse range of application areas in forensic science. We examine the fundamentally different results that can be achieved when feature- and score-based methodologies are
Nakamura, M; Saito, K; Wakabayashi, M
1990-04-01
The purpose of this study was to investigate how attitude change is generated by the recipient's degree of attitude formation, evaluative-emotional elements contained in the persuasive messages, and source expertise as a peripheral cue in the persuasion context. Hypotheses based on the Attitude Formation Theory of Mizuhara (1982) and the Elaboration Likelihood Model of Petty and Cacioppo (1981, 1986) were examined. Eighty undergraduate students served as subjects in the experiment, the first stage of which involved manipulating the degree of attitude formation with respect to nuclear power development. Then, the experimenter presented persuasive messages with varying combinations of evaluative-emotional elements from a source with either high or low expertise on the subject. Results revealed a significant interaction effect on attitude change among attitude formation, persuasive message and the expertise of the message source. That is, high attitude formation subjects resisted evaluative-emotional persuasion from the high expertise source, while low attitude formation subjects changed their attitude when exposed to the same persuasive message from a low expertise source. Results exceeded initial predictions based on the Attitude Formation Theory and the Elaboration Likelihood Model.
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models
Mesters, G.; Koopman, S.J.; Ooms, M.
2016-01-01
An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating
Maximum likelihood estimation for Cox's regression model under nested case-control sampling
DEFF Research Database (Denmark)
Scheike, Thomas; Juul, Anders
2004-01-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazard...
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations, and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make, but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
Chen, Feng; Chen, Suren; Ma, Xiaoxiang
2018-06-01
Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models developed at large temporal scales struggle to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving-environment information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic conditions, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature and a chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that a number of factors are related to crash likelihood on I-25. Specifically, a weekend indicator, a November indicator, a low speed limit and a long remaining service life of rutting indicator are found to increase crash likelihood, while a 5-am indicator and the number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crash risk imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.
Lirio, R B; Dondériz, I C; Pérez Abalo, M C
1992-08-01
The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
Bical, Adil; Yılmaz, R. Ayhan
2018-01-01
The purpose of this study is to reveal how persuasion works in public service announcements broadcast in Turkey on hazelnut and orange consumption. According to Petty and Cacioppo's Elaboration Likelihood Model, persuasion operates through two routes: central and peripheral. In-depth interviews were conducted to achieve the goal of the study. Respondents were asked whether they processed the message of the PSA centrally or peripherally. Advertisements on consumption of hazelnu...
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalent relationship between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the effective elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions - NSE, Generalized Error Distribution with BC (BC-GED) and Skew Generalized Error Distribution with BC (BC-SGED) - are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performance of the calibrated models is compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated results of high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well, but baseflow badly, owing to the assumption of a Gaussian error distribution, in which the probability of large errors is low but small errors around zero are approximately equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
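Step (1) above, the equivalence between NSE and the Gaussian iid likelihood, can be checked numerically: with the error variance set to its maximum-likelihood value, both criteria are monotone functions of the sum of squared errors and therefore rank competing simulations identically. The observation and simulation series below are invented for the check.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / total variance of observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / sst

def gauss_loglik(obs, sim):
    """Gaussian iid log-likelihood with sigma^2 set to its MLE, SSE / n."""
    n = len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return -0.5 * n * (math.log(2 * math.pi * sse / n) + 1)

obs   = [1.0, 2.0, 4.0, 3.0, 5.0]
sim_a = [1.1, 2.2, 3.8, 3.1, 4.9]   # better fit
sim_b = [1.5, 1.5, 3.0, 3.5, 4.0]   # worse fit
better_nse = nse(obs, sim_a) > nse(obs, sim_b)
better_ll  = gauss_loglik(obs, sim_a) > gauss_loglik(obs, sim_b)
print(better_nse and better_ll)     # both criteria rank the models identically
```

This monotone link is why NSE-based calibration implicitly assumes homoscedastic Gaussian residuals, the assumption the BC-GED and BC-SGED likelihoods relax.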
The phylogenetic likelihood library.
Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A
2015-03-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
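The PLL's actual API is not reproduced here; the sketch below only illustrates the kind of computation such a library optimizes - a phylogenetic likelihood under the Jukes-Cantor (JC69) model for a two-sequence tree, with a crude grid search standing in for the library's branch-length optimization routines.

```python
import math

def jc69_p_same(t):
    """JC69 probability that a site is identical after total branch length t."""
    return 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)

def pair_loglik(seq1, seq2, t):
    """Log-likelihood of an aligned sequence pair at branch length t."""
    p_same = jc69_p_same(t)
    p_diff = (1.0 - p_same) / 3.0           # three possible different bases
    ll = 0.0
    for a, b in zip(seq1, seq2):
        ll += math.log(0.25 * (p_same if a == b else p_diff))
    return ll

def mle_branch_length(seq1, seq2):
    """Grid-search MLE of the branch length (a crude stand-in for the
    optimized routines a likelihood library provides)."""
    grid = [i / 1000 for i in range(1, 3000)]
    return max(grid, key=lambda t: pair_loglik(seq1, seq2, t))

s1 = "ACGTACGTACGTACGTACGT"
s2 = "ACGTACGAACGTACGTACCT"    # 2 differences out of 20 sites
t_hat = mle_branch_length(s1, s2)
# Analytic JC69 distance: -(3/4) ln(1 - 4p/3) with p = 2/20.
t_exact = -0.75 * math.log(1 - 4 * 0.1 / 3)
print(abs(t_hat - t_exact) < 1e-3)
```

Real tools evaluate this site-by-site likelihood over whole trees millions of times, which is why the vectorized, numerically scaled implementations in the PLL matter.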
Directory of Open Access Journals (Sweden)
Wang Huai-Chun
2009-09-01
Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site depend on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.
Modelling of risk events with uncertain likelihoods and impacts in large infrastructure projects
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2010-01-01
This paper presents contributions to the mathematical core of risk and uncertainty management in compliance with the principles of New Budgeting laid out in 2008 by the Danish Ministry of Transport to be used in large infrastructure projects. Basically, the new principles are proposed in order ... to prevent future budget overruns. One of the central ideas is to introduce improved risk management processes, and the present paper addresses this particular issue. A relevant cost function in terms of unit prices and quantities is developed, and an event impact matrix with uncertain impacts from independent ... uncertain risk events is used to calculate the total uncertain risk budget. Cost impacts from the individual risk events on the individual project activities are kept precise track of in order to comply with the requirements of New Budgeting. Additionally, uncertain likelihoods for the occurrence of risk ...
Directory of Open Access Journals (Sweden)
Rajat Malik
A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), is typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered, followed by various spatially stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained, albeit with some information loss, suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
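The sampling idea above can be sketched for a generic spatial ILM: the exact infectious pressure on each susceptible sums a distance kernel over all infected individuals, and a simple random sample of infecteds, rescaled by the inverse sampling fraction, gives an unbiased approximation at a fraction of the cost. The power-law kernel and the parameters `alpha` and `beta` here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def infectious_pressure(suscept_xy, infect_xy, alpha=1.0, beta=2.0):
    """Exact pressure on each susceptible: alpha * sum over infecteds of d**-beta."""
    d = np.linalg.norm(suscept_xy[:, None, :] - infect_xy[None, :, :], axis=2)
    return alpha * np.sum(d ** -beta, axis=1)

def sampled_pressure(suscept_xy, infect_xy, frac=0.2, alpha=1.0, beta=2.0):
    """Approximate pressure from a simple random sample of infecteds,
    rescaled so the estimator is unbiased for the full sum."""
    m = max(1, int(frac * len(infect_xy)))
    idx = rng.choice(len(infect_xy), size=m, replace=False)
    return infectious_pressure(suscept_xy, infect_xy[idx], alpha, beta) * (len(infect_xy) / m)

S = rng.uniform(0, 10, size=(50, 2))   # susceptible locations
I = rng.uniform(0, 10, size=(200, 2))  # infected locations

exact = infectious_pressure(S, I)      # O(n_S * n_I) distance evaluations
approx = sampled_pressure(S, I, frac=0.2)  # roughly a fifth of the work
```

The spatially stratified schemes in the paper refine this by sampling within distance bands, which reduces the variance the heavy-tailed kernel induces under pure random sampling.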
Deconvolving the wedge: maximum-likelihood power spectra via spherical-wave visibility modelling
Ghosh, A.; Mertens, F. G.; Koopmans, L. V. E.
2018-03-01
Direct detection of the Epoch of Reionization (EoR) via the red-shifted 21-cm line will have unprecedented implications on the study of structure formation in the infant Universe. To fulfil this promise, current and future 21-cm experiments need to detect this weak EoR signal in the presence of foregrounds that are several orders of magnitude larger. This requires extreme noise control and improved wide-field high dynamic-range imaging techniques. We propose a new imaging method based on a maximum likelihood framework which solves for the interferometric equation directly on the sphere, or equivalently in the uvw-domain. The method uses the one-to-one relation between spherical waves and spherical harmonics (SpH). It consistently handles signals from the entire sky, and does not require a w-term correction. The SpH coefficients represent the sky-brightness distribution and the visibilities in the uvw-domain, and provide a direct estimate of the spatial power spectrum. Using these spectrally smooth SpH coefficients, bright foregrounds can be removed from the signal, including their side-lobe noise, which is one of the limiting factors in high dynamic-range wide-field imaging. Chromatic effects causing the so-called 'wedge' are effectively eliminated (i.e. deconvolved) in the cylindrical (k⊥, k∥) power spectrum, compared to a power spectrum computed directly from the images of the foreground visibilities where the wedge is clearly present. We illustrate our method using simulated Low-Frequency Array observations, finding an excellent reconstruction of the input EoR signal with minimal bias.
Pal, Suvra; Balakrishnan, N
2017-10-01
In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over-dispersion and under-dispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
Kling, Daniel; Tillmar, Andreas; Egeland, Thore; Mostad, Petter
2015-09-01
Several applications necessitate an unbiased determination of relatedness, be it in linkage or association studies or in a forensic setting. An appropriate model to compute the joint probability of some genetic data for a set of persons given some hypothesis about the pedigree structure is then required. The increasing number of markers available through high-density SNP microarray typing and NGS technologies intensifies the demand, where using a large number of markers may lead to biased results due to strong dependencies between closely located loci, both within pedigrees (linkage) and in the population (allelic association or linkage disequilibrium (LD)). We present a new general model, based on a Markov chain for inheritance patterns and another Markov chain for founder allele patterns, the latter allowing us to account for LD. We also demonstrate a specific implementation for X chromosomal markers that allows for computation of likelihoods based on hypotheses of alleged relationships and genetic marker data. The algorithm can simultaneously account for linkage, LD, and mutations. We demonstrate its feasibility using simulated examples. The algorithm is implemented in the software FamLinkX, providing a user-friendly GUI for Windows systems (FamLinkX, as well as further usage instructions, is freely available at www.famlink.se ). Our software provides the necessary means to solve cases where no previous implementation exists. In addition, the software has the possibility to perform simulations in order to further study the impact of linkage and LD on computed likelihoods for an arbitrary set of markers.
International Nuclear Information System (INIS)
He, Yi; Scheraga, Harold A.; Liwo, Adam
2015-01-01
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible from the all-atom representation of the original biomolecular system, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Exact sampling from conditional Boolean models with applications to maximum likelihood inference
Lieshout, van M.N.M.; Zwet, van E.W.
2001-01-01
We are interested in estimating the intensity parameter of a Boolean model of discs (the bombing model) from a single realization. To do so, we derive the conditional distribution of the points (germs) of the underlying Poisson process. We demonstrate how to apply coupling from the past to generate
The early maximum likelihood estimation model of audiovisual integration in speech perception
DEFF Research Database (Denmark)
Andersen, Tobias
2015-01-01
Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely […] integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...
Likelihood analysis of the next-to-minimal supergravity motivated model
International Nuclear Information System (INIS)
Balazs, Csaba; Carter, Daniel
2009-01-01
In anticipation of data from the Large Hadron Collider (LHC) and the potential discovery of supersymmetry, we calculate the odds of the next-to-minimal version of the popular supergravity motivated model (NmSuGra) being discovered at the LHC to be 4:3 (57%). We also demonstrate that viable regions of the NmSuGra parameter space outside the LHC reach can be covered by upgraded versions of dark matter direct detection experiments, such as super-CDMS, at 99% confidence level. Due to the similarities of the models, we expect very similar results for the constrained minimal supersymmetric standard model (CMSSM).
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert
We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short...
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Ou, Lu; Chow, Sy-Miin; Ji, Linying; Molenaar, Peter C M
2017-01-01
The autoregressive latent trajectory (ALT) model synthesizes the autoregressive model and the latent growth curve model. The ALT model is flexible enough to produce a variety of discrepant model-implied change trajectories. While some researchers consider this a virtue, others have cautioned that this may confound interpretations of the model's parameters. In this article, we show that some, but not all, of these interpretational difficulties may be clarified mathematically and tested explicitly via likelihood ratio tests (LRTs) imposed on the initial conditions of the model. We show analytically the nested relations among three variants of the ALT model and the constraints needed to establish equivalences. A Monte Carlo simulation study indicated that LRTs, particularly when used in combination with information criterion measures, can allow researchers to test targeted hypotheses about the functional forms of the change process under study. We further demonstrate when and how such tests may justifiably be used to facilitate our understanding of the underlying process of change using a subsample (N = 3,995) of longitudinal family income data from the National Longitudinal Survey of Youth.
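The mechanics of a likelihood ratio test between nested models can be illustrated with a deliberately simple pair of models (ordinary linear regression against an intercept-only restriction, not the ALT model itself): twice the log-likelihood gain from releasing a constraint is referred to a chi-squared distribution with degrees of freedom equal to the number of constraints released.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.5 + 0.8 * x + rng.normal(size=n)  # simulated data with a real slope

def gaussian_loglik(resid):
    """Profile log-likelihood of a Gaussian model at the MLE of sigma^2."""
    s2 = np.mean(resid ** 2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

# Restricted model: intercept only. Full model: intercept + slope.
ll0 = gaussian_loglik(y - y.mean())
b, a = np.polyfit(x, y, 1)            # MLE slope and intercept
ll1 = gaussian_loglik(y - (a + b * x))

lr = 2 * (ll1 - ll0)                  # likelihood ratio statistic
p = stats.chi2.sf(lr, df=1)           # one constraint released
print(f"LR = {lr:.1f}, p = {p:.2e}")
```

As in the article, the asymptotic chi-squared reference is only valid when the restricted model is properly nested in the full one, which is exactly what the authors establish analytically for the ALT variants.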
Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.
2010-01-01
We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,
Directory of Open Access Journals (Sweden)
Salces Judit
2011-08-01
Abstract Background Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. Genes top and bottom ranked by the three approaches were similar; however, notable differences were found for the best pair of genes selected by each method and for the remaining genes of the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions The optimal statistical properties of Maximum Likelihood estimation, combined with the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
Energy Technology Data Exchange (ETDEWEB)
Storm, Emma; Weniger, Christoph [GRAPPA, Institute of Physics, University of Amsterdam, Science Park 904, 1090 GL Amsterdam (Netherlands); Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr [LAPTh, CNRS, 9 Chemin de Bellevue, BP-110, Annecy-le-Vieux, 74941, Annecy Cedex (France)
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
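A toy version of penalized Poisson likelihood template regression can be sketched as follows. The two random "templates", the quadratic regularizer, and the penalty strength are illustrative stand-ins for SkyFACT's physical templates and entropy-motivated regularization; only the core recipe (minimize the Poisson negative log-likelihood plus a penalty, under bound constraints, with L-BFGS-B) follows the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Two flattened "template" maps and Poisson counts drawn from a known mixture.
npix = 500
T = np.abs(rng.normal(1.0, 0.3, size=(2, npix)))   # per-pixel template predictions
true_norms = np.array([3.0, 1.5])
counts = rng.poisson(true_norms @ T)

def neg_penalized_loglik(theta, lam=1e-3):
    """Poisson negative log-likelihood (dropping the theta-independent
    log(counts!) term) plus a quadratic penalty on the normalizations."""
    mu = theta @ T
    nll = np.sum(mu - counts * np.log(mu))
    return nll + lam * np.sum(theta ** 2)

res = minimize(neg_penalized_loglik, x0=np.ones(2), method="L-BFGS-B",
               bounds=[(1e-6, None)] * 2)   # non-negativity via bounds
print(res.x)  # close to true_norms
```

The real problem has ~10^5 parameters rather than two, which is why the paper invests in efficient handling of the Hessian via Cholesky factorization; the structure of the objective is the same.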
Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.
2013-12-01
Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood-based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, and tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the JOINT spectral variance of
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Horton, Rachael Jane; Minniti, Antoinette; Mireylees, Stewart; McEntegart, Damian
2008-11-01
Non-compliance in clinical studies is a significant issue, but causes remain unclear. Utilizing the Elaboration Likelihood Model of persuasion, this study assessed the psychophysical peripheral cue 'Interactive Voice Response System (IVRS) call frequency' on compliance. 71 participants were randomized to once daily (OD), twice daily (BID) or three times daily (TID) call schedules over two weeks. Participants completed 30-item cognitive function tests at each call. Compliance was defined as proportion of expected calls within a narrow window (+/- 30 min around scheduled time), and within a relaxed window (-30 min to +4 h). Data were analyzed by ANOVA and pairwise comparisons adjusted by the Bonferroni correction. There was a relationship between call frequency and compliance. Bonferroni adjusted pairwise comparisons showed significantly higher compliance (p=0.03) for the BID (51.0%) than TID (30.3%) for the narrow window; for the extended window, compliance was higher (p=0.04) with OD (59.5%), than TID (38.4%). The IVRS psychophysical peripheral cue call frequency supported the ELM as a route to persuasion. The results also support OD strategy for optimal compliance. Models suggest specific indicators to enhance compliance with medication dosing and electronic patient diaries to improve health outcomes and data integrity respectively.
Directory of Open Access Journals (Sweden)
Katarzyna A Dembek
BACKGROUND: Medical management of critically ill equine neonates (foals) can be expensive and labor intensive. Predicting the odds of foal survival using clinical information could facilitate the decision-making process for owners and clinicians. Numerous prognostic indicators and mathematical models to predict outcome in foals have been published; however, a validated scoring method to predict survival in sick foals has not been reported. The goal of this study was to develop and validate a scoring system that can be used by clinicians to predict the likelihood of survival of equine neonates based on clinical data obtained on admission. METHODS AND RESULTS: Data from 339 hospitalized foals of less than four days of age admitted to three equine hospitals were included to develop the model. Thirty-seven variables including historical information, physical examination and laboratory findings were analyzed by generalized boosted regression modeling (GBM) to determine which ones would be included in the survival score. Of these, six variables were retained in the final model. The weight for each variable was calculated using a generalized linear model and the probability of survival for each total score was determined. The highest (7) and the lowest (0) scores represented 97% and 3% probability of survival, respectively. The accuracy of this survival score was validated in a prospective study on data from 283 hospitalized foals from the same three hospitals. Sensitivity, specificity, positive and negative predictive values for the survival score in the prospective population were 96%, 71%, 91%, and 85%, respectively. CONCLUSIONS: The survival score developed in our study was validated in a large number of foals with a wide range of diseases and can be easily implemented using data available in most equine hospitals. GBM was a useful tool to develop the survival score. Further evaluations of this scoring system in field conditions are needed.
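The score-to-probability step described above can be illustrated with a logistic (GLM-style) link. The per-variable weights below are hypothetical placeholders, not the paper's published weights; the intercept and slope are chosen only so that the endpoints reproduce the 3% (score 0) and 97% (score 7) survival probabilities quoted in the abstract.

```python
import numpy as np

# Hypothetical 0/1 admission indicators and weights summing to a max score of 7.
weights = np.array([2, 1, 1, 1, 1, 1])

def survival_probability(indicators, intercept=-3.5, slope=1.0):
    """Map an additive clinical score to a survival probability via a
    logistic link, as a generalized linear model would."""
    score = int(np.dot(weights, indicators))
    return score, 1.0 / (1.0 + np.exp(-(intercept + slope * score)))

score, p = survival_probability([1, 1, 1, 1, 1, 1])  # best case: score 7, p ~ 0.97
worst_score, worst_p = survival_probability([0, 0, 0, 0, 0, 0])  # score 0, p ~ 0.03
```

An integer score with a lookup table of probabilities is attractive clinically precisely because it hides the GLM: the clinician only needs the six indicators and the table.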
Bhutada, Nilesh S; Rollins, Brent L; Perri, Matthew
2017-04-01
A randomized, posttest-only online survey study of adult U.S. consumers determined the advertising effectiveness (attitude toward ad, brand, company, spokes-characters, attention paid to the ad, drug inquiry intention, and perceived product risk) of animated spokes-characters in print direct-to-consumer (DTC) advertising of prescription drugs and the moderating effects of consumers' involvement. Consumers' responses (n = 490) were recorded for animated versus nonanimated (human) spokes-characters in a fictitious DTC ad. Guided by the elaboration likelihood model, data were analyzed using a 2 (spokes-character type: animated/human) × 2 (involvement: high/low) factorial multivariate analysis of covariance (MANCOVA). The MANCOVA indicated significant main effects of spokes-character type and involvement on the dependent variables after controlling for covariate effects. Of the several ad effectiveness variables, consumers only differed on their attitude toward the spokes-characters between the two spokes-character types (specifically, more favorable attitudes toward the human spokes-character). Apart from perceived product risk, high-involvement consumers reacted more favorably to the remaining ad effectiveness variables compared to the low-involvement consumers, and exhibited significantly stronger drug inquiry intentions during their next doctor visit. Further, the moderating effect of consumers' involvement was not observed (nonsignificant interaction effect between spokes-character type and involvement).
Multilevel maximum likelihood estimation with application to covariance matrices
Czech Academy of Sciences Publication Activity Database
Turčičová, Marie; Mandel, J.; Eben, Kryštof
Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016
A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...
De Rooi, J.J.; Van der Pers, N.M.; Hendrikx, R.W.A.; Delhez, R.; Bottger, A.J.; Eilers, P.H.C.
2014-01-01
X-ray diffraction scans consist of series of counts; these numbers obey Poisson distributions with varying expected values. These scans are often smoothed and the Kα2 component is removed. This article proposes a framework in which both issues are treated. Penalized likelihood estimation is used to
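Smoothing Poisson counts by penalized likelihood, rather than by ordinary least squares, respects the fact that the variance grows with the expected count. A minimal sketch of the idea, with a Whittaker-style second-difference penalty on the log-intensity and Newton iterations; the simulated single-peak scan, penalty weight, and iteration count are illustrative assumptions, and the Kα2 treatment of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.linspace(0, 1, n)
truth = 50 * np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2) + 5   # one diffraction peak
counts = rng.poisson(truth)

# Second-order difference penalty matrix (Whittaker / P-spline style).
D = np.diff(np.eye(n), n=2, axis=0)
P = D.T @ D

def smooth_poisson(y, lam=5.0, iters=30):
    """Maximize sum(y*eta - exp(eta)) - lam * |D eta|^2 over the
    log-intensity eta via Newton iterations; return the fitted intensity."""
    eta = np.log(y + 1.0)
    for _ in range(iters):
        mu = np.exp(eta)
        H = np.diag(mu) + 2 * lam * P      # negative Hessian of the objective
        g = (y - mu) - 2 * lam * (P @ eta)  # gradient
        eta = eta + np.linalg.solve(H, g)
    return np.exp(eta)

fitted = smooth_poisson(counts)
```

Working on the log scale keeps the fitted intensity positive by construction, which a least-squares smoother applied to raw counts does not guarantee.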
Directory of Open Access Journals (Sweden)
Jesús Vega Encabo
2015-11-01
In this paper, I claim that subjectivity is a way of being that is constituted through a set of practices in which the self is subject to the dangers of fictionalizing and plotting her life and self-image. I examine some ways of becoming subject through narratives and through theatrical performance before others. Through these practices, a real and active subjectivity is revealed, capable of self-knowledge and self-transformation.
Boden, Lauren M; Boden, Stephanie A; Premkumar, Ajay; Gottschalk, Michael B; Boden, Scott D
2018-02-09
Retrospective analysis of prospectively collected data. To create a data-driven triage system stratifying patients by likelihood of undergoing spinal surgery within one year of presentation. Low back pain (LBP) and radicular lower extremity (LE) symptoms are common musculoskeletal problems. There is currently no standard data-derived triage process based on information that can be obtained prior to the initial physician-patient encounter to direct patients to the optimal physician type. We analyzed patient-reported data from 8006 patients with a chief complaint of LBP and/or LE radicular symptoms who presented to surgeons at a large multidisciplinary spine center between September 1, 2005 and June 30, 2016. Univariate and multivariate analysis identified independent risk factors for undergoing spinal surgery within one year of initial visit. A model incorporating these risk factors was created using a random sample of 80% of the total patients in our cohort, and validated on the remaining 20%. The baseline one-year surgery rate within our cohort was 39% for all patients and 42% for patients with LE symptoms. Those identified as high likelihood by the center's existing triage process had a surgery rate of 45%. The new triage scoring system proposed in this study was able to identify a high likelihood group in which 58% underwent surgery, which is a 46% higher surgery rate than in non-triaged patients and a 29% improvement over our institution's existing triage system. The data-driven triage model and scoring system derived and validated in this study (the Spine Surgery Likelihood model [SSL-11]) significantly improved existing processes in predicting the likelihood of undergoing spinal surgery within one year of initial presentation. This triage system will allow centers to more selectively screen for surgical candidates and more effectively direct patients to surgeons or non-operative spine specialists. Level of Evidence: 4.
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...
Modelling of individual subject ozone exposure response kinetics.
Schelegle, Edward S; Adams, William C; Walby, William F; Marion, M Susan
2012-06-01
A better understanding of individual subject ozone (O3) exposure-response kinetics will provide insight into how to improve models used in the risk assessment of ambient ozone exposure. The objective was to develop a simple two-compartment exposure-response model that describes individual subject decrements in forced expiratory volume in one second (FEV1) induced by the acute inhalation of O3 lasting up to 8 h. FEV1 measurements of 220 subjects who participated in 14 previously completed studies were fit to the model using both particle swarm and nonlinear least squares optimization techniques to identify three subject-specific coefficients producing minimum "global" and local errors, respectively. Observed and predicted decrements in FEV1 of the 220 subjects were used for validation of the model. Further validation was provided by comparing the observed O3-induced FEV1 decrements in an additional eight studies with predicted values obtained using model coefficients estimated from the 220 subjects used in cross-validation. Overall, the individual subject measured and modeled FEV1 decrements were highly correlated (mean R² of 0.69 ± 0.24). In addition, it was shown that a matrix of individual subject model coefficients can be used to predict the mean and variance of group decrements in FEV1. This modeling approach provides insight into individual subject O3 exposure-response kinetics and provides a potential starting point for improving the risk assessment of environmental O3 exposure.
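The general approach can be sketched, though not with the paper's actual compartment equations: assume an effect compartment with first-order uptake and elimination under a constant ozone concentration, simulate one subject's noisy FEV1 decrements, and recover the subject-specific coefficients by nonlinear least squares. The functional form, parameter names (`gain`, `k_out`), and all numeric values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fev1_decrement(t, gain, k_out, conc=0.12):
    """Percent FEV1 decrement under a constant O3 concentration `conc` (ppm):
    first-order accumulation into an effect compartment with elimination
    rate k_out, so the response saturates at gain * conc / k_out."""
    return gain * conc / k_out * (1.0 - np.exp(-k_out * t))

rng = np.random.default_rng(4)
t = np.linspace(0.5, 8, 16)                        # hours of exposure
true = fev1_decrement(t, gain=12.0, k_out=0.4)
obs = true + rng.normal(0, 0.15, size=t.size)      # one subject's noisy data

# Subject-specific coefficients by nonlinear least squares (conc stays fixed).
popt, pcov = curve_fit(fev1_decrement, t, obs, p0=[5.0, 1.0])
```

Repeating such a fit per subject yields the matrix of individual coefficients the abstract mentions, from which group means and variances of predicted decrements follow directly.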
International Nuclear Information System (INIS)
Wall, M.J.W.
1992-01-01
The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory which predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset, with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs
Directory of Open Access Journals (Sweden)
Jurgen A. Doornik
2017-11-01
This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms for the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.
Thompson, Bryony A; Goldgar, David E; Paterson, Carol; Clendenning, Mark; Walters, Rhiannon; Arnold, Sven; Parsons, Michael T; Walsh, Michael D; Gallinger, Steven; Haile, Robert W; Hopper, John L; Jenkins, Mark A; Lemarchand, Loic; Lindor, Noralane M; Newcomb, Polly A; Thibodeau, Stephen N; Young, Joanne P; Buchanan, Daniel D; Tavtigian, Sean V; Spurdle, Amanda B
2013-01-01
Mismatch repair (MMR) gene sequence variants of uncertain clinical significance are often identified in suspected Lynch syndrome families, and this constitutes a challenge for both researchers and clinicians. Multifactorial likelihood model approaches provide a quantitative measure of MMR variant pathogenicity, but first require input of likelihood ratios (LRs) for different MMR variation-associated characteristics from appropriate, well-characterized reference datasets. Microsatellite instability (MSI) and somatic BRAF tumor data for unselected colorectal cancer probands of known pathogenic variant status were used to derive LRs for tumor characteristics using the Colon Cancer Family Registry (CFR) resource. These tumor LRs were combined with variant segregation within families, and estimates of prior probability of pathogenicity based on sequence conservation and position, to analyze 44 unclassified variants identified initially in Australasian Colon CFR families. In addition, in vitro splicing analyses were conducted on the subset of variants based on bioinformatic splicing predictions. The LR in favor of pathogenicity was estimated to be ~12-fold for a colorectal tumor with a BRAF mutation-negative MSI-H phenotype. For 31 of the 44 variants, the posterior probabilities of pathogenicity were such that altered clinical management would be indicated. Our findings provide a working multifactorial likelihood model for classification that carefully considers mode of ascertainment for gene testing. © 2012 Wiley Periodicals, Inc.
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
Soli, Sigfrid D; Giguère, Christian; Laroche, Chantal; Vaillancourt, Véronique; Dreschler, Wouter A; Rhebergen, Koenraad S; Harkins, Kevin; Ruckstuhl, Mark; Ramulu, Pradeep; Meyers, Lawrence S
corrections environments. The likelihood of effective speech communication at communication distances of 0.5 and 1 m was often less than 0.50 for normal vocal effort. Likelihood values often increased to 0.80 or more when raised or loud vocal effort was used. Effective speech communication at and beyond 5 m was often unlikely, regardless of vocal effort. ESII modeling of nonstationary real-world noise environments may prove an objective means of characterizing their impact on the likelihood of effective speech communication. The normative reference provided by these measures predicts the extent to which hearing impairments that increase the ESII value required for effective speech communication also decrease the likelihood of effective speech communication. These predictions may provide an objective evidence-based link between the essential hearing-critical job task requirements of public safety and law enforcement personnel and ESII-based hearing assessment of individuals who seek to perform these jobs.
2013-03-01
Proliferation Treaty OSINT Open Source Intelligence SAFF Safing, Arming, Fuzing, Firing SIAM Situational Influence Assessment Module SME Subject...expertise. One of the analysts can also be trained to tweak CAST logic as needed. In this initial build, only open-source intelligence ( OSINT ) will
DEFF Research Database (Denmark)
Silvennoinen, Annestiina; Terasvirta, Timo
A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations...
Subjective Expected Utility: A Model of Decision-Making.
Fischoff, Baruch; And Others
1981-01-01
Outlines a model of decision making known to researchers in the field of behavioral decision theory (BDT) as subjective expected utility (SEU). The descriptive and predictive validity of the SEU model, probability and values assessment using SEU, and decision contexts are examined, and a 54-item reference list is provided. (JL)
Directory of Open Access Journals (Sweden)
Seung Oh Lee
2013-10-01
Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, proved useful in the analysis of the nature of floods. Accordingly, this study attempts to estimate a flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, and the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed by using Landsat imagery and the GLUE methodology could be useful in supplementing flood information for flood risk management at a planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
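GLUE itself is a standard procedure: sample candidate parameter sets, score each with an informal likelihood measure, retain the "behavioral" sets above a threshold, and weight their predictions by rescaled likelihood. A minimal sketch in Python; the choice of Nash-Sutcliffe efficiency as the measure and all function names are illustrative assumptions, not taken from this study.

```python
import random

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def glue(model, obs, sampler, n=5000, threshold=0.5):
    """GLUE sketch: keep 'behavioral' parameter sets whose informal
    likelihood measure exceeds the threshold, then weight them by their
    rescaled likelihood so the weights sum to one."""
    behavioral = []
    for _ in range(n):
        theta = sampler()
        score = nash_sutcliffe(obs, model(theta))
        if score > threshold:
            behavioral.append((theta, score))
    total = sum(s for _, s in behavioral)
    return [(theta, s / total) for theta, s in behavioral]
```

The weighted behavioral sets then yield prediction bounds for quantities such as discharge at an ungauged section.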
Böhning, Dankmar; Karasek, Sarah; Terschüren, Claudia; Annuß, Rolf; Fehr, Rainer
2013-03-09
Life expectancy is of increasing prime interest for a variety of reasons. In many countries, life expectancy is growing linearly, without any indication of reaching a limit. The state of North Rhine-Westphalia (NRW) in Germany, with its 54 districts, is considered here, where the above-mentioned growth in life expectancy is occurring as well. However, there is also empirical evidence that life expectancy is not growing linearly at the same level for different regions. To explore this situation further, a likelihood-based cluster analysis is suggested and performed. The modelling uses a nonparametric mixture approach for the latent random effect. Maximum likelihood estimates are determined by means of the EM algorithm, and the number of components in the mixture model is found on the basis of the Bayesian Information Criterion. Regions are classified into the mixture components (clusters) using the maximum posterior allocation rule. For the data analyzed here, 7 components are found, with a spatial concentration of lower life expectancy levels in a centre of NRW, formerly an enormous conglomerate of heavy industry and still the most densely populated area, with Gelsenkirchen having the lowest level of life expectancy growth for both genders. The paper offers some explanations for this fact, including demographic and socio-economic sources. This case study shows that life expectancy growth is largely linear, but it might occur on different levels.
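The EM-plus-BIC machinery described above can be illustrated on a toy one-dimensional problem. The sketch below fits k-component normal mixtures with a common variance by EM and compares them by BIC; it is a generic illustration of the workflow, not the paper's nonparametric mixture implementation.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def em_mixture(data, k, iters=200):
    """EM for a k-component normal mixture with a common variance.
    Returns the component means, weights, and final log-likelihood."""
    lo, hi = min(data), max(data)
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]  # spread initial means
    weights = [1.0 / k] * k
    sigma = (hi - lo) / (2.0 * k) + 1e-6
    for _ in range(iters):
        # E-step: posterior probability that each point belongs to each component
        resp = []
        for x in data:
            p = [w * normal_pdf(x, m, sigma) for w, m in zip(weights, mus)]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, and the common sigma
        nk = [sum(r[j] for r in resp) for j in range(k)]
        weights = [n / len(data) for n in nk]
        mus = [sum(r[j] * x for r, x in zip(resp, data)) / nk[j] for j in range(k)]
        sigma = math.sqrt(sum(r[j] * (x - mus[j]) ** 2
                              for r, x in zip(resp, data)
                              for j in range(k)) / len(data)) + 1e-9
    ll = sum(math.log(sum(w * normal_pdf(x, m, sigma) for w, m in zip(weights, mus)))
             for x in data)
    return mus, weights, ll

def bic(ll, n_params, n_obs):
    """Bayesian Information Criterion; smaller is better."""
    return -2.0 * ll + n_params * math.log(n_obs)
```

In the paper the data points would be district-level life expectancy trends rather than raw scalars, but the model-selection logic (fit for several k, pick the k minimizing BIC) is the same.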
Directory of Open Access Journals (Sweden)
Luan Yihui
2009-09-01
Background: Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results: Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion: Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
Wang, Wenhui; Nunez-Iglesias, Juan; Luan, Yihui; Sun, Fengzhu
2009-09-03
Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.
Rukhin, Andrew L
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.
Schoups, G.; Vrugt, J.A.
2010-01-01
Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance.
Likelihood estimators for multivariate extremes
Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.
2015-01-01
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
Cabreira, Verónica; Pinto, Carla; Pinheiro, Manuela; Lopes, Paula; Peixoto, Ana; Santos, Catarina; Veiga, Isabel; Rocha, Patrícia; Pinto, Pedro; Henrique, Rui; Teixeira, Manuel R
2017-01-01
Lynch syndrome (LS) accounts for up to 4 % of all colorectal cancers (CRC). Detection of a pathogenic germline mutation in one of the mismatch repair genes is the definitive criterion for LS diagnosis, but it is time-consuming and expensive. Immunohistochemistry is the most sensitive prescreening test and its predictive value is very high for loss of expression of MSH2, MSH6, and (isolated) PMS2, but not for MLH1. We evaluated if LS predictive models have a role to improve the molecular testing algorithm in this specific setting by studying 38 individuals referred for molecular testing who were subsequently shown to have loss of MLH1 immunoexpression in their tumors. For each proband we calculated a risk score, which represents the probability that the patient with CRC carries a pathogenic MLH1 germline mutation, using the PREMM(1,2,6) and MMRpro predictive models. Of the 38 individuals, 18.4 % had a pathogenic MLH1 germline mutation. MMRpro performed better for the purpose of this study, presenting an AUC of 0.83 (95 % CI 0.67-0.9; P < 0.001) compared with an AUC of 0.68 (95 % CI 0.51-0.82, P = 0.09) for PREMM(1,2,6). Considering a threshold of 5 %, MMRpro would eliminate unnecessary germline mutation analysis in a significant proportion of cases while keeping very high sensitivity. We conclude that MMRpro is useful to correctly predict who should be screened for a germline MLH1 gene mutation and propose an algorithm to improve the cost-effectiveness of LS diagnosis.
Individual Subjective Initiative Merge Model Based on Cellular Automaton
Directory of Open Access Journals (Sweden)
Yin-Jie Xu
2013-01-01
The merge control models proposed for work zones are classified into two types, the Hard Control Merge (HCM) model and the Soft Control Merge (SCM) model, according to their control intensity, and are compared with a new model, called the Individual Subjective Initiative Merge (ISIM) model, which is based on a linear lane-changing probability strategy in the merging area. This paper focuses on the positive impact of individual subjective initiative on the whole traffic system. The three models (ISIM, HCM, and SCM) are established and compared with each other by two order parameters, that is, system output and average vehicle travel time. Finally, numerical results show that both ISIM and SCM perform better than HCM. Compared with SCM, the output of ISIM is 20 vehicles per hour higher under the symmetric input condition and is more stable under the asymmetric input condition. Meanwhile, the average travel time of ISIM is 2000 time steps less under the oversaturated input condition.
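The linear lane-changing probability strategy mentioned above is easy to sketch: the merge probability grows linearly from 0 at the start of the merging area to 1 at its closed end. The cellular automaton update below is a deliberately naive single-speed illustration; the exact functional form and update rules of the paper's model are not given in the abstract, so everything here is an assumption.

```python
import random

def merge_probability(cell, merge_start, merge_end):
    """Linear lane-changing probability: 0 before the merging area,
    rising linearly to 1 at the closed end (assumes merge_end > merge_start)."""
    if cell < merge_start:
        return 0.0
    return min(1.0, (cell - merge_start) / float(merge_end - merge_start))

def step(closing_lane, open_lane, merge_start, merge_end, rng):
    """One naive CA sweep (vmax = 1 for brevity): each vehicle in the
    closing lane merges sideways into an empty adjacent cell with the
    linear probability, otherwise advances one cell if it is free."""
    n = len(closing_lane)
    for i in range(n - 1, -1, -1):  # back-to-front so each vehicle moves at most once
        if not closing_lane[i]:
            continue
        if open_lane[i] == 0 and rng.random() < merge_probability(i, merge_start, merge_end):
            closing_lane[i], open_lane[i] = 0, 1
        elif i + 1 < n and closing_lane[i + 1] == 0:
            closing_lane[i], closing_lane[i + 1] = 0, 1
    return closing_lane, open_lane
```

System output and average travel time, the two order parameters used in the paper, would be measured by iterating `step` with boundary injection and counting vehicles leaving the open lane.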
A composite likelihood approach for spatially correlated survival data
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
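The FGM copula named in the abstract has the simple density c(u, v) = 1 + θ(1 − 2u)(1 − 2v) for θ in [−1, 1], so a pairwise composite log-likelihood is straightforward to write down. In the sketch below, `theta_fn`, the mapping from pairwise distance to the dependence parameter, is a hypothetical stand-in for the paper's model of dependence as a function of geographic and demographic distance.

```python
import math

def fgm_density(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula density; valid for -1 <= theta <= 1."""
    return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def pairwise_composite_loglik(uniforms, distances, theta_fn):
    """Pairwise composite log-likelihood: sum of log FGM densities over
    all pairs, with the dependence parameter given by theta_fn(distance).
    `theta_fn` is a hypothetical stand-in for a fitted distance model."""
    n = len(uniforms)
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            ll += math.log(fgm_density(uniforms[i], uniforms[j],
                                       theta_fn(distances[i][j])))
    return ll
```

Estimation then amounts to maximizing this composite log-likelihood over the parameters of `theta_fn`; with theta identically zero the pairs are independent and the composite log-likelihood is exactly zero.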
Chen, Qingxia; Ibrahim, Joseph G
2014-07-01
Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified......-based maximum likelihood inference and the effect of specifying different reference Poisson models....
[Homeostasis model assessment (HOMA) values in Chilean elderly subjects].
Garmendia, María Luisa; Lera, Lydia; Sánchez, Hugo; Uauy, Ricardo; Albala, Cecilia
2009-11-01
The homeostasis assessment model for insulin resistance (HOMA-IR) estimates insulin resistance using basal insulin and glucose values and has a good concordance with values obtained with the euglycemic clamp. However, it has a high variability that depends on environmental, genetic, and physiologic factors. Therefore, it is imperative to establish normal HOMA values in different populations. The aims were to report HOMA-IR values in Chilean elderly subjects and to determine the best cutoff point to diagnose insulin resistance. A cross-sectional study was performed of 1003 subjects older than 60 years, of whom 803 (71% women) did not have diabetes. In 154 subjects, an oral glucose tolerance test was also performed. Insulin resistance (IR) was defined as the HOMA value corresponding to percentile 75 of subjects without overweight or underweight. The behavior of HOMA-IR in metabolic syndrome was studied and receiver operating characteristic (ROC) curves were calculated, using glucose intolerance, defined as a blood glucose over 140 mg/dl, and hyperinsulinemia, defined as a serum insulin over 60 microU/ml, two hours after the glucose load. Median HOMA-IR values were 1.7. Percentile 75 in subjects without obesity or underweight was 2.57. The area under the ROC curve, when comparing HOMA-IR with glucose intolerance and hyperinsulinemia, was 0.8 (95% confidence interval 0.72-0.87), with HOMA-IR values ranging from 2.04 to 2.33. HOMA-IR is a useful method to determine insulin resistance in epidemiological studies. The HOMA-IR cutoff point for insulin resistance defined in this population was 2.6.
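The HOMA-IR index itself has a standard closed form: fasting glucose (mg/dl) × fasting insulin (µU/ml) / 405, equivalently glucose in mmol/l × insulin / 22.5. A minimal helper applying the 2.6 cutoff reported for this population (the function names are ours):

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR = fasting glucose (mg/dl) * fasting insulin (uU/ml) / 405
    (equivalently, glucose in mmol/l * insulin / 22.5)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def insulin_resistant(glucose_mg_dl, insulin_uU_ml, cutoff=2.6):
    """Apply the 2.6 cutoff reported in this study; cutoffs are
    population-specific, as the abstract stresses."""
    return homa_ir(glucose_mg_dl, insulin_uU_ml) >= cutoff
```

For example, glucose 90 mg/dl with insulin 9 µU/ml gives HOMA-IR 2.0, below this population's 2.6 threshold.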
Thermal modeling: at the crossroads of several subjects of physics
International Nuclear Information System (INIS)
1997-01-01
The modeling of thermal phenomena is of prime importance for the dimensioning of industrial facilities. However, the understanding of thermal processes requires reference to other subjects of physics like electromagnetism, matter transformation, fluid mechanics, chemistry, etc. The aim of this workshop, organized by the industrial electro-thermal engineering section of the French society of thermal engineers, is to take stock of current or forthcoming advances in the coupling of thermal engineering codes with electromagnetic, fluid mechanics, chemical, and mechanical engineering codes. The modeling of phenomena remains the essential link between the laboratory research of new processes and their industrial developments. Of the 9 talks given during this workshop, 2 deal with thermal processes in nuclear reactors and fall into the INIS scope; the others concern the modeling of industrial heating or electrical processes and were selected for ETDE. (J.S.)
Modelling of subject specific based segmental dynamics of knee joint
Nasir, N. H. M.; Ibrahim, B. S. K. K.; Huq, M. S.; Ahmad, M. K. I.
2017-09-01
This study determines segmental dynamics parameters based on a subject-specific method. Five hemiplegic patients participated in the study, two men and three women. Their ages ranged from 50 to 60 years, weights from 60 to 70 kg, and heights from 145 to 170 cm. The sample group included patients with different affected sides of stroke. The segmental dynamics parameters describing knee joint function were obtained via Winter's measurements, and the model was generated via Kane's equations of motion. Inertial parameters in the form of anthropometry can be identified and measured by employing standard human dimensions on subjects in a hemiplegic condition. The inertial parameters are the location of the centre of mass (COM) along the length of the limb segment, the moment of inertia about the COM, and the masses of the shank and foot, which are needed to generate accurate equations of motion. This investigation also identified several advantages of employing the anthropometry table of Winter's measurements together with Kane's equations of motion in movement biomechanics. A general procedure is presented to yield accurate estimates of the inertial parameters of the knee joint for subjects with a history of stroke.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated...... EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...... by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated...
Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D
2018-01-15
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
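The left-censoring issue discussed above is why ad hoc substitution (HDL) biases estimates: a value below the detection limit should contribute the normal CDF at the limit to the likelihood, not a density evaluated at an imputed point. A minimal sketch of that censored-normal log-likelihood for a single biomarker; encoding censored values as `None` is our convention, not the paper's.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def censored_normal_loglik(values, detection_limit, mu, sigma):
    """Log-likelihood for normal data left-censored at a detection limit.
    Observed values contribute the log density; censored values (encoded
    here as None) contribute log P(X < limit)."""
    ll = 0.0
    for v in values:
        if v is None:  # below the detection limit
            ll += math.log(normal_cdf((detection_limit - mu) / sigma))
        else:
            ll += (-0.5 * math.log(2.0 * math.pi * sigma ** 2)
                   - (v - mu) ** 2 / (2.0 * sigma ** 2))
    return ll
```

Maximizing this over mu and sigma gives the censoring-aware ML estimates; the MCNR method of the paper extends the same idea to random effects, where the integral over the random effects is approximated by Monte Carlo rather than quadrature.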
Modeling of crack in concrete structures subjected to severe loadings
International Nuclear Information System (INIS)
Nguyen, T.G.
2012-01-01
Concrete is a construction material prevalent throughout the world. However, in many industries it is becoming more common to study the safety margins of a structure with respect to loadings, so it becomes important to predict the failure mode of the structure. Much work has already been done on this subject, leading to operational models in finite-element computer codes. Nevertheless, difficulties remain, mainly related to concrete cracking; they lead to open problems concerning the location, initiation and propagation of cracks. The thesis explores two ways of improving methods for the numerical simulation of crack propagation. The first is the use of the extended finite element method, XFEM: a model of the mechanical behavior of the crack is introduced and leads to a description of crack propagation from one element to another. The second is based on damage mechanics. Within the framework of generalized standard damage models, the localization phenomenon has been studied numerically for various behaviors: viscous or brittle damage. These behaviors are described in the same spirit as the classical laws of visco-elasticity, visco-plasticity or plasticity, from a general thermodynamic interpretation. In particular, gradient damage laws are also considered in conjunction with recent results from the literature. It is well known that a gradient model allows interpreting scale effects in structures under mechanical loading; it also plays an interesting role in strain localization effects. (author)
Directory of Open Access Journals (Sweden)
Bailey, Wayne
2011-01-01
Full Text Available This short article explores whether using a mentoring model supports our Subject Specialist Mentors (SSMs) with their role of mentoring trainees on Initial Teacher Training (ITT) courses. Although there are many mentoring models to choose from, our model is based around mentoring within the Lifelong Learning Sector (LLS), where trainees need support for their subject specialism as well as their generic teaching skills. The main focus is the use of coaching and mentoring skills taking into consideration guiding, supporting and challenging the trainee during the lifetime of the mentor/trainee relationship. The SSMs found that using our model as a tool helped to structure meetings and to ensure that the trainee had the necessary support to enable them to become proficient, competent subject specialist teachers. In conclusion, it was found that there is a need for the use of a model or a framework to help the Subject Specialist Mentor (SSM) with such an important role.
Likelihood devices in spatial statistics
Zwet, E.W. van
1999-01-01
One of the main themes of this thesis is the application to spatial data of modern semi- and nonparametric methods. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments
Boyce, Jessica A; Kuijer, Roeline G
2014-04-01
Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images; whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literacy interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Maintaining symmetry of simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby...... improves precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today...
The behavior of the likelihood ratio test for testing missingness
Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert
2003-01-01
To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can contrast the different missingness mechanisms with each other, e.g., via likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missingness mechanisms
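The mechanics of such a likelihood ratio comparison can be sketched in a few lines. The log-likelihood values below are illustrative, and for one degree of freedom the chi-square p-value has a closed form via the complementary error function:

```python
import math

def likelihood_ratio_test(loglik_restricted, loglik_full):
    """LRT statistic 2*(l_full - l_restricted), referred to a
    chi-square(1) distribution; P(chi2_1 > x) = erfc(sqrt(x/2))."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    return stat, p_value

# Illustrative fits: restricted (e.g. MCAR) vs. full (e.g. MAR) model
stat, p = likelihood_ratio_test(loglik_restricted=-104.2, loglik_full=-101.3)
```

Note that when the compared mechanisms are not regularly nested, the null distribution can deviate from the nominal chi-square, which is precisely the finite-sample behavior the paper investigates.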
Model to Analyze Micro Circular Plate Subjected to Electrostatic Force
Directory of Open Access Journals (Sweden)
Cao Tian-Jie
2013-06-01
Full Text Available In this paper a distributed model with three possible static modes is presented to investigate the behavior of a micro circular plate subjected to electrostatic force and uniform hydrostatic pressure, both before and beyond pull-in. The differential governing equation of the micro circular plate used for the numerical solution of the three modes, in which the singularity at the center of the plate does not occur, is derived from classical thin-plate theory, a Taylor series expansion and Saint-Venant's principle. The numerical solution of the governing equation for each mode reduces mainly to solving for one unknown boundary condition and the applied voltage, which can be obtained with a two-fold bisection method based on the shooting method. The voltage ranges over which the three modes can exist, and the points where transitions occur between the modes, are computed. Combining the numerical solution for the applied voltage in the normal mode with a constrained optimization method, the pull-in voltage and the corresponding pull-in position are obtained automatically. In the examples, the entire mechanical behavior of the circular plate over the operational voltage range is investigated and the effects of different parameters on the pull-in voltage are studied. The obtained results are compared with existing results and good agreement is achieved.
An opportunity cost model of subjective effort and task performance
Kurzban, Robert; Duckworth, Angela; Kable, Joseph W.; Myers, Justus
2013-01-01
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternate explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternate explanations both for the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across subdisciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternate models might be empirically distinguished. PMID:24304775
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.; Qian, L.; Carroll, R. J.
2010-01-01
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks
Composite likelihood estimation of demographic parameters
Directory of Open Access Journals (Sweden)
Garrigan Daniel
2009-11-01
Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
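The computational economy at the heart of composite likelihood — replacing the full likelihood with a product of per-region marginal likelihoods — can be sketched with a toy marginal model. The Poisson placeholder, data, and grid below are illustrative, not the paper's demographic model:

```python
import math

def region_loglik(theta, k):
    # Placeholder marginal model: the count of variants in a region is
    # Poisson(theta); log P(k | theta) = k*log(theta) - theta - log(k!)
    return k * math.log(theta) - theta - math.lgamma(k + 1)

def composite_loglik(theta, regions):
    # Composite likelihood: product of marginal likelihoods over regions,
    # computed as the sum of their logarithms
    return sum(region_loglik(theta, k) for k in regions)

# Maximize the composite likelihood over a coarse parameter grid
regions = [3, 5, 4, 6, 2]
grid = [t / 10 for t in range(1, 101)]
theta_hat = max(grid, key=lambda t: composite_loglik(t, regions))
```

When regions are truly independent the composite likelihood coincides with the full likelihood; the approximation only bites when regions are correlated, which is the price paid for scalability.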
Directory of Open Access Journals (Sweden)
Edwin J. Niklitschek
2016-10-01
Full Text Available Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI < 0.13, SE < 0
Darnaude, Audrey M.
2016-01-01
Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some/all nursery-signatures may need to be also estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011), from four distinct nursery habitats. (Mediterranean lagoons) Artificial nursery-source and mixed-stock datasets were produced considering: five different sampling scenarios where 0–4 lagoons were excluded from the nursery-source dataset and six nursery-signature separation scenarios that simulated data separated 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery-sources were sampled but exhibited large variability among cohorts and increased with the number of non-sampled sources up to BI = 0.24 and SE = 0.11. Bias and variability in baseline signature estimates also increased with the number of non-sampled sources, but tended to be less biased, and more uncertain than mixing proportion ones, across all sampling scenarios (BI nursery signatures improved reliability
Modelling and management of subjective information in a fuzzy setting
Bouchon-Meunier, Bernadette; Lesot, Marie-Jeanne; Marsala, Christophe
2013-01-01
Subjective information is very natural for human beings. It is an issue at the crossroad of cognition, semiotics, linguistics, and psycho-physiology. Its management requires dedicated methods, among which we point out the usefulness of fuzzy and possibilistic approaches and related methods, such as evidence theory. We distinguish three aspects of subjectivity: the first deals with perception and sensory information, including the elicitation of quality assessment and the establishment of a link between physical and perceived properties; the second is related to emotions, their fuzzy nature, and their identification; and the last aspect stems from natural language and takes into account information quality and reliability of information.
Energy Technology Data Exchange (ETDEWEB)
Athron, Peter; Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Gonzalo, Tomas E.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Polish Academy of Sciences, H. Niewodniczanski Institute of Nuclear Physics, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben; Lundberg, Johan [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Dickinson, Hugh [University of Minnesota, Minnesota Institute for Astrophysics, Minneapolis, MN (United States); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); McKay, James [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Mahmoudi, Farvah [Lyon 1 Univ., ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); CERN, Theoretical Physics Department, Geneva (Switzerland); Institut Universitaire de France, Paris (France); Martinez, Gregory D. 
[University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Ripken, Joachim [Max Planck Institute for Solar System Research, Goettingen (Germany); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Sydney, Centre for Translational Data Science, Faculty of Engineering and Information Technologies, School of Physics, Sydney, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Seo, Seon-Hee [Seoul National University, Department of Physics and Astronomy, Seoul (Korea, Republic of); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Wild, Sebastian [DESY, Hamburg (Germany); Collaboration: The GAMBIT Collaboration
2018-02-15
In Ref. (GAMBIT Collaboration: Athron et al., Eur. Phys. J. C. arXiv:1705.07908, 2017) we introduced the global-fitting framework GAMBIT. In this addendum, we describe a new minor version increment of this package. GAMBIT 1.1 includes full support for Mathematica backends, which we describe in some detail here. As an example, we backend SUSYHD (Vega and Villadoro, JHEP 07:159, 2015), which calculates the mass of the Higgs boson in the MSSM from effective field theory. We also describe updated likelihoods in PrecisionBit and DarkBit, and updated decay data included in DecayBit. (orig.)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2018-02-01
In Ref. (GAMBIT Collaboration: Athron et al., Eur. Phys. J. C. arXiv:1705.07908, 2017) we introduced the global-fitting framework GAMBIT. In this addendum, we describe a new minor version increment of this package. GAMBIT 1.1 includes full support for Mathematica backends, which we describe in some detail here. As an example, we backend SUSYHD (Vega and Villadoro, JHEP 07:159, 2015), which calculates the mass of the Higgs boson in the MSSM from effective field theory. We also describe updated likelihoods in PrecisionBit and DarkBit, and updated decay data included in DecayBit.
International Nuclear Information System (INIS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Dal, Lars A.; Gonzalo, Tomas E.; Krislock, Abram; Raklev, Are; Buckley, Andy; Chrzaszcz, Marcin; Conrad, Jan; Edsjoe, Joakim; Farmer, Ben; Lundberg, Johan; Cornell, Jonathan M.; Dickinson, Hugh; Jackson, Paul; White, Martin; Kvellestad, Anders; Savage, Christopher; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; Wild, Sebastian
2018-01-01
In Ref. (GAMBIT Collaboration: Athron et al., Eur. Phys. J. C. arXiv:1705.07908, 2017) we introduced the global-fitting framework GAMBIT. In this addendum, we describe a new minor version increment of this package. GAMBIT 1.1 includes full support for Mathematica backends, which we describe in some detail here. As an example, we backend SUSYHD (Vega and Villadoro, JHEP 07:159, 2015), which calculates the mass of the Higgs boson in the MSSM from effective field theory. We also describe updated likelihoods in PrecisionBit and DarkBit, and updated decay data included in DecayBit. (orig.)
Penalized Maximum Likelihood Estimation for univariate normal mixture distributions
International Nuclear Information System (INIS)
Ridolfi, A.; Idier, J.
2001-01-01
Due to singularities of the likelihood function, the maximum likelihood approach to estimating the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function; in the Bayesian framework, this amounts to incorporating an inverted gamma prior into the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
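A minimal sketch of the penalized EM idea for a two-component univariate mixture, assuming an inverted-gamma(alpha, beta) prior on each variance (data and hyperparameter values are illustrative): the only change to plain EM is the M-step variance update, whose penalty terms keep every variance strictly positive and thereby rule out the degenerate spikes that make the unpenalized likelihood unbounded.

```python
import math

def penalized_em(xs, alpha=2.0, beta=0.5, iters=200):
    """EM for a 2-component 1-D normal mixture with an inverted-gamma
    (alpha, beta) prior on each variance; the M-step becomes a MAP
    update that bounds the variances away from zero."""
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in xs:
            ws = [pi[k] / math.sqrt(2 * math.pi * var[k])
                  * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(ws)
            resp.append([w / s for w in ws])
        # M-step: the inverted-gamma prior adds 2*beta to the numerator
        # and 2*(alpha + 1) to the denominator of the variance update
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            ss = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs))
            var[k] = (ss + 2 * beta) / (nk + 2 * (alpha + 1))
    return pi, mu, var

xs = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.1]
pi, mu, var = penalized_em(xs)
```

Even if a component collapses onto a single data point (ss near zero), its variance stays at least 2*beta / (nk + 2*(alpha + 1)), so no iteration can produce a singular estimate.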
Analysis of Subjective Conceptualizations Towards Collective Conceptual Modelling
DEFF Research Database (Denmark)
Glückstad, Fumiko Kano; Herlau, Tue; Schmidt, Mikkel Nørgaard
2013-01-01
This work is conducted as a preliminary study for a project in which individuals' conceptualizations of domain knowledge will be thoroughly analyzed across 150 subjects from 6 countries. The project aims at investigating how humans' conceptualizations differ according to different types of mother la...
Richards growth model and viability indicators for populations subject to interventions
Directory of Open Access Journals (Sweden)
Selene Loibel
2010-12-01
Full Text Available In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subjected to interventions due to consumption, such as hunting or farming of animals. The model identification allows us to estimate the probability of, or the average time for, the population to reach a certain level. Parameter inference for these models is obtained with the likelihood profile technique as developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or the risk of extinction of autochthonous populations. It is applied to data on the Brazilian beef cattle herd, and the time for the population to reach a certain goal level is investigated.
Phylogenetic analysis using parsimony and likelihood methods.
Yang, Z
1996-02-01
The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were
Transferring and generalizing deep-learning-based neural encoding models across subjects.
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-08-01
Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (i.e., building encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For the proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.
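The prior-refinement step can be caricatured in one dimension: ridge regression shrunk toward prior weights from other subjects rather than toward zero. The function and numbers below are illustrative, not the authors' fMRI pipeline.

```python
def refine(prior_w, xs, ys, lam):
    """MAP update for a one-feature linear encoding model:
    minimize sum_i (y_i - w*x_i)^2 + lam*(w - prior_w)^2,
    i.e. shrink the target subject's weight toward the prior
    learned from other subjects. Closed form for one feature."""
    num = sum(x * y for x, y in zip(xs, ys)) + lam * prior_w
    den = sum(x * x for x in xs) + lam
    return num / den

# With scarce target-subject data the estimate stays near the prior (2.0) ...
w_few = refine(2.0, [1.0], [5.0], lam=100.0)
# ... and with ample data it moves to the target subject's own fit (5.0)
w_many = refine(2.0, [1.0] * 100, [5.0] * 100, lam=1.0)
```

This captures why the approach is data-efficient: the prior dominates when the target subject's data are limited, and is overridden as more data arrive.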
Physical Modelling of Bucket Foundations Subjected to Axial Loading
DEFF Research Database (Denmark)
Vaitkunaite, Evelina
Compared to oil and gas structures, marine renewable energy devices are usually much lighter, operate in shallower waters and are subjected to severe cyclic loading and dynamic excitations. These factors result in different structural behaviours. Bucket foundations are a potentially cost-effective solution for various offshore structures, and not least marine renewables. The present thesis focuses on several critical design problems related to the behaviour of bucket foundations exposed to tensile loading. Among those are the soil-structure interface parameters, tensile loading under various
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
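A toy version of the shared-frequency idea, reduced to a single sinusoid rather than a full harmonic model: each channel contributes its own squared projection (periodogram value) at the candidate frequency, and the channel sum is maximized. The sampling rate, grid, and signals below are illustrative.

```python
import math

def ml_freq(channels, fs, f_grid):
    """Single-sinusoid multi-channel estimator: channels share the
    frequency but have independent amplitudes and phases, so the
    concentrated likelihood is the sum of per-channel periodograms."""
    n_samples = len(channels[0])
    best_f, best_cost = None, -1.0
    for f in f_grid:
        cost = 0.0
        for x in channels:
            c = sum(x[n] * math.cos(2 * math.pi * f * n / fs) for n in range(n_samples))
            s = sum(x[n] * math.sin(2 * math.pi * f * n / fs) for n in range(n_samples))
            cost += c * c + s * s  # squared projection onto the candidate sinusoid
        if cost > best_cost:
            best_f, best_cost = f, cost
    return best_f

fs = 8000
t = range(200)
# Two channels: the same 440 Hz tone with different amplitudes and phases
ch1 = [1.0 * math.cos(2 * math.pi * 440 * n / fs) for n in t]
ch2 = [0.3 * math.cos(2 * math.pi * 440 * n / fs + 1.0) for n in t]
f_hat = ml_freq([ch1, ch2], fs, range(300, 600, 10))
```

Because amplitude and phase are estimated per channel, the estimator tolerates differing signal-to-noise ratios across microphones, which is the property the paper exploits.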
A Predictive Likelihood Approach to Bayesian Averaging
Directory of Open Access Journals (Sweden)
Tomáš Jeřábek
2015-01-01
Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated on historical data covering the domestic economy and the foreign economy, the latter represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models; the equal-weight scheme is used as a simple baseline combination. The results show that optimally combined densities are comparable to the best individual models.
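The predictive-likelihood weighting scheme can be sketched as a softmax over hold-out log predictive scores. The model labels echo the paper's four models, but the scores and the resulting weights are purely illustrative:

```python
import math

# Log predictive likelihoods of the four models on a hold-out window
log_pred = {"BVAR-1": -52.3, "BVAR-2": -51.1, "DSGE": -54.0, "DSGE-VAR": -50.6}

# Weights proportional to the predictive likelihood: softmax of the
# log scores, shifted by the maximum for numerical stability
m = max(log_pred.values())
raw = {k: math.exp(v - m) for k, v in log_pred.items()}
total = sum(raw.values())
weights = {k: r / total for k, r in raw.items()}

def combined_density(x, densities, weights):
    """Linear opinion pool: the combined forecast density is the
    weight-averaged mixture of the individual model densities."""
    return sum(weights[k] * densities[k](x) for k in weights)
```

The equal-weight baseline mentioned in the abstract corresponds to replacing `weights` with 1/4 for every model.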
Damage modelling in concrete subject to sulfate attack
Directory of Open Access Journals (Sweden)
N. Cefis
2014-07-01
Full Text Available In this paper, we consider the mechanical effect of sulfate attack on concrete. The durability analysis of concrete structures in contact with external sulfate solutions requires the definition of a proper diffusion-reaction model, for the computation of the varying sulfate concentration and of the consequent ettringite formation, coupled to a mechanical model for the prediction of swelling and material degradation. In this work, we make use of a two-ion formulation of the reactive-diffusion problem and we propose a bi-phase chemo-elastic damage model aimed at simulating the mechanical response of concrete and apt to be used in structural analyses.
Model reduction of nonlinear systems subject to input disturbances
Ndoye, Ibrahima
2017-07-10
The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
Ego involvement increases doping likelihood.
Ring, Christopher; Kavussanu, Maria
2018-08-01
Achievement goal theory provides a framework to help understand how individuals behave in achievement contexts, such as sport. Evidence concerning the role of motivation in the decision to use banned performance enhancing substances (i.e., doping) is equivocal. The extant literature shows that dispositional goal orientation has been weakly and inconsistently associated with doping intention and use. It is possible that goal involvement, which describes the situational motivational state, is a stronger determinant of doping intention. Accordingly, the current study used an experimental design to examine the effects of goal involvement, manipulated using direct instructions and reflective writing, on doping likelihood in hypothetical situations in college athletes. The ego-involving goal increased doping likelihood compared to no goal and a task-involving goal. The present findings provide the first evidence that ego involvement can sway the decision to use doping to improve athletic performance.
Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.
Kobert, K; Stamatakis, A; Flouri, T
2017-03-01
The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
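The core observation, that identical input data yield identical conditional likelihoods, can be illustrated at the simplest level by collapsing duplicate alignment columns into unique site patterns with multiplicities (the paper's method goes further, detecting repeats per subtree, but the whole-column version below conveys the idea). The example alignment is invented.

```python
from collections import Counter

def compress_sites(alignment):
    """Collapse identical alignment columns so each unique site pattern's
    conditional likelihood is computed once and then weighted by its
    multiplicity, instead of being recomputed per duplicate column."""
    n_sites = len(alignment[0])
    columns = [tuple(seq[i] for seq in alignment) for i in range(n_sites)]
    counts = Counter(columns)
    patterns = list(counts)
    weights = [counts[p] for p in patterns]
    return patterns, weights

# Three aligned sequences with 8 sites; only 4 unique columns remain.
aln = ["ACGTACGA",
       "ACGTACGA",
       "ACTTACTA"]
patterns, weights = compress_sites(aln)
```

The total log-likelihood is then `sum(w * site_loglik(p) for p, w in zip(patterns, weights))`, with `site_loglik` standing in for whatever per-site PLF evaluation the implementation uses.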
K-Means Subject Matter Expert Refined Topic Model Methodology
2017-01-01
computing environment, the Visual Basic for Applications (VBA) programming language presents the option as our programming language of choice. We propose...background, or access to other computational programming environments, to build topic models from free text datasets using a familiar Excel based...environment that restricts access to other software based text analytic tools. Opportunities to deploy developmental versions of the methodology and...
Heat transfer modelling of first walls subject to plasma disruption
International Nuclear Information System (INIS)
Fillo, J.A.; Makowitz, H.
1981-01-01
A brief description of the plasma disruption problem and potential thermal consequences to the first wall is given. Thermal models reviewed include: a) melting of a solid with melt layer in place; b) melting of a solid with complete removal of melt (ablation); c) melting/vaporization of a solid; and d) vaporization of a solid but no phase change affecting the temperature profile
Model reduction of nonlinear systems subject to input disturbances
Ndoye, Ibrahima; Laleg-Kirati, Taous-Meriem
2017-01-01
The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order
Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William
2016-01-01
Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts and slopes, and residual autocorrelation was adequately modeled with a first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed-effect model (AIC 19,352 vs. 19
Scale modeling of reinforced concrete structures subjected to seismic loading
International Nuclear Information System (INIS)
Dove, R.C.
1983-01-01
Reinforced concrete, Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and existing and new methods of seismic analysis benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed
Likelihood analysis of parity violation in the compound nucleus
International Nuclear Information System (INIS)
Bowman, D.; Sharapov, E.
1993-01-01
We discuss the determination of the root-mean-squared matrix element of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element, and of the errors assigned to it, is stressed. We treat the situations where the spins of the p-wave resonances are known and where they are not, using experimental data and Monte Carlo techniques. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.
2010-02-16
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
Wu, L.; Tam, V. H.; Chow, D. S. L.; Putcha, L.
2014-01-01
An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of Space Motion Sickness. The bioavailability and pharmacokinetics (PK) were evaluated under the Food and Drug Administration guidelines for clinical trials with an Investigative New Drug (IND) protocol. The aim of this project was to develop a PK model that can predict the relationship between plasma, saliva and urinary scopolamine concentrations using data collected from the IND clinical trials with INSCOP. Methods: Twelve healthy human subjects were administered three dose levels (0.1, 0.2 and 0.4 mg) of INSCOP. Serial blood, saliva and urine samples were collected between 5 min and 24 h after dosing, and scopolamine concentrations were measured using a validated LC-MS-MS assay. Pharmacokinetic compartmental models, using actual dosing and sampling times, were built using Phoenix (version 1.2). Model selection was based on the likelihood ratio test on the difference of criteria (-2LL) and comparison of the quality-of-fit plots. Results: The best structural model for INSCOP (minimal -2LL = 502.8) was established. It consisted of one compartment each for plasma, saliva and urine, which were connected by linear transport processes except for the nonlinear PK process from the plasma to the saliva compartment. The best-fit estimates of PK parameters from individual PK compartmental analysis and population PK model analysis are shown in Tables 1 and 2, respectively. Conclusion: A population PK model that could predict population and individual PK of scopolamine in plasma, saliva and urine after dosing was developed and validated. Incorporating a nonlinear transfer from the plasma to the saliva compartment resulted in significantly improved model fitting. The model could be used to predict scopolamine plasma concentrations from salivary and urinary drug levels, allowing non-invasive therapeutic monitoring of scopolamine in space and other remote environments.
DEFF Research Database (Denmark)
Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.
The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.
A Model for Subjective Well-Being in Adolescence: Need Satisfaction and Reasons for Living
Eryilmaz, Ali
2012-01-01
Subjective well-being is as important for adolescents as it is in other stages of life. This study thus aims to develop a model for subjective well-being, which is limited to need satisfaction in adolescence and reasons for living, and to test the validity of the model. Participants were a total of 227 individuals, 120 females and 107 males. Data…
An isotonic partial credit model for ordering subjects on the basis of their sum scores
Ligtvoet, R.
2012-01-01
In practice, the sum of the item scores is often used as a basis for comparing subjects. For items that have more than two ordered score categories, only the partial credit model (PCM) and special cases of this model imply that the subjects are stochastically ordered on the common latent variable.
An Isotonic Partial Credit Model for Ordering Subjects on the Basis of Their Sum Scores
Ligtvoet, Rudy
2012-01-01
In practice, the sum of the item scores is often used as a basis for comparing subjects. For items that have more than two ordered score categories, only the partial credit model (PCM) and special cases of this model imply that the subjects are stochastically ordered on the common latent variable. However, the PCM is very restrictive with respect…
Sen, Sedat
2018-01-01
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Employee subjective well-being and physiological functioning: An integrative model.
Kuykendall, Lauren; Tay, Louis
2015-01-01
Research shows that worker subjective well-being influences physiological functioning, an early signal of poor health outcomes. While several theoretical perspectives provide insights on this relationship, the literature lacks an integrative framework explaining it. We develop a conceptual model explaining the link between subjective well-being and physiological functioning in the context of work. Integrating positive psychology and occupational stress perspectives, our model explains the relationship between subjective well-being and physiological functioning as a result of the direct influence of subjective well-being on physiological functioning and of their common relationships with work stress and personal resources, both of which are influenced by job conditions.
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
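The likelihood ratios discussed above follow directly from the sensitivity and specificity covered in the first article of the series, and are applied by converting a pre-test probability to odds, multiplying by the LR, and converting back. The sensitivity, specificity and pre-test probability below are hypothetical values for illustration.

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return (sensitivity / (1 - specificity),
            (1 - sensitivity) / specificity)

def post_test_probability(pre_test_prob, lr):
    """Pre-test probability -> odds, multiply by the LR, back to probability."""
    odds = pre_test_prob / (1 - pre_test_prob) * lr
    return odds / (1 + odds)

# Hypothetical test: sensitivity 90%, specificity 80%.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.80)   # 4.5 and 0.125
p_after_positive = post_test_probability(0.20, lr_pos)
```

With a 20% pre-test probability, a positive result (LR+ = 4.5) raises the probability of disease to about 53%, which is the kind of bedside update these ratios are designed for.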
LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES
DEFF Research Database (Denmark)
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2004-01-01
The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.
Voss, Jesse S; Iqbal, Seher; Jenkins, Sarah M; Henry, Michael R; Clayton, Amy C; Jett, James R; Kipp, Benjamin R; Halling, Kevin C; Maldonado, Fabien
2014-01-01
Studies have shown that fluorescence in situ hybridization (FISH) testing increases lung cancer detection on cytology specimens in peripheral nodules. The goal of this study was to determine whether a predictive model using clinical features and routine cytology with FISH results could predict lung malignancy after a nondiagnostic bronchoscopic evaluation. Patients with an indeterminate peripheral lung nodule that had a nondiagnostic bronchoscopic evaluation were included in this study (N = 220). FISH was performed on residual bronchial brushing cytology specimens diagnosed as negative (n = 195), atypical (n = 16), or suspicious (n = 9). FISH results included hypertetrasomy (n = 30) and negative (n = 190). Primary study end points included lung cancer status along with time to diagnosis of lung cancer or date of last clinical follow-up. Hazard ratios (HRs) were calculated using Cox proportional hazards regression model analyses, and P values < .05 were considered statistically significant. The mean age of the 220 patients was 66.7 years (range, 35-91), and most (58%) were men. Most patients (79%) were current or former smokers with a mean pack year history of 43.2 years (median, 40; range, 1-200). After multivariate analysis, hypertetrasomy FISH (HR = 2.96, P < .001), pack years (HR = 1.03 per pack year up to 50, P = .001), age (HR = 1.04 per year, P = .02), atypical or suspicious cytology (HR = 2.02, P = .04), and nodule spiculation (HR = 2.36, P = .003) were independent predictors of malignancy over time and were used to create a prediction model (C-statistic = 0.78). These results suggest that this multivariate model including test results and clinical features may be useful following a nondiagnostic bronchoscopic examination. © 2013.
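Because Cox model hazard ratios are multiplicative, the independent predictors reported above can be combined into a relative hazard for a given patient profile. The function below does only that arithmetic with the abstract's published HRs; the reference patient and the chosen profile are illustrative assumptions, not part of the published model output.

```python
def relative_hazard(hypertetrasomy, pack_years, years_over_reference_age,
                    atypical_or_suspicious, spiculated):
    """Relative hazard from multiplying the reported hazard ratios.
    The reference age is an illustrative assumption (hypothetical)."""
    hr = 1.0
    if hypertetrasomy:
        hr *= 2.96
    hr *= 1.03 ** min(pack_years, 50)       # HR 1.03 per pack-year, up to 50
    hr *= 1.04 ** years_over_reference_age  # HR 1.04 per year of age
    if atypical_or_suspicious:
        hr *= 2.02
    if spiculated:
        hr *= 2.36
    return hr

# A hypertetrasomy-positive, spiculated nodule with 40 pack-years,
# at the reference age, vs. the reference patient:
hr = relative_hazard(True, 40, 0, False, True)
```

Note that this gives a relative hazard only; an absolute risk would additionally require the baseline hazard, which the abstract does not report.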
The Dimensions of Subjective Well-Being among Black Americans: A Structural Model Analysis.
Tran, Thanh V.; And Others
1994-01-01
Analysis of data from 668 black adult respondents to the 1980 National Survey of Black Americans suggests that subjective well-being among black Americans is multidimensional. A three-factor model of subjective well-being encompassing strain (depressive symptoms), life satisfaction, and self-esteem was empirically supported and consistently…
A Panel Data Model for Subjective Information on Household Income Growth
Das, J.W.M.; van Soest, A.H.O.
1996-01-01
Subjective expectations about future income changes are analyzed, using household panel data. The models used are extensions of existing binary choice panel data models to the case of ordered response. We consider both random and fixed individual effects. The random effects model is estimated by
English language-in-education: A lesson planning model for subject ...
African Journals Online (AJOL)
English language-in-education: A lesson planning model for subject teachers. ... lack of critical academic language skills in English as the Language of Learning and ... process of lesson design and the 'forward' process of lesson presentation.
Evaluation of subject contrast and normalized average glandular dose by semi-analytical models
International Nuclear Information System (INIS)
Tomal, A.; Poletti, M.E.; Caldas, L.V.E.
2010-01-01
In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.
A closed-loop hybrid physiological model relating to subjects under physical stress.
El-Samahy, Emad; Mahfouf, Mahdi; Linkens, Derek A
2006-11-01
The objective of this research study is to derive a comprehensive physiological model relating to subjects under physical stress conditions. The model should describe the behaviour of the cardiovascular system, respiratory system, thermoregulation and brain activity in response to physical workload. An experimental testing rig was built which consists of a recumbent high-performance bicycle for inducing the physical load and a data acquisition system comprising monitors and PCs. The signals acquired and used within this study are the blood pressure, heart rate, respiration, body temperature, and EEG signals. The proposed model is based on a grey-box modelling approach, which was used because of the sufficient level of detail it provides. Cardiovascular and EEG data relating to 16 healthy subject volunteers (data from 12 subjects were used for training/validation and data from 4 subjects were used for model testing) were collected using the Finapres and the ProComp+ monitors. For model validation, residual analysis via the computing of the confidence intervals as well as related histograms was performed. Closed-loop simulations for different subjects showed that the model can provide reliable predictions for heart rate, blood pressure, body temperature, respiration, and the EEG signals. These findings were also reinforced by the residual analyses data obtained, which suggested that the residuals were within the 90% confidence bands and that the corresponding histograms were of a normal distribution. A higher intelligent level was added to the model, based on neural networks, to extend the capabilities of the model to predict over a wide range of subject dynamics. The elicited physiological model describing the effect of physiological stress on several physiological variables can be used to predict performance breakdown of operators in critical environments. Such a model architecture lends itself naturally to exploitation via feedback control in a 'reverse
International Nuclear Information System (INIS)
Piepel, Gregory F.; Heredia-Langner, Alejandro; Cooley, Scott K.
2008-01-01
Properties such as viscosity and electrical conductivity of glass melts are functions of melt temperature as well as glass composition. When measuring such a property for several glasses, the property is typically measured at several temperatures for one glass, then at several temperatures for the next glass, and so on. This data-collection process involves a restriction on randomization, which is referred to as a split-plot experiment. The split-plot data structure must be accounted for in developing property-composition-temperature models and the corresponding uncertainty equations for model predictions. Instead of ordinary least squares (OLS) regression methods, generalized least squares (GLS) regression methods using restricted maximum likelihood (REML) estimation must be used. This article describes the methodology for developing property-composition-temperature models and corresponding prediction uncertainty equations using the GLS/REML regression approach. Viscosity data collected on 197 simulated nuclear waste glasses are used to illustrate the GLS/REML methods for developing a viscosity-composition-temperature model and corresponding equations for model prediction uncertainties. The correct results using GLS/REML regression are compared to the incorrect results obtained using OLS regression
DEFF Research Database (Denmark)
Vahidi, O; Kwok, K E; Gopaluni, R B
2016-01-01
We have expanded a former compartmental model of blood glucose regulation for healthy and type 2 diabetic subjects. The former model was a detailed physiological model which considered the interactions of three substances, glucose, insulin and glucagon, in regulating the blood sugar. The main... variations of blood glucose concentrations following an oral glucose intake. Another model representing the incretins production in the gastrointestinal tract along with their hormonal effects on boosting pancreatic insulin production is also added to the former model. We have used two sets of clinical data obtained during oral glucose tolerance tests and isoglycemic intravenous glucose infusion tests from both type 2 diabetic and healthy subjects to estimate the model parameters and to validate the model results. The estimation of model parameters is accomplished through solving a nonlinear optimization...
Obtaining reliable likelihood ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
2014-01-01
Mixed models: Models allowing for continuous heterogeneity by assuming that the value of one or more parameters follows a specified distribution have become increasingly popular. This is known as ‘mixing’ parameters, and it is standard practice by researchers - and the default option in many statistic...
The Laplace Likelihood Ratio Test for Heteroscedasticity
Directory of Open Access Journals (Sweden)
J. Martin van Zyl
2011-01-01
It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
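The test described above is simple to reproduce. The sketch below, with made-up data, computes the likelihood-ratio statistic for a common Laplace scale across groups, using the Laplace maximum-likelihood estimates (location = median, scale = mean absolute deviation from the median):

```python
import math
from statistics import median

def laplace_scale(xs):
    """Laplace MLE of the scale: mean absolute deviation from the median."""
    m = median(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def lr_heteroscedasticity(groups):
    """Likelihood-ratio statistic for H0: common Laplace scale across
    groups (each group keeps its own median).  Asymptotically chi-square
    with len(groups) - 1 degrees of freedom."""
    n_total = sum(len(g) for g in groups)
    # Pooled scale under H0: absolute deviations from each group's median.
    b0 = sum(sum(abs(x - median(g)) for x in g) for g in groups) / n_total
    # 2 * (loglik_H1 - loglik_H0) simplifies to a sum of log scale ratios.
    return sum(2.0 * len(g) * math.log(b0 / laplace_scale(g)) for g in groups)

# Group b has roughly five times the spread of group a (synthetic data).
a = [0.1, -0.3, 0.2, 0.0, -0.1, 0.4, -0.2, 0.1, -0.4, 0.3]
b = [1.5, -2.0, 0.8, -1.1, 2.3, -0.7, 1.9, -1.6, 0.5, -2.4]
stat = lr_heteroscedasticity([a, b])
print(f"LR statistic: {stat:.2f}")  # compare to the chi-square(1) 5% cutoff 3.84
```

For two groups, the statistic is referred to a chi-square distribution with one degree of freedom; the clearly heteroscedastic data above comfortably exceed the 5% critical value.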
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
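A minimal sketch of the idea behind such an identifier, assuming a Gaussian measurement model and a one-parameter exponential-decay system (both stand-ins for the report's general nonlinear dynamics): the likelihood function is evaluated from the model-measurement mismatch and maximized over the unknown parameter, here by brute-force grid search rather than the program's actual optimizer.

```python
import math
import random

random.seed(0)

# True system: dx/dt = -a * x with x(0) = 1, observed with Gaussian noise.
a_true, sigma = 0.5, 0.02
times = [0.5 * k for k in range(20)]
obs = [math.exp(-a_true * t) + random.gauss(0.0, sigma) for t in times]

def neg_log_likelihood(a):
    """Gaussian measurement model: the NLL is, up to constants, the
    sum of squared residuals scaled by the noise variance."""
    sse = sum((y - math.exp(-a * t)) ** 2 for t, y in zip(times, obs))
    return sse / (2.0 * sigma ** 2)

# Maximize the likelihood function by minimizing the NLL over a grid
# (a real identifier would use a proper numerical optimizer).
a_hat = min((0.01 * i for i in range(1, 200)), key=neg_log_likelihood)
print(f"identified parameter a = {a_hat:.2f} (true value {a_true})")
```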
Maximum likelihood estimation of the attenuated ultrasound pulse
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
Robust Gaussian Process Regression with a Student-t Likelihood
Jylänki, P.P.; Vanhatalo, J.; Vehtari, A.
2011-01-01
This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference which is why several approximative methods have
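The robustness the paper exploits can be seen with a toy location problem: a Student-t likelihood discounts a gross outlier that drags the Gaussian estimate away. This sketch uses a grid search over the (possibly multimodal) t likelihood; the actual GP implementations discussed in the paper rely on approximate inference such as EP or Laplace approximations, not grid search, and the degrees-of-freedom and scale values below are arbitrary.

```python
import math

# Toy data with one gross outlier at 20.
data = [-1.0, -0.5, 0.0, 0.5, 1.0, 20.0]

def t_neg_log_lik(mu, nu=3.0, scale=1.0):
    """Negative log-likelihood of a Student-t observation model with
    location mu; constant terms are dropped (only mu-dependence matters)."""
    return sum(0.5 * (nu + 1.0) * math.log(1.0 + ((x - mu) / scale) ** 2 / nu)
               for x in data)

# Gaussian MLE of location is the sample mean: dragged toward the outlier.
mu_gauss = sum(data) / len(data)

# Student-t MLE via a coarse global grid search over [-3, 21).
mu_t = min((0.01 * i for i in range(-300, 2100)), key=t_neg_log_lik)

print(f"Gaussian location: {mu_gauss:.2f}, Student-t location: {mu_t:.2f}")
```

The heavy-tailed likelihood leaves the location estimate near the bulk of the data, which is exactly why the non-log-concave t model is worth the extra inferential effort.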
Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting
Jungbacker, B.M.J.P.; Koopman, S.J.
2015-01-01
We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to
Composite likelihood and two-stage estimation in family studies
DEFF Research Database (Denmark)
Andersen, Elisabeth Anne Wreford
2004-01-01
In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...
An improved likelihood model for eye tracking
DEFF Research Database (Denmark)
Hammoud, Riad I.; Hansen, Dan Witzner
2007-01-01
While existing eye detection and tracking algorithms can work reasonably well in a controlled environment, they tend to perform poorly under real world imaging conditions where the lighting produces shadows and the person's eyes can be occluded by e.g. glasses or makeup. As a result, pixel clusters...... associated with the eyes tend to be grouped together with background-features. This problem occurs both for eye detection and eye tracking. Problems that especially plague eye tracking include head movement, eye blinking and light changes, all of which can cause the eyes to suddenly disappear. The usual...... approach in such cases is to abandon the tracking routine and re-initialize eye detection. Of course this may be a difficult process due to missed data problem. Accordingly, what is needed is an efficient method of reliably tracking a person's eyes between successively produced video image frames, even...
Employee subjective well-being and physiological functioning: An integrative model
Directory of Open Access Journals (Sweden)
Lauren Kuykendall
2015-06-01
Research shows that worker subjective well-being influences physiological functioning, an early signal of poor health outcomes. While several theoretical perspectives provide insights on this relationship, the literature lacks an integrative framework explaining the relationship. We develop a conceptual model explaining the link between subjective well-being and physiological functioning in the context of work. Integrating positive psychology and occupational stress perspectives, our model explains the relationship between subjective well-being and physiological functioning as a result of the direct influence of subjective well-being on physiological functioning and of their common relationships with work stress and personal resources, both of which are influenced by job conditions.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
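The mirror effect follows directly from a likelihood-ratio criterion in the equal-variance Gaussian model, as a short sketch shows (the d' values are arbitrary):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rates(d_prime):
    """Hit and false-alarm rates for equal-variance Gaussian recognition:
    new ~ N(0, 1), old ~ N(d', 1), criterion placed where the likelihood
    ratio equals 1, i.e. at x = d'/2 (the midpoint between the means)."""
    c = d_prime / 2.0
    hit = 1.0 - phi(c - d_prime)   # P(respond "old" | old item)
    fa = 1.0 - phi(c)              # P(respond "old" | new item)
    return hit, fa

hit_weak, fa_weak = rates(1.0)      # weak-memory condition
hit_strong, fa_strong = rates(2.0)  # strong-memory condition
print(hit_weak, fa_weak, hit_strong, fa_strong)
```

Strengthening memory moves hits up and false alarms down together (the mirror effect), and hit + false-alarm = 1 here because the LR = 1 criterion sits symmetrically between the two distributions.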
Hysteretic MDOF Model to Quantify Damage for RC Shear Frames Subject to Earthquakes
DEFF Research Database (Denmark)
Köylüoglu, H. Ugur; Nielsen, Søren R.K.; Cakmak, Ahmet S.
A hysteretic mechanical formulation is derived to quantify local, modal and overall damage in reinforced concrete (RC) shear frames subject to seismic excitation. Each interstorey is represented by a Clough and Johnston (1966) hysteretic constitutive relation with degrading elastic fraction of th...... shear frame is subject to simulated earthquake excitations, which are modelled as a stationary Gaussian stochastic process with Kanai-Tajimi spectrum, multiplied by an envelope function. The relationship between local, modal and overall damage indices is investigated statistically....
Essays on empirical likelihood in economics
Gao, Z.
2012-01-01
This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.
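The mathematical program the thesis refers to can be sketched for the simplest case, the mean of a sample: maximize the product of probability weights subject to the moment constraint, which reduces to a one-dimensional root-find for a Lagrange multiplier. The data below are made up.

```python
import math

def el_log_ratio(xs, mu, tol=1e-12):
    """-2 log empirical-likelihood ratio for H0: E[X] = mu.
    Solves for the Lagrange multiplier by bisection; requires mu to lie
    strictly inside the range of the data."""
    n = len(xs)
    d = [x - mu for x in xs]
    if not (min(d) < 0.0 < max(d)):
        return float("inf")  # mu outside the convex hull of the data
    # Weights p_i = 1 / (n * (1 + lam * d_i)) must stay positive, which
    # confines lam to an open interval; g(lam) is decreasing on it.
    lo = -1.0 / max(d) + 1e-10
    hi = -1.0 / min(d) - 1e-10
    g = lambda lam: sum(di / (1.0 + lam * di) for di in d)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

xs = [1.2, 0.7, 2.3, 1.9, 0.4, 1.5, 2.8, 1.1, 0.9, 1.6]
mean = sum(xs) / len(xs)
print(el_log_ratio(xs, mean))   # ~0: the sample mean maximizes the EL
print(el_log_ratio(xs, 2.0))    # > 0; compare to chi-square(1) quantiles
```

By Owen's theorem, the statistic is asymptotically chi-square with one degree of freedom, so it supports tests and confidence intervals without a parametric model.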
Modeling the evolution of natural cliffs subject to weathering. 1, Limit analysis approach
Utili, Stefano; Crosta, Giovanni B.
2011-01-01
Retrogressive landsliding evolution of natural slopes subjected to weathering has been modeled by assuming Mohr-Coulomb material behavior and by using an analytical method. The case of weathering-limited slope conditions, with complete erosion of the accumulated debris, has been modeled. The limit analysis upper-bound method is used to study slope instability induced by a homogeneous decrease of material strength in space and time. The only assumption required in the model concerns the degree...
Vahidi, O; Kwok, K E; Gopaluni, R B; Knop, F K
2016-09-01
We have expanded a former compartmental model of blood glucose regulation for healthy and type 2 diabetic subjects. The former model was a detailed physiological model which considered the interactions of three substances, glucose, insulin and glucagon on regulating the blood sugar. The main drawback of the former model was its restriction on the route of glucose entrance to the body which was limited to the intravenous glucose injection. To handle the oral glucose intake, we have added a model of glucose absorption in the gastrointestinal tract to the former model to address the resultant variations of blood glucose concentrations following an oral glucose intake. Another model representing the incretins production in the gastrointestinal tract along with their hormonal effects on boosting pancreatic insulin production is also added to the former model. We have used two sets of clinical data obtained during oral glucose tolerance test and isoglycemic intravenous glucose infusion test from both type 2 diabetic and healthy subjects to estimate the model parameters and to validate the model results. The estimation of model parameters is accomplished through solving a nonlinear optimization problem. The results show acceptable precision of the estimated model parameters and demonstrate the capability of the model in accurate prediction of the body response during the clinical studies.
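The estimation step described above, fitting model parameters by nonlinear optimization against clinical samples, can be sketched with a deliberately tiny stand-in: a one-compartment glucose decay with a single rate parameter p1, fit by grid search over the sum of squared errors. The actual model has many compartments and parameters, and all constants below are hypothetical.

```python
def simulate(p1, gb=5.0, g0=12.0, dt=0.1, t_end=120.0):
    """Euler integration of dG/dt = -p1 * (G - Gb): glucose G (mmol/L)
    relaxing toward a basal level Gb, with t in minutes.  A stand-in for
    the paper's multi-compartment physiological model."""
    g, traj = g0, []
    for k in range(int(t_end / dt) + 1):
        traj.append((k * dt, g))
        g += dt * (-p1 * (g - gb))
    return traj

def sample(traj, t):
    """Simulated value closest to a clinical sampling time t."""
    return min(traj, key=lambda point: abs(point[0] - t))[1]

# Synthetic "clinical" samples generated with p1 = 0.03 per minute.
truth = simulate(0.03)
t_obs = [0.0, 10.0, 20.0, 30.0, 45.0, 60.0, 90.0, 120.0]
y_obs = [sample(truth, t) for t in t_obs]

def sse(p1):
    traj = simulate(p1)
    return sum((y - sample(traj, t)) ** 2 for t, y in zip(t_obs, y_obs))

# Nonlinear parameter estimation by brute-force search over p1.
p1_hat = min((0.001 * i for i in range(1, 101)), key=sse)
print(f"estimated p1 = {p1_hat:.3f} per minute")
```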
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait.
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E; Del-Ama, Antonio J; Dimbwadyo, Iris; Moreno, Juan C; Florez, Julian; Pons, Jose L
2018-01-01
The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represents an improvement of up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton.
Corazza, Stefano; Gambaretto, Emiliano; Mündermann, Lars; Andriacchi, Thomas P
2010-04-01
A novel approach for the automatic generation of a subject-specific model consisting of morphological and joint location information is described. The aim is to address the need for efficient and accurate model generation for markerless motion capture (MMC) and biomechanical studies. The algorithm applied and expanded previous work on human shape spaces by embedding location information for ten joint centers in a subject-specific free-form surface. The optimal locations of joint centers in the 3-D mesh were learned through linear regression over a set of nine subjects whose joint centers were known. The model was shown to be sufficiently accurate for both kinematic (joint centers) and morphological (shape of the body) information to allow accurate tracking with MMC systems. The automatic model generation algorithm was applied to 3-D meshes of different quality and resolution, such as laser scans and visual hulls. The complete method was tested using nine subjects of different gender, body mass index (BMI), age, and ethnicity. Experimental training error and cross-validation errors were 19 and 25 mm, respectively, on average over the joints of the ten subjects analyzed in the study.
Modelling and subject-specific validation of the heart-arterial tree system.
Guala, Andrea; Camporeale, Carlo; Tosello, Francesco; Canuto, Claudio; Ridolfi, Luca
2015-01-01
A modeling approach integrated with a novel subject-specific characterization is proposed here for the assessment of hemodynamic values of the arterial tree. A 1D model is adopted to characterize large-to-medium arteries, while the left ventricle, aortic valve and distal micro-circulation sectors are described by lumped submodels. A new velocity profile and a new formulation of the non-linear viscoelastic constitutive relation suitable for the {Q, A} modeling are also proposed. The model is first verified semi-quantitatively against literature data. A simple but effective procedure for obtaining subject-specific model characterization from non-invasive measurements is then designed. A detailed subject-specific validation against in vivo measurements from a population of six healthy young men is also performed. Several key quantities of heart dynamics (mean ejected flow, ejection fraction, and left-ventricular end-diastolic, end-systolic and stroke volumes) and the pressure waveforms (at the central, radial, brachial, femoral, and posterior tibial sites) are compared with measured data. Mean errors around 5 and 8%, obtained for the heart and arterial quantities, respectively, testify to the effectiveness of the model and its subject-specific characterization.
Are subject-specific musculoskeletal models robust to the uncertainties in parameter identification?
Directory of Open Access Journals (Sweden)
Giordano Valente
Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force
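The Monte-Carlo perturbation idea can be sketched independently of OpenSim with a toy static joint model; all parameter values and uncertainties below are hypothetical placeholders, not quantities from the study above.

```python
import random
import statistics

random.seed(7)

def contact_force(moment_arm, external_moment, bw=750.0):
    """Toy static equilibrium: the muscle moment balances the external
    joint moment; the joint contact force (in body weights, BW) adds the
    required muscle force to half the body weight carried by the joint."""
    muscle_force = external_moment / moment_arm    # N
    return (muscle_force + 0.5 * bw) / bw

# Nominal parameters and assumed identification uncertainties.
nominal_arm, sd_arm = 0.04, 0.004                  # m (10% SD)
nominal_mom, sd_mom = 60.0, 3.0                    # N*m (5% SD)

# Monte-Carlo analysis: 500 perturbed models, mirroring the study design.
samples = [contact_force(random.gauss(nominal_arm, sd_arm),
                         random.gauss(nominal_mom, sd_mom))
           for _ in range(500)]

print(f"joint contact force: {statistics.mean(samples):.2f} BW "
      f"+/- {statistics.stdev(samples):.2f} BW "
      f"(range {max(samples) - min(samples):.2f} BW)")
```

The spread of the output distribution (standard deviation and range, in body weights) is exactly the sensitivity summary the abstract reports for its muscle and joint contact forces.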
Thermal analysis of dry eye subjects and the thermal impulse perturbation model of ocular surface.
Zhang, Aizhong; Maki, Kara L; Salahura, Gheorghe; Kottaiyan, Ranjini; Yoon, Geunyoung; Hindman, Holly B; Aquavella, James V; Zavislan, James M
2015-03-01
In this study, we explore the use of ocular surface temperature (OST) decay patterns to distinguish between dry eye patients with aqueous-deficient dry eye (ADDE) and meibomian gland dysfunction (MGD). The OST profiles of 20 dry eye subjects were measured by a long-wave infrared thermal camera in a standardized environment (24 °C, 40% relative humidity (RH)). The subjects were instructed to blink every 5 s after 20-25 min of acclimation. Exponential decay curves were fit to the average temperature within a region of the central cornea. We find that the MGD subjects have a higher initial temperature, and we describe the decay with a new model, referred to as the thermal impulse perturbation (TIP) model. We conclude that long-wave infrared thermal imaging is a plausible tool for assisting with the classification of dry eye patients.
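The exponential-decay fit used for such cooling curves is straightforward when the asymptotic (ambient) temperature is treated as known, since the model then linearizes in log space. The sketch below uses synthetic, noiseless data with made-up constants:

```python
import math

# Synthetic post-blink cooling: T(t) = T_amb + dT * exp(-t / tau).
# All constants are made-up stand-ins for a measured corneal cooling curve.
t_amb, d_t, tau_true = 24.0, 10.0, 4.0         # deg C, deg C, seconds
times = [0.5 * k for k in range(11)]           # samples over a 5 s interval
temps = [t_amb + d_t * math.exp(-t / tau_true) for t in times]

# With the ambient temperature known, the decay linearizes:
# ln(T - T_amb) = ln(dT) - t / tau, so tau is minus one over the slope.
ys = [math.log(T - t_amb) for T in temps]
n = len(times)
tbar, ybar = sum(times) / n, sum(ys) / n
slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
         / sum((t - tbar) ** 2 for t in times))
tau_hat = -1.0 / slope
print(f"fitted time constant: {tau_hat:.2f} s")
```

The fitted time constant (and the initial temperature) are the kind of decay parameters the study compares between the ADDE and MGD groups.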
Vector model for mapping of visual space to subjective 4-D sphere
International Nuclear Information System (INIS)
Matuzevicius, Dalius; Vaitkevicius, Henrikas
2014-01-01
Here we present a mathematical model of binocular vision that maps the visible physical world to a subjective perception of it. The subjective space is a set of 4-D vectors whose components are outputs of four monocular neurons from each of the two eyes. Monocular neurons have one of four types of concentric receptive fields with Gabor-like weighting coefficients. This vector representation of binocular vision is then implemented as a pool of neurons, each selective to a particular location of an object in 3-D visual space. Formally, each point of the visual space is projected onto a 4-D sphere. The proposed model allows determination of subjective distances in depth and direction, provides computational means for determining Panum's area, and explains diplopia and allelotropia.
Likelihood Analysis of Supersymmetric SU(5) GUTs
Bagnaschi, E.
2017-01-01
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\\mathbf{5}$ and $\\mathbf{\\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\\tan \\beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...
Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait
Carbone, V.; Krogt, M.M. van der; Koopman, H.F.J.M.; Verdonschot, N.J.
2016-01-01
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of
Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait
Carbone, Vincenzo; van der Krogt, Marjolein; Koopman, Hubertus F.J.M.; Verdonschot, Nicolaas Jacobus Joseph
2016-01-01
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle–tendon (MT) model parameters for each of
Likelihood-based inference for clustered line transect data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus; Schweder, Tore
2006-01-01
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...
Likelihood-based inference for clustered line transect data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus Plenge; Schweder, Tore
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...
Mackinnon, Sean P; Kehayes, Ivy-Lee L; Leonard, Kenneth E; Fraser, Ronald; Stewart, Sherry H
2017-06-01
Partner-specific perfectionistic concerns (PC) include concern over mistakes, self-criticism, and socially prescribed perfectionism as it pertains to one's partner. The social disconnection model proposes that PC influences well-being indirectly through interpersonal problems. Thus, we hypothesized that social negativity (expressed anger, hostility, and rejection) would mediate the relationship between dyadic PC and subjective well-being. Data from 203 romantic dyads (92.1% heterosexual) were collected using self-report surveys and a four-wave, 4-week longitudinal design. Participants were predominantly female (53.1%), young (M = 22.69 years), and Caucasian (82.3%). Data were analyzed using an actor-partner interdependence model with multilevel structural equation modeling. There were significant actor effects at the between-subjects and within-subjects levels, and significant partner effects for the relationship between PC and social negativity at the within-subject level. Social negativity mediated the relationships between PC and both negative affect and life satisfaction. However, positive affect was more weakly related to PC and social negativity. The social disconnection model was supported. PC was positively associated with one's own social negativity and evoked hostile behaviors from one's partner. Hostile, rejecting behaviors reduced the well-being of the actor, but not the partner. Results suggest perfectionism may be best understood within an interpersonal context.
Sensitivity of subject-specific models to errors in musculo-skeletal geometry
Carbone, V.; van der Krogt, M.M.; Koopman, H.F.J.M.; Verdonschot, N.
2012-01-01
Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in
Schepers, J.J.L.; Wetzels, M.G.M.
2007-01-01
We conducted a quantitative meta-analysis of previous research on the technology acceptance model (TAM) in an attempt to make well-grounded statements on the role of subjective norm. Furthermore, we compared TAM results by taking into account moderating effects of one individual-related factor (type
Quantifying functional connectivity in multi-subject fMRI data using component models
DEFF Research Database (Denmark)
Madsen, Kristoffer Hougaard; Churchill, Nathan William; Mørup, Morten
2017-01-01
of functional connectivity, evaluated on both simulated and experimental resting-state fMRI data. It was demonstrated that highly flexible subject-specific component subspaces, as well as very constrained average models, are poor predictors of whole-brain functional connectivity, whereas the best...
Gait kinematics of subjects with ankle instability using a multisegmented foot model.
De Ridder, Roel; Willems, Tine; Vanrenterghem, Jos; Robinson, Mark; Pataky, Todd; Roosen, Philip
2013-11-01
Many patients who sustain an acute lateral ankle sprain develop chronic ankle instability (CAI). Altered ankle kinematics have been reported to play a role in the underlying mechanisms of CAI. In previous studies, however, the foot was modeled as one rigid segment, ignoring the complexity of the ankle and foot anatomy and kinematics. The purpose of this study was to evaluate stance phase kinematics of subjects with CAI, copers, and controls during walking and running using both a rigid and a multisegmented foot model. Foot and ankle kinematics of 77 subjects (29 subjects with self-reported CAI, 24 copers, and 24 controls) were measured during barefoot walking and running using a rigid foot model and a six-segment Ghent Foot Model. Data were collected on a 20-m-long instrumented runway embedded with a force plate and a six-camera optoelectronic system. Groups were compared using statistical parametric mapping. Both the CAI and the coper groups showed similar differences during midstance and late stance compared with the control group: the rear foot segment showed a more everted position during walking. Based on the Ghent Foot Model, the rear foot also showed a more everted position during running. The medial forefoot showed a more inverted position for both running and walking compared with the control group. Our study revealed significant midstance and late stance differences in rigid foot, rear foot, and medial forefoot kinematics. The multisegmented foot model demonstrated intricate behavior of the foot that is not detectable with rigid foot modeling. Further research using these models is necessary to expand knowledge of foot kinematics in subjects with CAI.
nmsBuilder: Freeware to create subject-specific musculoskeletal models for OpenSim.
Valente, Giordano; Crimi, Gianluigi; Vanella, Nicola; Schileo, Enrico; Taddei, Fulvia
2017-12-01
Musculoskeletal modeling and simulations of movement have been increasingly used in orthopedic and neurological scenarios, with increased attention to subject-specific applications. In general, musculoskeletal modeling applications have been facilitated by the development of dedicated software tools; however, subject-specific studies have been limited also by time-consuming modeling workflows and high skilled expertise required. In addition, no reference tools exist to standardize the process of musculoskeletal model creation and make it more efficient. Here we present a freely available software application, nmsBuilder 2.0, to create musculoskeletal models in the file format of OpenSim, a widely-used open-source platform for musculoskeletal modeling and simulation. nmsBuilder 2.0 is the result of a major refactoring of a previous implementation that moved a first step toward an efficient workflow for subject-specific model creation. nmsBuilder includes a graphical user interface that provides access to all functionalities, based on a framework for computer-aided medicine written in C++. The operations implemented can be used in a workflow to create OpenSim musculoskeletal models from 3D surfaces. A first step includes data processing to create supporting objects necessary to create models, e.g. surfaces, anatomical landmarks, reference systems; and a second step includes the creation of OpenSim objects, e.g. bodies, joints, muscles, and the corresponding model. We present a case study using nmsBuilder 2.0: the creation of an MRI-based musculoskeletal model of the lower limb. The model included four rigid bodies, five degrees of freedom and 43 musculotendon actuators, and was created from 3D surfaces of the segmented images of a healthy subject through the modeling workflow implemented in the software application. We have presented nmsBuilder 2.0 for the creation of musculoskeletal OpenSim models from image-based data, and made it freely available via nmsbuilder
Directory of Open Access Journals (Sweden)
Kang-Wook Lee
2017-05-01
An important issue for international businesses and academia is selecting countries in which to expand in order to achieve entrepreneurial sustainability. This study develops a country selection model for sustainable construction businesses using both objective and subjective information. The objective information consists of 14 variables related to country risk and project performance in 32 countries over 25 years. This hybrid model applies subjective weighting from industrial experts to objective information using a fuzzy LinPreRa-based Analytic Hierarchy Process. The hybrid model yields a more accurate country selection than a purely objective information-based model in countries where the companies have experience. Interestingly, in countries without such experience, the hybrid model's predictions sometimes differ from purely subjective opinions, which implies that expert opinion is not always reliable. In addition, feedback from five experts in top international companies is used to validate the model's completeness, effectiveness, generality, and applicability. The model is expected to aid decision makers in selecting better candidate countries that lead to sustainable business success.
Experiential Learning Model on Entrepreneurship Subject to Improve Students’ Soft Skills
Directory of Open Access Journals (Sweden)
Lina Rifda Naufalin
2016-06-01
This research aims to improve students' soft skills in an entrepreneurship subject by using an experiential learning model. It was expected that the learning model could upgrade students' soft skills, as indicated by higher confidence, result and job orientation, courage to take risks, leadership, originality, and future orientation. It was a classroom action research study using Kemmis and McTaggart's design model, conducted over two cycles. The subjects of the study were economics education students in the 2015/2016 academic year. Findings show that the experiential learning model could improve students' soft skills: the dimension of confidence increased by 52.1%, result orientation by 22.9%, courage to take risks by 10.4%, leadership by 12.5%, originality by 10.4%, and future orientation by 18.8%. It can be concluded that the experiential learning model is an effective model for improving students' soft skills in an entrepreneurship subject. The confidence dimension showed the largest gain. Students' soft skills are shaped through continuous stimulus as they become involved in the implementation.
Gestalt isomorphism and the primacy of subjective conscious experience: a Gestalt Bubble model.
Lehar, Steven
2003-08-01
A serious crisis is identified in theories of neurocomputation, marked by a persistent disparity between the phenomenological or experiential account of visual perception and the neurophysiological level of description of the visual system. In particular, conventional concepts of neural processing offer no explanation for the holistic global aspects of perception identified by Gestalt theory. The problem is paradigmatic and can be traced to contemporary concepts of the functional role of the neural cell, known as the Neuron Doctrine. In the absence of an alternative neurophysiologically plausible model, I propose a perceptual modeling approach, to model the percept as experienced subjectively, rather than modeling the objective neurophysiological state of the visual system that supposedly subserves that experience. A Gestalt Bubble model is presented to demonstrate how the elusive Gestalt principles of emergence, reification, and invariance can be expressed in a quantitative model of the subjective experience of visual consciousness. That model in turn reveals a unique computational strategy underlying visual processing, which is unlike any algorithm devised by man, and certainly unlike the atomistic feed-forward model of neurocomputation offered by the Neuron Doctrine paradigm. The perceptual modeling approach reveals the primary function of perception as that of generating a fully spatial virtual-reality replica of the external world in an internal representation. The common objections to this "picture-in-the-head" concept of perceptual representation are shown to be ill founded.
Instantaneous Metabolic Cost of Walking: Joint-Space Dynamic Model with Subject-Specific Heat Rate.
Directory of Open Access Journals (Sweden)
Dustyn Roberts
Full Text Available A subject-specific model of instantaneous cost of transport (ICOT) is introduced from the joint-space formulation of metabolic energy expenditure using the laws of thermodynamics and the principles of multibody system dynamics. Work and heat are formulated in generalized coordinates as functions of joint kinematic and dynamic variables. Generalized heat rates mapped from muscle energetics are estimated from experimental walking metabolic data for the whole body, including upper-body and bilateral data synchronization. Identified subject-specific energetic parameters (mass, height, estimated maximum oxygen uptake, and estimated maximum joint torques) are incorporated into the heat rate, as opposed to the traditional in vitro and subject-invariant muscle parameters. The total model metabolic energy expenditure values are within 5.7 ± 4.6% error of the measured values with strong (R2 > 0.90) inter- and intra-subject correlations. The model reliably predicts the characteristic convexity and magnitudes (0.326-0.348) of the experimental total COT (0.311-0.358) across different subjects and speeds. The ICOT as a function of time provides insights into gait energetic causes and effects (e.g., normalized comparison and sensitivity with respect to walking speed and phase-specific COT), which are unavailable from conventional metabolic measurements or muscle models. Using the joint-space variables from commonly measured or simulated data, the models enable real-time and phase-specific evaluations of transient or non-periodic general tasks that use a range of the aerobic energy pathway similar to that of steady-state walking.
Maximum likelihood convolutional decoding (MCD) performance due to system losses
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
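The averaging step described above can be sketched numerically. The fragment below averages an ideal-reference bit error curve over a Tikhonov-distributed carrier phase error; the uncoded BPSK error function, the loop SNR value, and the grid size are illustrative assumptions, not the MCD computational model from the report.

```python
from math import erfc, sqrt

import numpy as np

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_noisy_reference(ebno_db, loop_snr=100.0, n=4001):
    """Average an ideal-reference bit error curve over a Tikhonov
    carrier phase-error density (illustrative uncoded BPSK only)."""
    ebno = 10.0 ** (ebno_db / 10.0)
    phi = np.linspace(-np.pi, np.pi, n)
    weights = np.exp(loop_snr * np.cos(phi))   # Tikhonov density shape
    weights /= weights.sum()                   # discrete normalization
    ber = np.array([q_func(sqrt(2.0 * ebno) * c) for c in np.cos(phi)])
    return float((ber * weights).sum())

ideal = q_func(sqrt(2.0 * 10.0 ** 0.6))        # perfect carrier reference
noisy = ber_noisy_reference(6.0)               # noisy reference, 6 dB Eb/N0
print(ideal, noisy)
```

Because Q is decreasing and cos φ ≤ 1, the averaged error rate is always at least the ideal one; the gap widens as the loop SNR drops.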
Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.
Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei
2017-04-01
There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.
Asymptotic Likelihood Distribution for Correlated & Constrained Systems
Agarwal, Ujjwal
2016-01-01
This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio for a total of h parameters, of which 2 are constrained and correlated.
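The asymptotic result in question is Wilks' theorem: with 2 constrained parameters, -2 ln Λ tends to a chi-square distribution with 2 degrees of freedom. A minimal simulation sketch (Gaussian data with known unit variance; all numerical choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, trials = 200, 2, 2000
stats = np.empty(trials)
for i in range(trials):
    x = rng.normal(0.0, 1.0, size=(n, dim))   # null hypothesis mu = 0 holds
    mu_hat = x.mean(axis=0)                   # unconstrained MLE of the mean
    # With known unit variance, -2 log(likelihood ratio) reduces to:
    stats[i] = n * np.sum(mu_hat ** 2)
# Wilks: asymptotically chi-square with df = number of constraints = 2,
# so the mean is near 2 and the 5% critical value is near 5.99.
print(stats.mean(), (stats > 5.99).mean())
```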
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The receiver structures are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, whose structure depends only on M, analogous to those in receivers for coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Zoraghi, Nima; Amiri, Maghsoud; Talebi, Golnaz; Zowghi, Mahdi
2013-12-01
This paper presents a fuzzy multi-criteria decision-making (FMCDM) model by integrating both subjective and objective weights for ranking and evaluating the service quality in hotels. The objective method selects weights of criteria through mathematical calculation, while the subjective method uses judgments of decision makers. In this paper, we use a combination of weights obtained by both approaches in evaluating service quality in hotel industries. A real case study that considered ranking five hotels is illustrated. Examples are shown to indicate capabilities of the proposed method.
The Patient-Worker: A Model for Human Research Subjects and Gestational Surrogates.
Ryman, Emma; Fulfer, Katy
2017-01-13
We propose the 'patient-worker' as a theoretical construct that responds to moral problems that arise with the globalization of healthcare and medical research. The patient-worker model recognizes that some participants in global medical industries are workers and are owed worker's rights. Further, these participants are patient-like insofar as they are beneficiaries of fiduciary relationships with healthcare professionals. We apply the patient-worker model to human subjects research and commercial gestational surrogacy. In human subjects research, subjects are usually characterized as either patients or as workers. Through questioning this dichotomy, we argue that some subject populations fit into both categories. With respect to commercial surrogacy, we enrich feminist discussions of embodied labor by describing how surrogates are beneficiaries of fiduciary obligations. They are not just workers, but patient-workers. Through these applications, the patient-worker model offers a helpful normative framework for exploring what globalized medical industries owe to the individuals who bear the bodily burdens of medical innovation. © 2017 John Wiley & Sons Ltd.
Process criticality accident likelihoods, consequences and emergency planning
International Nuclear Information System (INIS)
McLaughlin, T.P.
1992-01-01
Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with national and international standards and regulations, which require an evaluation of the net benefit of a criticality accident alarm system, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. (Author)
Process criticality accident likelihoods, consequences, and emergency planning
Energy Technology Data Exchange (ETDEWEB)
McLaughlin, T.P.
1991-01-01
Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with standards such as ISO 7753, which mandates that the need for an alarm system be evaluated, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. 13 refs., 1 fig., 1 tab.
Gerpott, Fabiola H; Balliet, Daniel; Columbus, Simon; Molho, Catherine; de Vries, Reinout E
2017-09-04
Interdependence is a fundamental characteristic of social interactions. Interdependence Theory states that 6 dimensions describe differences between social situations. Here we examine if these 6 dimensions describe how people think about their interdependence with others in a situation. We find that people (in situ and ex situ) can reliably differentiate situations according to 5, but not 6, dimensions of interdependence: (a) mutual dependence, (b) power, (c) conflict, (d) future interdependence, and (e) information certainty. This model offers a unique framework for understanding how people think about social situations compared to another recent model of situation construal (DIAMONDS). Furthermore, we examine factors that are theorized to shape perceptions of interdependence, such as situational cues (e.g., nonverbal behavior) and personality (e.g., HEXACO and Social Value Orientation). We also study the implications of subjective interdependence for emotions and cooperative behavior during social interactions. This model of subjective interdependence explains substantial variation in the emotions people experience in situations (i.e., happiness, sadness, anger, and disgust), and explains 24% of the variance in cooperation, above and beyond the DIAMONDS model. Throughout these studies, we develop and validate a multidimensional measure of subjective outcome interdependence that can be used in diverse situations and relationships-the Situational Interdependence Scale (SIS). We discuss how this model of interdependence can be used to better understand how people think about social situations encountered in close relationships, organizations, and society. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Conlin, Sarah E; Douglass, Richard P; Ouch, Staci
2017-10-26
The present study examined the link between discrimination and the three components of subjective wellbeing (positive and negative affect and life satisfaction) among a cisgender sample of lesbian, gay, and bisexual (LGB) adults. Specifically, we investigated internalized homonegativity and expectations of rejection as potential mediators of the links between discrimination and subjective wellbeing among a sample of 215 participants. Results from our structural equation model demonstrated a strong, positive direct link between discrimination and negative affect. Discrimination also had small, negative indirect effects on life satisfaction through our two mediators. Interestingly, neither discrimination nor our two mediators were related with positive affect, demonstrating the need for future research to uncover potential buffers of this link. Finally, our model evidenced configural, metric, and scalar invariance, suggesting that our model applies well for both women and men. Practical implications and future directions for research are discussed.
Model-based active control of a continuous structure subjected to moving loads
Stancioiu, D.; Ouyang, H.
2016-09-01
Modelling of a structure is an important preliminary step of structural control. The main objectives of the modelling, which are almost always antagonistic, are accuracy and simplicity of the model. The first part of this study focuses on the experimental and theoretical modelling of a structure subjected to the action of one or two decelerating moving carriages modelled as masses. The aim of this part is to obtain a simple but accurate model which includes not only the structure-moving load interaction but also the actuator dynamics. A small-scale rig is designed to represent a four-span continuous metallic bridge structure with miniature guiding rails. A series of tests is run subjecting the structure to the action of one or two minicarriages with different loads, launched along the structure at different initial speeds. The second part is dedicated to model-based control design, where a feedback controller is designed and tested against the validated model. The study shows that positive position feedback is able to improve the system dynamics, but it also reveals some of the limitations of state-space methods for this type of system.
Concrete model for finite element analysis of structures subjected to severe damages
International Nuclear Information System (INIS)
Jamet, Ph.; Millard, A.; Hoffmann, A.; Nahas, G.; Barbe, B.
1984-01-01
A specific concrete model has been developed, in order to perform mechanical analysis of civil engineering structures, when subjected to accidental loadings, leading to severe damages. Its formulation is based on the physical mechanisms, which have been observed on laboratory specimens. The model has been implemented into the CASTEM finite element system, and the case of a concrete slab perforation by a rigid missile has been considered. The qualitative behaviour of the structure is well predicted by the model. Comparison between numerical and experimental results is also performed, using two main curves: missile velocity versus penetration depth; reaction forces versus time. (Author) [pt
Likelihood analysis of supersymmetric SU(5) GUTs
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)
2017-02-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)
Likelihood analysis of supersymmetric SU(5) GUTs
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others
2016-10-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
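As a hedged sketch of the RMNE side of this comparison: for a mixture without dropout, a random person is not excluded at a locus only if both of their alleles lie in the mixture's allele set, so under Hardy-Weinberg equilibrium the per-locus RMNE is the squared total frequency of the observed alleles. The allele frequencies below are hypothetical:

```python
def locus_rmne(mixture_allele_freqs):
    """Per-locus RMNE for a mixture without dropout: the probability
    that both alleles of a random person fall inside the mixture's
    allele set (Hardy-Weinberg equilibrium assumed)."""
    p = sum(mixture_allele_freqs)   # total frequency of observed alleles
    return p ** 2

# Hypothetical frequencies of the mixture's alleles at three loci
loci = [
    [0.18, 0.25, 0.10],
    [0.30, 0.12],
    [0.22, 0.15, 0.08],
]
rmne = 1.0
for freqs in loci:
    rmne *= locus_rmne(freqs)       # loci treated as independent
print(f"combined RMNE = {rmne:.4e}")
```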
Modeling the time--varying subjective quality of HTTP video streams with rate adaptations.
Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C
2014-05-01
Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
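The Hammerstein-Wiener structure itself is simple: a static input nonlinearity, a linear dynamic block, and a static output nonlinearity. The sketch below is a toy version with illustrative, hand-picked blocks, not the nonlinearities or filter fitted in the paper:

```python
import numpy as np

def hammerstein_wiener(u, f_in, b, a, g_out):
    """Hammerstein-Wiener cascade: static input nonlinearity f_in, a
    first-order linear IIR filter with coefficients b = (b0, b1) and
    a = (1, a1), then static output nonlinearity g_out."""
    x = f_in(np.asarray(u, dtype=float))
    y = np.zeros_like(x)
    for t in range(len(x)):
        y[t] = b[0] * x[t]
        if t > 0:
            y[t] += b[1] * x[t - 1] - a[1] * y[t - 1]
    return g_out(y)

# Toy example: constant-bitrate segment mapped onto a 0-100 quality scale
bitrate = np.full(200, 4.0)                           # hypothetical trace
f_in = np.sqrt                                        # compressive mapping
g_out = lambda y: 100.0 / (1.0 + np.exp(-(y - 1.0)))  # saturating scale
tvsq = hammerstein_wiener(bitrate, f_in, (0.2, 0.0), (1.0, -0.8), g_out)
print(round(tvsq[-1], 2))   # settles near 73.11 as the filter converges
```

The linear block supplies the memory (hysteresis-like smoothing), while the two static maps capture the nonlinear compression of bitrate and of the quality scale.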
Comparison of likelihood testing procedures for parallel systems with covariances
International Nuclear Information System (INIS)
Ayman Baklizi; Isa Daud; Noor Akma Ibrahim
1998-01-01
In this paper we investigate and compare the behaviour of the likelihood ratio, Rao's, and Wald's statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994); however, their relative performances in finite samples are largely unknown. A Monte Carlo experiment is conducted to estimate the sizes and powers of these statistics for complete samples and in the presence of time censoring. The statistics are compared according to their attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic has the best performance in terms of attaining the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be the most appropriate for the model under study, especially for small sample sizes.
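The three statistics compared here have a standard textbook form. A minimal sketch for a simpler setting (a binomial proportion, not the parallel-systems regression model of the paper) shows how each is computed and that they are numerically close:

```python
import math

def binomial_test_statistics(k, n, p0):
    """Likelihood ratio, Rao score, and Wald statistics for H0: p = p0
    in a binomial model. All three are referred to a chi-square
    distribution with 1 degree of freedom."""
    p_hat = k / n
    def loglik(p):                      # up to an additive constant
        return k * math.log(p) + (n - k) * math.log(1 - p)
    lr = 2.0 * (loglik(p_hat) - loglik(p0))
    score = (k - n * p0) ** 2 / (n * p0 * (1 - p0))       # U(p0)^2 / I(p0)
    wald = n * (p_hat - p0) ** 2 / (p_hat * (1 - p_hat))  # uses I(p_hat)
    return lr, score, wald

lr, score, wald = binomial_test_statistics(k=62, n=100, p0=0.5)
print(lr, score, wald)   # asymptotically equivalent, numerically close
```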
Zvelc, Gregor
2010-12-01
In the article the author presents a model of interpersonal relationships based on integration of object relations theory and theory of attachment. He proposes three main bipolar dimensions of interpersonal relationships: Independence - Dependence, Connectedness - Alienation and Reciprocity - Self-absorption. The author also proposes that it is important to distinguish between two main types of adult interpersonal relationships: object and subject relations. Object relations describe relationships in which the other person is perceived as an object that serves the satisfaction of the first person's needs. Object relations are a manifestation of the right pole of the three main dimensions of interpersonal relationships (Dependence, Alienation and Self-absorption). Subject relations are a counter-pole to the concept of object relations. They describe relationships with other people who are experienced as subjects with their own wishes, interests and needs. Subject relations are a manifestation of the left pole of the main dimensions (Independence, Connectedness and Reciprocity). In this article the author specifically focuses on definitions of object relations in adulthood through a description of six sub-dimensions of object relations: Symbiotic Merging, Separation Anxiety, Social Isolation, Fear of Engulfment, Egocentrism and Narcissism. Every sub-dimension is described in connection to adaptive and pathological functioning. Further research is needed to test the clinical and scientific validity of the model.
An accurate fatigue damage model for welded joints subjected to variable amplitude loading
Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.
2017-12-01
Researchers have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner's rule. However, requirements for additional material parameters or S-N curve modifications restrict their practical application, and most of these models have not been applied under variable amplitude loading conditions. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in standard codes of practice. The model is verified against experimentally derived damage evolution curves for C45 and 16Mn steels and gives better agreement than previous models. The fatigue lives predicted by the model also correlate better with experimental results than those of previous models, as shown in earlier published work by the authors. In this paper, the proposed model is applied to welded joints subjected to variable amplitude loadings. The model gives around 8% shorter fatigue lives than the Eurocode-based Miner's rule, which shows the importance of applying accurate fatigue damage models to welded joints.
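The baseline these models are measured against, Miner's rule, sums damage fractions n_i/N_i over loading blocks. A minimal sketch with an illustrative single-slope S-N curve (the constants are hypothetical, not from any design code):

```python
def sn_cycles_to_failure(stress_range, C=2.0e12, m=3.0):
    """Single-slope S-N curve N = C / S^m; C and m are illustrative
    constants, not values taken from any standard."""
    return C / stress_range ** m

def miner_damage(blocks):
    """Miner's linear damage sum D = sum(n_i / N_i); failure is
    predicted when D reaches 1."""
    return sum(n / sn_cycles_to_failure(s) for s, n in blocks)

# Hypothetical variable-amplitude history: (stress range in MPa, cycles)
history = [(100.0, 5.0e5), (80.0, 1.0e6), (60.0, 2.0e6)]
D = miner_damage(history)
print(D)   # 0.25 + 0.256 + 0.216 = 0.722
```

Miner's rule is linear and sequence-independent, which is exactly the limitation that load-sequence-sensitive models such as the one proposed here aim to remove.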
Hammer, K A; Janes, F R
1995-01-01
The objectives for developing the participative method of subject definition were to gain all the relevant information to a high level of fidelity in the earliest stages of the work and so be able to build a realistic model at reduced labour cost. In order to better integrate the two activities--information acquisition and mathematical modelling--a procedure was devised using the methods of interactive management to facilitate teamwork. This procedure provided the techniques to create suitable working relationships between the two groups, the informants and the modellers, so as to maximize their free and accurate intercommunication, both during the initial definition of the linen service and during the monitoring of the accuracy and reality of the draft models. The objectives of this project were met in that the final model was quickly validated and approved, at a low labour cost.
Statistical damage constitutive model for rocks subjected to cyclic stress and cyclic temperature
Zhou, Shu-Wei; Xia, Cai-Chu; Zhao, Hai-Bin; Mei, Song-Hua; Zhou, Yu
2017-10-01
A constitutive model of rocks subjected to cyclic stress-temperature was proposed. Based on statistical damage theory, the damage constitutive model with Weibull distribution was extended. Influence of model parameters on the stress-strain curve for rock reloading after stress-temperature cycling was then discussed. The proposed model was initially validated by rock tests for cyclic stress-temperature and only cyclic stress. Finally, the total damage evolution induced by stress-temperature cycling and reloading after cycling was explored and discussed. The proposed constitutive model is reasonable and applicable, describing well the stress-strain relationship during stress-temperature cycles and providing a good fit to the test results. Elastic modulus in the reference state and the damage induced by cycling affect the shape of reloading stress-strain curve. Total damage induced by cycling and reloading after cycling exhibits three stages: initial slow increase, mid-term accelerated increase, and final slow increase.
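Statistical damage models of this family commonly take the Weibull form D = 1 - exp(-(ε/α)^β) with an effective-stress relation σ = Eε(1 - D). The sketch below uses illustrative parameters, not values fitted to the paper's cyclic stress-temperature tests:

```python
import numpy as np

def weibull_damage(strain, alpha, beta):
    """Weibull-form damage variable D = 1 - exp(-(eps/alpha)^beta)."""
    return 1.0 - np.exp(-(strain / alpha) ** beta)

def stress(strain, E, alpha, beta):
    """Effective-stress relation sigma = E * eps * (1 - D)."""
    return E * strain * (1.0 - weibull_damage(strain, alpha, beta))

# Illustrative parameters (not fitted to the paper's tests)
E, alpha, beta = 30e3, 0.008, 2.0          # modulus (MPa), scale, shape
eps = np.linspace(0.0, 0.02, 201)
sig = stress(eps, E, alpha, beta)
peak_strain = eps[np.argmax(sig)]
# Analytically the peak lies at alpha * (1/beta)**(1/beta) ~ 0.0057
print(peak_strain)
```

Lowering the reference-state modulus E or adding an initial damage term shifts and flattens this curve, which is how cycling-induced damage reshapes the reloading stress-strain response.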
A dermal model for spray painters, part I : subjective exposure modelling of spray paint deposition
Brouwer, D.H.; Semple, S.; Marquart, J.; Cherrie, J.W.
2001-01-01
The discriminative power of existing dermal exposure models is limited. Most models only allow occupational hygienists to rank workers between and within workplaces according to broad bands of dermal exposure. No allowance is made for the work practices of different individuals. In this study a
Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M
2018-05-07
A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG, and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
DEFF Research Database (Denmark)
Mantel, Claire; Bech, Søren; Korhonen, Jari
2015-01-01
Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight-dimming algorithms is set up. Subjective results are then compared with both objective measures and objective quality metrics using different display models. The first analysis indicates that the most significant objective features are temporal variations, power consumption (probably representing leakage...
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
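The conventional least-squares criterion that THESEUS improves on can be sketched with the Kabsch algorithm, which finds the rigid rotation and translation minimizing RMSD between two coordinate sets (a generic sketch, not THESEUS's maximum likelihood procedure):

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Ordinary least-squares superposition of P onto Q (the Kabsch
    algorithm): the rigid rotation and translation minimizing RMSD.
    This is the conventional LS criterion, not THESEUS's ML method."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_fit = Pc @ R.T + Q.mean(0)
    rmsd = np.sqrt(((P_fit - Q) ** 2).sum() / len(P))
    return P_fit, rmsd

rng = np.random.default_rng(1)
Q = rng.normal(size=(50, 3))                 # synthetic coordinate set
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])    # rotated and shifted copy
_, rmsd = kabsch_superpose(P, Q)
print(rmsd)   # essentially zero for an exact rigid transform
```

LS weighting treats every atom equally; the ML criterion described in the abstract instead down-weights variable regions, which is why it needs no subjective pruning of atoms.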
Directory of Open Access Journals (Sweden)
Rambiritch V
2016-07-01
Virendra Rambiritch,1 Poobalan Naidoo,2 Breminand Maharaj,1 Goonaseelan Pillai3 1University of KwaZulu-Natal, Durban, 2Department of Internal Medicine, RK Khan Regional Hospital, Chatsworth, South Africa; 3Novartis Pharma AG, Basel, Switzerland Aim: The aim of this study was to describe the pharmacokinetics (PK) of glibenclamide in poorly controlled South African type 2 diabetic subjects using noncompartmental and model-based methods. Methods: A total of 24 subjects with type 2 diabetes were administered increasing doses (0 mg/d, 2.5 mg/d, 5 mg/d, 10 mg/d, and 20 mg/d) of glibenclamide daily at 2-week intervals. Plasma glibenclamide, glucose, and insulin determinations were performed. Blood sampling times were 0, 30, 60, 90, and 120 minutes (post-breakfast sampling) and 240, 270, 300, 330, 360, and 420 minutes (post-lunch sampling) on days 14, 28, 42, 56, and 70 for doses of 0 mg, 2.5 mg, 5.0 mg, 10 mg, and 20 mg, respectively. Blood sampling was performed after steady state was reached. The 24 individuals in the data set contributed a total of 841 observation records. The PK was analyzed using noncompartmental analysis methods implemented in WinNonLin®, and population PK analysis using NONMEM®. Glibenclamide concentration data were log-transformed prior to fitting. Results: A two-compartment disposition model was selected after evaluating one-, two-, and three-compartment models to describe the time course of glibenclamide plasma concentrations. The one-compartment model adequately described the data; however, the two-compartment model provided a better fit. The three-compartment model failed to achieve successful convergence. A more complex model, accounting for the enterohepatic recirculation observed in the data, was unsuccessful. Conclusion: In South African diabetic subjects, glibenclamide demonstrates linear PK and was best…
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E.; del-Ama, Antonio J.; Dimbwadyo, Iris; Moreno, Juan C.; Florez, Julian; Pons, Jose L.
2018-01-01
The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences for the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower-limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represents an improvement of up to 66% compared to the traditional approach of assuming no relative motion between the user and the exoskeleton. PMID:29755336
Gentz, Steven J.; Ordway, David O; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approx. 9 inches from the source) dominated by direct wave propagation, mid-field environment (approx. 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This report documents the outcome of the assessment.
A Vector Autoregressive Model for Electricity Prices Subject to Long Memory and Regime Switching
DEFF Research Database (Denmark)
Haldrup, Niels; Nielsen, Frank; Nielsen, Morten Ørregaard
2007-01-01
A regime-dependent VAR model is suggested that allows long memory (fractional integration) in each of the regime states as well as the possibility of fractional cointegration. The model is relevant in describing the price dynamics of electricity prices, where the transmission of power is subject to occasional congestion periods. For a system of bilateral prices, non-congestion means that electricity prices are identical, whereas congestion makes prices depart. Hence, the joint price dynamics implies switching between essentially a univariate price process under non-congestion and a bivariate price...
Directory of Open Access Journals (Sweden)
Liangsuo Ma
2015-01-01
Cocaine dependence is associated with increased impulsivity in humans. Both cocaine dependence and impulsive behavior are under the regulatory control of cortico-striatal networks. One behavioral laboratory measure of impulsivity is response inhibition (the ability to withhold a prepotent response), in which altered patterns of regional brain activation during executive tasks, in the service of normal performance, are frequently found in cocaine-dependent (CD) subjects studied with functional magnetic resonance imaging (fMRI). However, little is known about aberrations in specific directional neuronal connectivity in CD subjects. The present study employed fMRI-based dynamic causal modeling (DCM) to study the effective (directional) neuronal connectivity associated with response inhibition in CD subjects, elicited under performance of a Go/NoGo task with two levels of NoGo difficulty (Easy and Hard). Performance on the Go/NoGo task was not significantly different between CD subjects and controls. The DCM analysis revealed that prefrontal-striatal connectivity was modulated (influenced) during the NoGo conditions for both groups. The effective connectivity from left (L) anterior cingulate cortex (ACC) to L caudate was similarly modulated during the Easy NoGo condition for both groups. During the Hard NoGo condition in controls, the effective connectivity from right (R) dorsolateral prefrontal cortex (DLPFC) to L caudate became more positive, and the effective connectivity from R ventrolateral prefrontal cortex (VLPFC) to L caudate became more negative. In CD subjects, the effective connectivity from L ACC to L caudate became more negative during the Hard NoGo condition. These results indicate that during Hard NoGo trials in CD subjects, the ACC, rather than the DLPFC or VLPFC, influenced the caudate during response inhibition.
Modal analysis of human body vibration model for Indian subjects under sitting posture.
Singh, Ishbir; Nigam, S P; Saran, V H
2015-01-01
The need for and importance of modelling in human body vibration research are well established. The study of biodynamic responses of human beings can be classified into experimental and analytical methods. In the past few decades, plenty of mathematical models have been developed, based on diverse field measurements, to describe the biodynamic responses of human beings. In this paper, a complete study on a lumped-parameter model derived from 50th-percentile anthropometric data for a seated 54-kg Indian male subject without backrest support under free undamped conditions has been carried out, considering human body segments to be of ellipsoidal shape. Conventional lumped-parameter modelling considers the human body as several rigid masses interconnected by springs and dampers. In this study, the concept of the mass of the interconnecting springs has been incorporated, and the eigenvalues thus obtained are found to be closer to the values reported in the literature. The results clearly establish the decoupling of vertical and fore-and-aft oscillations. Such mathematical modelling of human body vibration helps in validating experimental investigations of the ride comfort of a seated subject and in better understanding the possible human response to single- and multi-axial excitations.
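The eigenvalue computation underlying such lumped-parameter models can be sketched for a generic undamped mass-spring chain (the chain layout and parameter values are illustrative, not the paper's 54-kg anthropometric model):

```python
import numpy as np

def natural_frequencies(masses, stiffnesses):
    """Undamped natural frequencies (Hz) of a serial lumped-parameter
    chain: spring 0 connects mass 0 to ground, spring i (i > 0) connects
    mass i to mass i-1. Solves the generalized eigenproblem K x = w^2 M x."""
    n = len(masses)
    M = np.diag(masses)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffnesses):
        K[i, i] += k
        if i > 0:                      # coupling terms for inter-mass springs
            K[i - 1, i - 1] += k
            K[i, i - 1] -= k
            K[i - 1, i] -= k
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))  # squared angular freqs
    return np.sort(np.sqrt(np.abs(np.real(w2)))) / (2 * np.pi)
```

For a single mass m on a spring k, this reduces to the textbook f = sqrt(k/m)/(2π); the paper's refinement of adding spring mass would enter through extra terms in M.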
Szwedowski, T D; Fialkov, J; Whyne, C M
2011-01-01
Developing a more complete understanding of the mechanical response of the craniofacial skeleton (CFS) to physiological loads is fundamental to improving treatment for traumatic injuries, reconstruction due to neoplasia, and deformities. Characterization of the biomechanics of the CFS is challenging due to its highly complex structure and heterogeneity, motivating the utilization of experimentally validated computational models. As such, the objective of this study was to develop, experimentally validate, and parametrically analyse a patient-specific finite element (FE) model of the CFS to elucidate a better understanding of the factors that are of intrinsic importance to the skeletal structural behaviour of the human CFS. An FE model of a cadaveric craniofacial skeleton was created from subject-specific computed tomography data. The model was validated based on bone strain measurements taken under simulated physiological-like loading through the masseter and temporalis muscles (which are responsible for the majority of craniofacial physiologic loading due to mastication). The baseline subject-specific model using locally defined cortical bone thicknesses produced the strongest correlation to the experimental data (r2 = 0.73). Large effects on strain patterns arising from small parametric changes in cortical thickness suggest that the very thin bony structures present in the CFS are crucial to characterizing the local load distribution in the CFS accurately.
CONSTITUTIVE MODEL OF STEEL FIBRE REINFORCED CONCRETE SUBJECTED TO HIGH TEMPERATURES
Directory of Open Access Journals (Sweden)
Lukas Blesak
2016-12-01
Research on structural load-bearing systems exposed to elevated temperatures is an active topic in civil engineering. Carrying out a full-size experiment on a specimen exposed to fire is a challenging task, considering not only the preparation labour but also the costs involved. Such experiments are therefore simulated using various software packages and computational models in order to predict the structural behaviour as accurately as possible. This paper describes such a procedure in detail, focusing on the software simulation. The proposed constitutive model is based on the stress-strain curve and allows SFRC material behaviour in bending to be predicted at ambient and elevated temperatures. The SFRC material is represented by initial linear behaviour, an instantaneous drop of stress after the initial crack occurs, and a consequent specific ductility, which influences the behaviour of the modelled specimen under the applied loading. The model is calibrated against experimental results using the ATENA FEM software.
Experiential learning model on entrepreneurship subject for improving students’ soft skills
Directory of Open Access Journals (Sweden)
Lina Rifda Naufalin
2017-01-01
The objective of the research was to improve students' soft skills in an entrepreneurship subject by using an experiential learning model. It was expected that the learning model would upgrade students' soft skills, as indicated by higher confidence, result and job orientation, courage to take risks, leadership, originality, and future orientation. It was a classroom action research study using Kemmis and McTaggart's design model, conducted over two cycles. The subjects were economics education students of the 2015/2016 cohort. The results showed that the experiential learning model improved students' soft skills, with increases in the dimensions of confidence (52.1%), result orientation (22.9%), courage to take risks (10.4%), leadership (12.5%), originality (10.4%), and future orientation (18.8%). It can be concluded that the experiential learning model was effective in improving students' soft skills in the entrepreneurship subject, with the dimension of confidence showing the highest rise. Students' soft skills were shaped through continuous stimulus as they got involved in the implementation.
Likelihood functions for the analysis of single-molecule binned photon sequences
Energy Technology Data Exchange (ETDEWEB)
Gopich, Irina V., E-mail: irinag@niddk.nih.gov [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD 20892 (United States)
2012-03-02
Graphical abstract: Folding of a protein with attached fluorescent dyes, the underlying conformational trajectory of interest, and the observed binned photon trajectory. Highlights: • A sequence of photon counts can be analyzed using a likelihood function. • The exact likelihood function for a two-state kinetic model is provided. • Several approximations are considered for an arbitrary kinetic model. • Improved likelihood functions are obtained to treat sequences of FRET efficiencies. Abstract: We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained.
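The HMM-style limit mentioned above (state effectively constant within a bin, Poisson photon statistics) can be sketched as a standard forward algorithm for a two-state model. The rate and count parameters are illustrative, and this is the approximate likelihood, not the paper's exact expression:

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

def binned_photon_loglik(counts, dt, k12, k21, n1, n2):
    """Approximate log-likelihood of binned photon counts for a two-state
    kinetic model, assuming the state is constant within each bin (the
    HMM limit). n1, n2 are photon count rates (photons/s) in each state."""
    K = np.array([[-k12,  k21],
                  [ k12, -k21]])             # rate matrix (columns sum to 0)
    T = expm(K * dt)                          # state propagator over one bin
    p = np.array([k21, k12]) / (k12 + k21)    # equilibrium populations
    ll = 0.0
    for n in counts:
        e = poisson.pmf(n, np.array([n1, n2]) * dt)  # emission probabilities
        p = e * p                             # weight states by emission
        s = p.sum()
        ll += np.log(s)                       # accumulate scaled likelihood
        p = T @ (p / s)                       # propagate normalized vector
    return ll
```

The scaling trick (normalizing `p` each bin and summing logs) keeps the computation stable for long photon trajectories.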
Cases in which ancestral maximum likelihood will be confusingly misleading.
Handelman, Tomer; Chor, Benny
2017-05-07
Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent has been open for a long time. Mossel et al. (2009) showed that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple four-taxon (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.
Outlier Detection in Nonlinear Regression with the Likelihood Displacement Statistical Method
Directory of Open Access Journals (Sweden)
Siti Tabi'atul Hasanah
2012-11-01
An outlier is an observation that differs markedly (is extreme) from the other observations, or data that do not follow the general pattern of the model. Sometimes outliers provide information that cannot be provided by other data, which is why outliers should not simply be eliminated; an outlier can also be an influential observation. Many methods can be used to detect outliers. Previous studies addressed outlier detection in linear regression; here, detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the likelihood displacement (LD) statistic, a method that flags outliers by removing the suspected observation and measuring the resulting change in the maximized likelihood; parameters are estimated by the maximum likelihood method. The accuracy of the LD method in detecting outliers is then shown by comparing the MSE of LD with the MSE of the regression in general. The test statistic used is Λ. The initial hypothesis is rejected when an observation is shown to be an outlier.
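The case-deletion recipe behind likelihood displacement can be sketched as follows. For brevity this uses an ordinary normal-errors linear regression in place of the paper's multiplicative nonlinear model; the interface is illustrative:

```python
import numpy as np

def loglik(y, X, beta, sigma2):
    """Normal-errors log-likelihood of the full sample at (beta, sigma2)."""
    r = y - X @ beta
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * (r @ r) / sigma2

def fit_mle(y, X):
    """Maximum likelihood fit: least-squares beta, MLE variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    return beta, sigma2

def likelihood_displacement(y, X):
    """LD_i = 2 * [l(theta_hat) - l(theta_hat_(i))], where theta_hat_(i)
    is estimated with case i deleted and both terms are evaluated on the
    full sample. Large LD_i flags case i as a potential outlier."""
    beta_full, s2_full = fit_mle(y, X)
    l_full = loglik(y, X, beta_full, s2_full)
    ld = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b_i, s2_i = fit_mle(y[mask], X[mask])
        ld[i] = 2 * (l_full - loglik(y, X, b_i, s2_i))
    return ld
```

Because `l_full` is the full-sample maximum, every LD value is non-negative, and a gross outlier dominates the vector.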
Central Pressure Appraisal: Clinical Validation of a Subject-Specific Mathematical Model.
Directory of Open Access Journals (Sweden)
Francesco Tosello
Current evidence suggests that aortic blood pressure has superior prognostic value with respect to brachial pressure for cardiovascular events, but direct measurement is not feasible in daily clinical practice. The aim of the present study is the clinical validation of a multiscale mathematical model for non-invasive appraisal of central blood pressure from subject-specific characteristics. A total of 51 young males were selected for the present study. Aortic systolic and diastolic pressures were estimated with a mathematical model and compared to the most widely used non-invasive validated technique (SphygmoCor device, AtCor Medical, Australia). SphygmoCor was calibrated with diastolic and systolic brachial pressures obtained with a sphygmomanometer, while model inputs consisted of brachial pressure, height, weight, age, left-ventricular end-systolic and end-diastolic volumes, and data from a pulse wave velocity study. Model-estimated systolic and diastolic central blood pressures were significantly related to SphygmoCor-assessed central systolic (r = 0.65, p < 0.0001) and diastolic (r = 0.84, p < 0.0001) blood pressures. The model showed a significant overestimation of systolic pressure (+7.8 (-2.2; 14) mmHg, p = 0.0003) and a significant underestimation of diastolic values (-3.2 (-7.5; 1.6) mmHg, p = 0.004), which imply a significant overestimation of central pulse pressure. Interestingly, the model prediction errors mirror the mean errors reported in large meta-analyses characterizing the use of the SphygmoCor when non-invasive calibration is performed. In conclusion, multiscale mathematical model predictions are significantly related to SphygmoCor ones. Model-predicted systolic and diastolic aortic pressures differed from SphygmoCor-obtained pressures by less than 10 mmHg in 51% and 84% of the subjects, respectively.
Subjective value of risky foods for individual domestic chicks: a hierarchical Bayesian model.
Kawamori, Ai; Matsushima, Toshiya
2010-05-01
For animals to decide which prey to attack, the gain and delay of the food item must be integrated in a value function. However, the subjective value is not obtained by expected profitability when it is accompanied by risk. To estimate the subjective value, we examined choices in a cross-shaped maze with two colored feeders in domestic chicks. When tested by a reversal in food amount or delay, chicks changed choices similarly in both conditions (experiment 1). We therefore examined risk sensitivity for amount and delay (experiment 2) by supplying one feeder with food of fixed profitability and the alternative feeder with high- or low-profitability food at equal probability. Profitability varied in amount (groups 1 and 2 at high and low variance) or in delay (group 3). To find the equilibrium, the amount (groups 1 and 2) or delay (group 3) of the food in the fixed feeder was adjusted in a total of 18 blocks. The Markov chain Monte Carlo method was applied to a hierarchical Bayesian model to estimate the subjective value. Chicks undervalued the variable feeder in group 1 and were indifferent in group 2 but overvalued the variable feeder in group 3 at a population level. Re-examination without the titration procedure (experiment 3) suggested that the subjective value was not absolute for each option. When the delay was varied, the variable option was often given a paradoxically high value depending on fixed alternative. Therefore, the basic assumption of the uniquely determined value function might be questioned.
High-order Composite Likelihood Inference for Max-Stable Distributions and Processes
Castruccio, Stefano; Huser, Raphaë l; Genton, Marc G.
2015-01-01
In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
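The idea of composite likelihood truncation can be illustrated with a pairwise (order-2) composite log-likelihood. A zero-mean Gaussian field with exponential covariance stands in here for the max-stable models of the paper, whose pairwise densities are considerably more involved; the `dmax` distance cutoff plays the role of the truncation discussed in the text:

```python
import numpy as np

def pairwise_cl(y, coords, sigma2, rho, dmax=np.inf):
    """Pairwise composite log-likelihood for a zero-mean Gaussian field
    with covariance sigma2 * exp(-d / rho), keeping only pairs of sites
    separated by less than dmax (composite likelihood truncation)."""
    n = len(y)
    cl = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d > dmax:
                continue                       # truncated pair: skip it
            c = sigma2 * np.exp(-d / rho)      # covariance of the pair
            cov = np.array([[sigma2, c], [c, sigma2]])
            v = np.array([y[i], y[j]])
            _, logdet = np.linalg.slogdet(cov)
            cl += -0.5 * (2 * np.log(2 * np.pi) + logdet
                          + v @ np.linalg.solve(cov, v))
    return cl
```

Shrinking `dmax` discards distant, weakly informative pairs, trading statistical efficiency for a computation that scales with the number of retained pairs rather than with the full joint density.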
Nunes, Natalie; Ambler, Gareth; Hoo, Wee-Liak; Naftalin, Joel; Foo, Xulin; Widschwendter, Martin; Jurkovic, Davor
2013-11-01
This study aimed to assess the accuracy of the International Ovarian Tumour Analysis (IOTA) logistic regression models (LR1 and LR2) and that of subjective pattern recognition (PR) for the diagnosis of ovarian cancer. This was a prospective single-center study in a general gynecology unit of a tertiary hospital over 33 months. There were 292 consecutive women who underwent surgery after an ultrasound diagnosis of an adnexal tumor. All examinations were performed by a single level-2 ultrasound operator, according to the IOTA guidelines. The malignancy likelihood was calculated using the IOTA LR1 and LR2. The women were then examined separately by an expert operator using subjective PR. These results were compared to operative findings and histology. The sensitivity, specificity, area under the curve (AUC), and accuracy of the 3 methods were calculated and compared. The AUCs for LR1 and LR2 were 0.94 [95% confidence interval (CI), 0.92-0.97] and 0.93 (95% CI, 0.90-0.96), respectively. Subjective PR gave a positive likelihood ratio (LR+ve) of 13.9 (95% CI, 7.84-24.6) and a negative likelihood ratio (LR-ve) of 0.049 (95% CI, 0.022-0.107). The corresponding LR+ve and LR-ve for LR1 were 3.33 (95% CI, 2.85-3.55) and 0.03 (95% CI, 0.01-0.10), and for LR2 were 3.58 (95% CI, 2.77-4.63) and 0.052 (95% CI, 0.022-0.123). The accuracy of PR was 0.942 (95% CI, 0.908-0.966), which was significantly higher than 0.829 (95% CI, 0.781-0.870) for LR1 and 0.836 (95% CI, 0.788-0.872) for LR2 (P …). The IOTA LR1 and LR2 performed similarly in nonexperts' hands when compared to the original and validation IOTA studies. The PR method was a more accurate test for diagnosing ovarian cancer than either of the IOTA models.
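Diagnostic likelihood ratios of the kind reported above follow directly from sensitivity and specificity; the counts below are made up for illustration:

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Positive and negative diagnostic likelihood ratios from a 2x2
    confusion table: LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    sens = tp / (tp + fn)   # true positive rate
    spec = tn / (tn + fp)   # true negative rate
    return sens / (1 - spec), (1 - sens) / spec
```

For example, 90% sensitivity with 80% specificity gives LR+ = 4.5 and LR- = 0.125; a test like the PR method above, with LR+ near 14 and LR- near 0.05, shifts post-test odds far more strongly in both directions.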
Xu, Songhua; Hudson, Kathleen; Bradley, Yong; Daley, Brian J.; Frederick-Dyer, Katherine; Tourassi, Georgia
2012-02-01
The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of visual similarity on example cases. The purpose of our study is twofold: (i) to better discern the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) to explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they could predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-values ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
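The pairwise inter-observer agreement measured by the kappa statistic corrects raw agreement for chance, and can be computed as:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels over the same
    items: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2.get(k, 0) for k in c1) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)
```

Values near 0 mean agreement no better than chance; the conventional "poor" to "fair" range reported above corresponds roughly to kappa below about 0.4.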
Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB
Millar, Russell B
2011-01-01
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis…
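The basic workflow such texts cover, writing down a negative log-likelihood and minimizing it numerically, can be sketched for an exponential model (an illustrative example, not taken from the book):

```python
import numpy as np
from scipy.optimize import minimize

def mle_exponential(data):
    """Maximum likelihood estimate of an exponential rate parameter by
    numerical optimization. The closed-form answer, 1 / mean(data),
    provides an independent check on the optimizer."""
    data = np.asarray(data, dtype=float)
    # Negative log-likelihood of rate p: -(n log p - p * sum(data))
    nll = lambda p: -np.sum(np.log(p) - p * data)
    res = minimize(nll, x0=[1.0], bounds=[(1e-9, None)])
    return res.x[0]
```

The same pattern, swapping in a different `nll`, carries over to far richer models; that generality is what makes numerical maximum likelihood the workhorse the abstract describes.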
Subject-specific computational modeling of DBS in the PPTg area
Directory of Open Access Journals (Sweden)
Laura M. Zitella
2015-07-01
Deep brain stimulation (DBS) in the pedunculopontine tegmental nucleus (PPTg) has been proposed to alleviate medically intractable gait difficulties associated with Parkinson's disease. Clinical trials have shown somewhat variable outcomes, stemming in part from surgical targeting variability, modulation of fiber pathways implicated in side effects, and a general lack of mechanistic understanding of DBS in this brain region. Subject-specific computational models of DBS are a promising tool to investigate the underlying therapy and side effects. In this study, a parkinsonian rhesus macaque was implanted unilaterally with an 8-contact DBS lead in the PPTg region. Fiber tracts adjacent to PPTg, including the oculomotor nerve, central tegmental tract, and superior cerebellar peduncle, were reconstructed from a combination of pre-implant 7T MRI, post-implant CT, and post-mortem histology. These structures were populated with axon models and coupled with a finite element model simulating the voltage distribution in the surrounding neural tissue during stimulation. This study introduces two empirical approaches to evaluate model parameters. First, incremental monopolar cathodic stimulation (20 Hz, 90 µs pulse width) was evaluated for each electrode, during which a right eyelid flutter was observed at the proximal four contacts (-1.0 to -1.4 mA). These current amplitudes followed closely the model-predicted activation of the oculomotor nerve when assuming an anisotropic conduction medium. Second, PET imaging was collected OFF-DBS and twice during DBS (two different contacts), which supported the model-predicted activation of the central tegmental tract and superior cerebellar peduncle. Together, subject-specific models provide a framework to more precisely predict pathways modulated by DBS.
Sensitivity of subject-specific models to errors in musculo-skeletal geometry.
Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N
2012-09-21
Subject-specific musculo-skeletal models of the lower extremity are an important tool for investigating various biomechanical problems, for instance the results of surgery such as joint replacements and tendon transfers. The aim of this study was to assess the potential effects of errors in musculo-skeletal geometry on subject-specific model results. We performed an extensive sensitivity analysis to quantify the effect of the perturbation of origin, insertion and via points of each of the 56 musculo-tendon parts contained in the model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by only the perturbed musculo-tendon parts and by all the remaining musculo-tendon parts, respectively, during a simulated gait cycle. Results indicated that, for each musculo-tendon part, only two points show a significant sensitivity: its origin, or pseudo-origin, point and its insertion, or pseudo-insertion, point. The most sensitive points belong to those musculo-tendon parts that act as prime movers in the walking movement (insertion point of the Achilles Tendon: LSI=15.56%, OSI=7.17%; origin points of the Rectus Femoris: LSI=13.89%, OSI=2.44%) and as hip stabilizers (insertion points of the Gluteus Medius Anterior: LSI=17.92%, OSI=2.79%; insertion point of the Gluteus Minimus: LSI=21.71%, OSI=2.41%). The proposed priority list provides quantitative information to improve the predictive accuracy of subject-specific musculo-skeletal models. Copyright © 2012 Elsevier Ltd. All rights reserved.
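The local/overall sensitivity comparison can be sketched as peak force changes relative to the nominal solution. This is an assumed form for illustration; the paper's exact LSI/OSI definitions may differ:

```python
import numpy as np

def sensitivity_indices(F_nom, F_pert, k):
    """Illustrative local and overall sensitivity of predicted muscle
    forces. F_nom and F_pert are (n_muscles, n_frames) force trajectories
    over a gait cycle, before and after perturbing an attachment point of
    muscle k. LSI looks at the perturbed muscle itself, OSI at all the
    others; both are peak changes as a percent of the nominal peak force."""
    denom = F_nom.max()
    lsi = 100.0 * np.abs(F_pert[k] - F_nom[k]).max() / denom
    others = np.arange(F_nom.shape[0]) != k
    osi = 100.0 * np.abs(F_pert[others] - F_nom[others]).max() / denom
    return lsi, osi
```

Ranking attachment points by such indices is what yields a priority list of the geometry parameters most worth personalizing.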
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
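The multiplicative action of a shared frailty on the hazard can be sketched with a short simulation; the gamma frailty distribution, constant baseline hazard, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared gamma frailty model: members of cluster i share a latent frailty
# Z_i ~ Gamma(shape=1/theta, scale=theta), so E[Z]=1 and Var[Z]=theta.
# The conditional hazard is h(t | Z_i) = Z_i * h0 with a constant baseline
# h0, i.e. T | Z_i ~ Exponential(rate = Z_i * h0).
theta = 0.2                      # frailty variance (assumed)
h0 = 0.1                         # baseline hazard (assumed)
n_clusters, cluster_size = 5000, 2

Z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
# failure times of cluster members, all sharing their cluster's frailty
T = rng.exponential(scale=1.0 / (Z[:, None] * h0),
                    size=(n_clusters, cluster_size))

# the shared frailty induces positive dependence within clusters
rho = np.corrcoef(T[:, 0], T[:, 1])[0, 1]
print(f"mean frailty ~ {Z.mean():.2f}, within-cluster corr ~ {rho:.2f}")
```

Clusters with a large draw of `Z` fail early on both members, which is exactly the unobserved heterogeneity the frailty term is meant to capture.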
Modeling self on others: An import theory of subjectivity and selfhood.
Prinz, Wolfgang
2017-03-01
This paper outlines an Import Theory of subjectivity and selfhood. Import theory claims that subjectivity is initially perceived as a key feature of other minds before it then becomes imported from other minds to own minds, whereby it lays the ground for mental selfhood. Import theory builds on perception-production matching, which in turn draws on both representational mechanisms and social practices. Representational mechanisms rely on common coding of perception and production. Social practices rely on action mirroring in dyadic interactions. The interplay between mechanisms and practices gives rise to modeling self on others. Individuals become intentional agents in virtue of perceiving others mirroring themselves. The outline of the theory is preceded by an introductory section that locates import theory in the broader context of competing approaches, and it is followed by a concluding section that assesses import theory in terms of empirical evidence and explanatory power. Copyright © 2017 Elsevier Inc. All rights reserved.
Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.
Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K
2014-11-26
The ability to accurately develop subject-specific, input causation models, for blood glucose concentration (BGC) for large input sets can have a significant impact on tightening control for insulin dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input, modeling method for BGC with strong causation attributes that is stable and guards against overfitting to provide an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting critical requirements for effective, long-term, FFC.
Modeling of Melting and Resolidification in Domain of Metal Film Subjected to a Laser Pulse
Directory of Open Access Journals (Sweden)
Majchrzak E.
2016-03-01
Thermal processes in the domain of a thin metal film subjected to a strong laser pulse are discussed. The heating of the domain considered causes the melting and next (after the end of the beam impact) the resolidification of the metal superficial layer. The laser action (a time-dependent bell-type function) is taken into account by the introduction of an internal heat source in the energy equation describing the heat transfer in the domain of the metal film. Taking into account the extremely short duration, extreme temperature gradients, and very small geometrical dimensions of the domain considered, the mathematical model of the process is based on the dual phase lag equation supplemented by suitable boundary-initial conditions. To model the phase transitions, the artificial mushy zone is introduced. At the stage of numerical modeling the Control Volume Method is used. Examples of computations are also presented.
Model of external exposure of population living in the areas subjected to radioactive contamination
International Nuclear Information System (INIS)
Golikov, V.Yu.; Balonov, M.I.
2002-01-01
In the paper, we formulated the general approach to assessment of external doses to the population living in contaminated areas (the model equation and the set of parameters). The model parameters were assessed on the basis of results of monitoring in the environment, phantom experiments, and social and demographic information obtained in the contaminated areas. Verification of the model assessments, performed by comparison with measurements of individual external doses in inhabitants using the thermoluminescent dosimetry method, has shown that dose assessments by the two methods differ by no more than a factor of 1.5 at a confidence level of 95%. In the paper, we present results illustrating specific features of external dose formation in the population living in the areas of Russia subjected to radioactive contamination due to nuclear tests at the Semipalatinsk test site, radioactive releases from the Mayak enterprise, and the Chernobyl accident. (author)
Tryfonidis, Michail
It has been observed that during orbital spaceflight the absence of gravitation related sensory inputs causes incongruence between the expected and the actual sensory feedback resulting from voluntary movements. This incongruence results in a reinterpretation or neglect of gravity-induced sensory input signals. Over time, new internal models develop, gradually compensating for the loss of spatial reference. The study of adaptation of goal-directed movements is the main focus of this thesis. The hypothesis is that during the adaptive learning process the neural connections behave in ways that can be described by an adaptive control method. The investigation presented in this thesis includes two different sets of experiments. A series of dart throwing experiments took place onboard the space station Mir. Experiments also took place at the Biomechanics lab at MIT, where the subjects performed a series of continuous trajectory tracking movements while a planar robotic manipulandum exerted external torques on the subjects' moving arms. The experimental hypothesis for both experiments is that during the first few trials the subjects will perform poorly trying to follow a prescribed trajectory, or trying to hit a target. A theoretical framework is developed that is a modification of the sliding control method used in robotics. The new control framework is an attempt to explain the adaptive behavior of the subjects. Numerical simulations of the proposed framework are compared with experimental results and predictions from competitive models. The proposed control methodology extends the results of the sliding mode theory to human motor control. The resulting adaptive control model of the motor system is robust to external dynamics, even those of negative gain, uses only position and velocity feedback, and achieves bounded steady-state error without explicit knowledge of the system's nonlinearities. In addition, the experimental and modeling results demonstrate that
Martelli, Saulo; Valente, Giordano; Viceconti, Marco; Taddei, Fulvia
2015-01-01
Subject-specific musculoskeletal models have become key tools in the clinical decision-making process. However, the sensitivity of the calculated solution to the unavoidable errors committed while deriving the model parameters from the available information is not fully understood. The aim of this study was to calculate the sensitivity of all the kinematics and kinetics variables to the inter-examiner uncertainty in the identification of the lower limb joint models. The study was based on the computer tomography of the entire lower-limb from a single donor and the motion capture from a body-matched volunteer. The hip, the knee and the ankle joint models were defined following the International Society of Biomechanics recommendations. Using a software interface, five expert anatomists identified on the donor's images the necessary bony locations five times with a three-day time interval. A detailed subject-specific musculoskeletal model was taken from an earlier study, and re-formulated to define the joint axes by inputting the necessary bony locations. Gait simulations were run using OpenSim within a Monte Carlo stochastic scheme, where the locations of the bony landmarks were varied randomly according to the estimated distributions. Trends for the joint angles, moments, and the muscle and joint forces did not substantially change after parameter perturbations. The highest variations were as follows: (a) 11° calculated for the hip rotation angle, (b) 1% BW × H calculated for the knee moment and (c) 0.33 BW calculated for the ankle plantarflexor muscles and the ankle joint forces. In conclusion, the identification of the joint axes from clinical images is a robust procedure for human movement modelling and simulation.
Geng, Yuan
2016-11-01
This study investigated the relationship among emotional intelligence, gratitude, and subjective well-being in a sample of university students. A total of 365 undergraduates completed the emotional intelligence scale, the gratitude questionnaire, and the subjective well-being measures. The results of the structural equation model showed that emotional intelligence is positively associated with gratitude and subjective well-being, that gratitude is positively associated with subjective well-being, and that gratitude partially mediates the positive relationship between emotional intelligence and subjective well-being. Bootstrap test results also revealed that emotional intelligence has a significant indirect effect on subjective well-being through gratitude.
Efficient Bit-to-Symbol Likelihood Mappings
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8-percent reduction in overall area relative to the prior design.
Likelihood-ratio-based biometric verification
Bazen, A.M.; Veldhuis, Raymond N.J.
2002-01-01
This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.
Likelihood Ratio-Based Biometric Verification
Bazen, A.M.; Veldhuis, Raymond N.J.
The paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that, for single-user verification, the likelihood ratio is optimal.
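A minimal sketch of likelihood-ratio scoring for fixed-length feature vectors follows; the one-dimensional Gaussian user and background models with known parameters are purely illustrative assumptions, not the models of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed per-component feature distributions (illustrative only)
mu_user, sigma_user = 1.0, 0.5   # genuine-user model
mu_bg, sigma_bg = 0.0, 1.0       # background (impostor) model

def log_pdf(x, mu, sigma):
    # log density of a univariate Gaussian, evaluated component-wise
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def log_lr(x):
    # log likelihood ratio: log p(x | user) - log p(x | background),
    # summed over the components of the fixed-length feature vector
    return np.sum(log_pdf(x, mu_user, sigma_user) - log_pdf(x, mu_bg, sigma_bg))

genuine = rng.normal(mu_user, sigma_user, size=(500, 8))
impostor = rng.normal(mu_bg, sigma_bg, size=(500, 8))

scores_g = np.array([log_lr(x) for x in genuine])
scores_i = np.array([log_lr(x) for x in impostor])
print(f"genuine mean score: {scores_g.mean():.2f}, "
      f"impostor mean score: {scores_i.mean():.2f}")
```

Accepting when the log likelihood ratio exceeds a threshold is the optimal decision rule the two records above refer to; moving the threshold trades false accepts for false rejects.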
Ride quality evaluation. IV - Models of subjective reaction to aircraft motion
Jacobson, I. D.; Richards, L. G.
1978-01-01
The paper examines models of human reaction to the motions typically experienced on short-haul aircraft flights. Data are taken on the regularly scheduled flights of four commercial airlines - three airplanes and one helicopter. The data base consists of: (1) a series of motion recordings distributed over each flight, each including all six degrees of freedom of motion; temperature, pressure, and noise are also recorded; (2) ratings of perceived comfort and satisfaction from the passengers on each flight; (3) moment-by-moment comfort ratings from a test subject assigned to each airplane; and (4) overall comfort ratings for each flight from the test subjects. Regression models are obtained for prediction of rated comfort from rms values for six degrees of freedom of motion. It is shown that the model C = 2.1 + 17.1 T + 17.2 V (T = transverse acceleration, V = vertical acceleration) gives a good fit to the airplane data but is less acceptable for the helicopter data.
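The reported regression model is straightforward to evaluate; the rms acceleration values below are hypothetical, chosen only to illustrate the near-equal weight of the two acceleration terms:

```python
def comfort_rating(t_rms, v_rms):
    """Predicted comfort rating C = 2.1 + 17.1*T + 17.2*V, where T and V are
    rms transverse and vertical accelerations (units as used in the study);
    higher C means a less comfortable ride."""
    return 2.1 + 17.1 * t_rms + 17.2 * v_rms

# hypothetical rms accelerations for a smooth vs. a rougher flight segment
smooth = comfort_rating(0.01, 0.02)
rough = comfort_rating(0.05, 0.08)
print(f"smooth segment: C = {smooth:.2f}, rough segment: C = {rough:.2f}")
```

Because the two coefficients are nearly identical, the model effectively responds to the sum of transverse and vertical rms acceleration, which is why it fits fixed-wing data well but transfers poorly to the helicopter's different motion spectrum.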
Bayesian interpretation of Generalized empirical likelihood by maximum entropy
Rochet, Paul
2011-01-01
We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation
Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf
2017-01-01
This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes’ inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of
Theoretical modeling of the subject: Western and Eastern types of human reflexion.
Lefebvre, Vladimir A
2017-12-01
The author puts forth the hypothesis that mental phenomena are connected with thermodynamic properties of large neural networks. A model of the subject with reflexion and capable of meditation is constructed. The processes of reflexion and meditation are presented as a sequence of heat engines. Each subsequent engine compensates for the imperfectness of the preceding engine by performing work equal to the lost available work of the preceding one. The sequence of heat engines is regarded as a chain of the subject's mental images of the self. Each engine can be interpreted as an image of the self that the engine next to it has, and the work performed by engines as the emotions that the subject and his images are experiencing. Two types of meditation are analyzed: the dissolution in nothingness and the union with the Absolute. In the first type, the initial engine is the one that yields heat to the coldest reservoir, and in the second type, the initial engine is the one that takes heat from the hottest reservoir. The main concepts of thermodynamics are reviewed in relation to the process of human reflexion. Copyright © 2017 Elsevier Ltd. All rights reserved.
Double coupling: modeling subjectivity and asymmetric organization in social-ecological systems
Directory of Open Access Journals (Sweden)
David Manuel-Navarrete
2015-09-01
Social-ecological organization is a multidimensional phenomenon that combines material and symbolic processes. However, the coupling between social and ecological subsystems is often conceptualized as purely material, thus reducing the symbolic dimension to its behavioral and actionable expressions. In this paper I conceptualize social-ecological systems as doubly coupled. On the one hand, material expressions of socio-cultural processes affect and are affected by ecological dynamics. On the other hand, coupled social-ecological material dynamics are concurrently coupled with subjective dynamics via coding, decoding, personal experience, and human agency. This second coupling operates across two organizationally heterogeneous dimensions: material and symbolic. Although resilience thinking builds on the recognition of organizational asymmetry between living and nonliving systems, it has overlooked the equivalent asymmetry between ecological and socio-cultural subsystems. Three guiding concepts are proposed to formalize double coupling. The first one, social-ecological asymmetry, expands on past seminal work on ecological self-organization to incorporate reflexivity and subjectivity in social-ecological modeling. Organizational asymmetry is based on the distinction between social rules, which are symbolically produced and changed through human agents' reflexivity and purpose, and biophysical rules, which are determined by functional relations between ecological components. The second guiding concept, conscious power, brings to the fore human agents' distinctive capacity to produce our own subjective identity and the consequences of this capacity for social-ecological organization. The third concept, congruence between subjective and objective dynamics, redefines sustainability as contingent on congruent relations between material and symbolic processes. Social-ecological theories and analyses based on these three guiding concepts would support the
Deformation of log-likelihood loss function for multiclass boosting.
Kanamori, Takafumi
2010-09-01
The purpose of this paper is to study loss functions in multiclass classification. In classification problems, the decision function is estimated by minimizing an empirical loss function, and then the output label is predicted by using the estimated decision function. We propose a class of loss functions which is obtained by a deformation of the log-likelihood loss function. There are four main reasons why we focus on the deformed log-likelihood loss function: (1) this is a class of loss functions which has not been deeply investigated so far; (2) in terms of computation, a boosting algorithm with a pseudo-loss is available to minimize the proposed loss function; (3) the proposed loss functions provide a clear correspondence between the decision functions and conditional probabilities of output labels; (4) the proposed loss functions satisfy the statistical consistency of the classification error rate, which is a desirable property in classification problems. Based on (3), we show that the deformed log-likelihood loss provides a model of mislabeling which is useful as a statistical model of medical diagnostics. We also propose a robust loss function against outliers in multiclass classification based on our approach. The robust loss function is a natural extension of the existing robust loss function for binary classification. A model of mislabeling and a robust loss function are useful to cope with noisy data. Some numerical studies are presented to show the robustness of the proposed loss function. A mathematical characterization of the deformed log-likelihood loss function is also presented. Copyright 2010 Elsevier Ltd. All rights reserved.
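One way to make "deforming the log-likelihood loss" concrete is via the q-logarithm; note this particular deformation is an illustrative stand-in, not necessarily the family studied in the paper:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over class scores
    e = np.exp(z - z.max())
    return e / e.sum()

def log_loss(z, y):
    # ordinary multiclass log-likelihood loss: -log p(y | z)
    return -np.log(softmax(z)[y])

def deformed_log_loss(z, y, q=0.7):
    # q-logarithm deformation: -log_q(p) = -(p**(1-q) - 1)/(1-q).
    # It recovers -log p as q -> 1, and for q < 1 it is bounded above by
    # 1/(1-q) as p -> 0, which is what makes it robust to mislabeled points.
    p = softmax(z)[y]
    return -(p ** (1 - q) - 1) / (1 - q)

z = np.array([2.0, 0.5, -1.0])   # made-up class scores
print(log_loss(z, 0), deformed_log_loss(z, 0))
```

The bounded tail of the deformed loss means a single grossly mislabeled example cannot dominate the empirical risk, mirroring the robustness property the abstract highlights.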
A coupled thermo-hydro-mechanical-damage model for concrete subjected to moderate temperatures
Energy Technology Data Exchange (ETDEWEB)
Bary, B.; Carpentier, O. [CEA Saclay, DEN/DPC/SCCME/LECBA, F-91191 Gif Sur Yvette, (France); Ranc, G. [CEA VALRHO, DEN/DTEC/L2EC/LCEC, F-30207 Bagnols Sur Ceze, (France); Durand, S. [CEA Saclay, DEN/DM2S/SEMT/LM2S, F-91191 Gif Sur Yvette, (France)
2008-07-01
This study focuses on the concrete behavior subjected to moderate temperatures, with a particular emphasis on the transient thermo-hydric stage. A simplified coupled thermo-hydro-mechanical model is developed with the assumption that the gaseous phase is composed uniquely of vapor. Estimations of the mechanical parameters, Biot coefficient and permeability as a function of damage and saturation degree are provided by applying effective-medium approximation schemes. The isotherm adsorption curves are supposed to depend upon both temperature and crack-induced porosity. The effects of damage and parameters linked to transfer (in particular the adsorption curves) on the concrete structure response in the transient phase of heating are then investigated and evaluated. To this aim, the model is applied to the simulation of concrete cylinders with height and diameter of 0.80 m subjected to heating rates of 0.1 and 10 degrees C/min up to 160 degrees C. The numerical results are analyzed, commented on, and compared with experimental ones in terms of water mass loss, temperature, and gas pressure evolutions. A numerical study indicates that some parameters have a greater influence on the results than others, and that certain coupling terms in the mass conservation equation of water may be neglected. (authors)
Improved Likelihood Function in Particle-based IR Eye Tracking
DEFF Research Database (Denmark)
Satria, R.; Sorensen, J.; Hammoud, R.
2005-01-01
In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our ... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using a cascaded classifier trained using AdaBoost and adapted to IR eye images. Experiments show good ...
Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood
Directory of Open Access Journals (Sweden)
Yunquan Song
2014-01-01
Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously, when the model is defined by moment restrictions in which some are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained by all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to carry out confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide some data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.
Simplified likelihood for the re-interpretation of public CMS results
The CMS Collaboration
2017-01-01
In this note, a procedure for the construction of simplified likelihoods for the re-interpretation of the results of CMS searches for new physics is presented. The procedure relies on the use of a reduced set of information on the background models used in these searches which can readily be provided by the CMS collaboration. A toy example is used to demonstrate the procedure and its accuracy in reproducing the full likelihood for setting limits in models for physics beyond the standard model. Finally, two representative searches from the CMS collaboration are used to demonstrate the validity of the simplified likelihood approach under realistic conditions.
A vector autoregressive model for electricity prices subject to long memory and regime switching
International Nuclear Information System (INIS)
Haldrup, Niels; Nielsen, Frank S.; Nielsen, Morten Oerregaard
2010-01-01
A regime dependent VAR model is suggested that allows long memory (fractional integration) in each of the observed regime states as well as the possibility of fractional cointegration. The model is motivated by the dynamics of electricity prices where the transmission of power is subject to occasional congestion periods. For a system of bilateral prices non-congestion means that electricity prices are identical whereas congestion makes prices depart. Hence, the joint price dynamics implies switching between a univariate price process under non-congestion and a bivariate price process under congestion. At the same time, it is an empirical regularity that electricity prices tend to show a high degree of long memory, and thus that prices may be fractionally cointegrated. Analysis of Nord Pool data shows that even though the prices are identical under non-congestion, the prices are not, in general, fractionally cointegrated in the congestion state. Hence, in most cases price convergence is a property following from regime switching rather than a conventional error correction mechanism. Finally, the suggested model is shown to deliver forecasts that are more precise compared to competing models. (author)
Directory of Open Access Journals (Sweden)
Misztal Ignacy
2009-01-01
A semi-parametric non-linear longitudinal hierarchical model is presented. The model assumes that individual variation exists both in the degree of the linear change of performance (slope) beyond a particular threshold of the independent variable scale and in the magnitude of the threshold itself; these individual variations are attributed to genetic and environmental components. During implementation via a Bayesian MCMC approach, threshold levels were sampled using a Metropolis step because their fully conditional posterior distributions do not have a closed form. The model was tested by simulation following designs similar to previous studies on the genetics of heat stress. Posterior means of parameters of interest, under all simulation scenarios, were close to their true values, with the latter always being included in the uncertainty regions, indicating an absence of bias. The proposed models provide flexible tools for studying genotype-by-environment interaction as well as for fitting other longitudinal traits subject to abrupt changes in performance at particular points on the independent variable scale.
Factors Associated with Young Adults’ Pregnancy Likelihood
Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan
2014-01-01
OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849
Applying exclusion likelihoods from LHC searches to extended Higgs sectors
International Nuclear Information System (INIS)
Bechtle, Philip; Heinemeyer, Sven; Staal, Oscar; Stefaniak, Tim; Weiglein, Georg
2015-01-01
LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95 % C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org. (orig.)
Development of a Subject-Specific Foot-Ground Contact Model for Walking.
Jackson, Jennifer N; Hass, Chris J; Fregly, Benjamin J
2016-09-01
Computational walking simulations could facilitate the development of improved treatments for clinical conditions affecting walking ability. Since an effective treatment is likely to change a patient's foot-ground contact pattern and timing, such simulations should ideally utilize deformable foot-ground contact models tailored to the patient's foot anatomy and footwear. However, no study has reported a deformable modeling approach that can reproduce all six ground reaction quantities (expressed as three reaction force components, two center of pressure (CoP) coordinates, and a free reaction moment) for an individual subject during walking. This study proposes such an approach for use in predictive optimizations of walking. To minimize complexity, we modeled each foot as two rigid segments-a hindfoot (HF) segment and a forefoot (FF) segment-connected by a pin joint representing the toes flexion-extension axis. Ground reaction forces (GRFs) and moments acting on each segment were generated by a grid of linear springs with nonlinear damping and Coulomb friction spread across the bottom of each segment. The stiffness and damping of each spring and common friction parameter values for all springs were calibrated for both feet simultaneously via a novel three-stage optimization process that used motion capture and ground reaction data collected from a single walking trial. The sequential three-stage process involved matching (1) the vertical force component, (2) all three force components, and finally (3) all six ground reaction quantities. The calibrated model was tested using four additional walking trials excluded from calibration. With only small changes in input kinematics, the calibrated model reproduced all six ground reaction quantities closely (root mean square (RMS) errors less than 13 N for all three forces, 25 mm for anterior-posterior (AP) CoP, 8 mm for medial-lateral (ML) CoP, and 2 N·m for the free moment) for both feet in all walking trials. The
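A single element of such a spring grid can be sketched as below; the stiffness, damping, and friction values are placeholders for illustration, not the calibrated subject-specific parameters from the study:

```python
import numpy as np

def element_forces(penetration, pen_rate, slip_speed,
                   k=2e5, c=1e3, mu=0.8):
    """Forces from one grid spring: linear stiffness with penetration-scaled
    (nonlinear) damping for the normal force, Coulomb friction for the shear
    force. Parameter values k, c, mu are illustrative placeholders.
    penetration [m], pen_rate [m/s], slip_speed [m/s]; returns (N, N)."""
    if penetration <= 0.0:
        return 0.0, 0.0                       # spring not in ground contact
    # normal force: spring term plus damping scaled by penetration depth,
    # clamped so the ground never pulls the foot downward
    fn = k * penetration + c * penetration * pen_rate
    fn = max(fn, 0.0)
    # Coulomb friction opposes the sliding direction, capped at mu * fn
    ft = -np.sign(slip_speed) * mu * fn
    return fn, ft

fn, ft = element_forces(penetration=0.002, pen_rate=0.05, slip_speed=0.1)
print(f"normal: {fn:.2f} N, friction: {ft:.2f} N")
```

Summing such element forces (and their moments about the segment origin) over the grid under each rigid segment yields the three reaction force components, the CoP coordinates, and the free moment that the calibration stages match against measured ground reaction data.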
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
International Nuclear Information System (INIS)
Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
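A toy numpy sketch of the two routes to the likelihood described above; the data points, standard deviations, and sample size are invented for illustration. One fully correlated systematic error is marginalized either analytically (multivariate Gaussian with matrix inversion) or by Monte Carlo sampling of the systematic error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experiment: 5 data points sharing one common (fully correlated)
# systematic error on top of independent random errors.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # model prediction
sig_r, sig_s = 0.1, 0.2                           # random / systematic sd
e = t + rng.normal(0.0, sig_r, t.size) + rng.normal(0.0, sig_s)

# Conventional likelihood: multivariate Gaussian, requires solving with the
# full experimental covariance (diagonal random part + rank-1 systematic).
cov = np.diag(np.full(t.size, sig_r**2)) + sig_s**2 * np.ones((t.size,) * 2)
d = e - t
L_matrix = np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
    (2.0 * np.pi) ** t.size * np.linalg.det(cov))

# Sampling alternative: marginalize the systematic error by Monte Carlo,
# so only univariate normal densities are needed (no matrix inversion).
K = 200_000
s = rng.normal(0.0, sig_s, K)                     # systematic-error samples
log_pdf = (-0.5 * ((d[None, :] - s[:, None]) / sig_r) ** 2
           - np.log(sig_r * np.sqrt(2.0 * np.pi)))
L_sampled = np.exp(log_pdf.sum(axis=1)).mean()

print(L_matrix, L_sampled)   # the two estimates agree as K grows large
```

This mirrors the equivalence shown in the abstract: sampling the univariate systematic error reproduces the multivariate Gaussian likelihood in the large-sample limit, though (as the authors find) convergence can be slow.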
Unbinned likelihood analysis of EGRET observations
International Nuclear Information System (INIS)
Digel, Seth W.
2000-01-01
We present a newly-developed likelihood analysis method for EGRET data that defines the likelihood function without binning the photon data or averaging the instrumental response functions. The standard likelihood analysis applied to EGRET data requires the photons to be binned spatially and in energy, and the point-spread functions to be averaged over energy and inclination angle. The full width at half maximum of the point-spread function increases by about 40% from on-axis to 30° inclination, and depending on the binning in energy can vary by more than that in a single energy bin. The new unbinned method avoids the loss of information that binning and averaging cause and can properly analyze regions where EGRET viewing periods overlap and photons with different inclination angles would otherwise be combined in the same bin. In the poster, we describe the unbinned analysis method and compare its sensitivity with binned analysis for detecting point sources in EGRET data
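The unbinned (extended) likelihood idea can be sketched in one dimension: the log-likelihood is the sum of log-intensities evaluated at the individual photon positions minus the total expected count, with no spatial binning anywhere. Everything below (background level, source position, PSF width) is a made-up toy, not EGRET specifics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "sky": flat background plus a PSF-smeared point source at x0.
b, x0, sigma, true_amp = 2.0, 5.0, 0.5, 40.0
region = (0.0, 10.0)

def intensity(x, amp):
    """Expected photon density: background + Gaussian point source."""
    return b + amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (
        sigma * np.sqrt(2.0 * np.pi))

def unbinned_loglike(photons, amp):
    """Extended unbinned log-likelihood:
    sum_i log(intensity(x_i)) - integral of the intensity over the region."""
    expected = b * (region[1] - region[0]) + amp
    return np.sum(np.log(intensity(photons, amp))) - expected

# Simulate photons individually -- no binning of the data.
n_b = rng.poisson(b * (region[1] - region[0]))
n_s = rng.poisson(true_amp)
photons = np.concatenate([rng.uniform(*region, n_b),
                          rng.normal(x0, sigma, n_s)])

# Profile the source amplitude; the maximum should land near the truth.
amps = np.linspace(0.0, 100.0, 201)
lls = np.array([unbinned_loglike(photons, a) for a in amps])
best_amp = amps[np.argmax(lls)]
print(best_amp)   # close to the planted source amplitude
```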
Vijayakumar, Vishal; Case, Michelle; Shirinpour, Sina; He, Bin
2017-12-01
Effective pain assessment and management strategies are needed to better manage pain. In addition to self-report, an objective pain assessment system can provide a more complete picture of the neurophysiological basis for pain. In this study, a robust and accurate machine learning approach is developed to quantify tonic thermal pain across healthy subjects into a maximum of ten distinct classes. A random forest model was trained to predict pain scores using time-frequency wavelet representations of independent components obtained from electroencephalography (EEG) data, and the relative importance of each frequency band to pain quantification is assessed. The mean classification accuracy for predicting pain on an independent test subject over a range of 1-10 is 89.45%, the highest among existing state-of-the-art quantification algorithms for EEG. The gamma band is the most important to both intersubject and intrasubject classification accuracy. The robustness and generalizability of the classifier are demonstrated. Our results demonstrate the potential of this tool to be used clinically to help improve chronic pain treatment and establish spectral biomarkers for future pain-related studies using EEG.
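The per-band spectral features are the heart of such a pipeline. A minimal stand-in using a plain periodogram (an assumed 250 Hz sampling rate and canonical band edges, in place of the paper's wavelet representation) shows how a gamma-band power feature can be extracted:

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed for this sketch)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(signal, fs=FS):
    """Mean spectral power of an EEG epoch in each canonical band.
    A simple periodogram stands in for the wavelet time-frequency
    representation used in the study."""
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# A pure 40 Hz test tone should concentrate its power in the gamma band.
t = np.arange(0.0, 2.0, 1.0 / FS)
powers = band_powers(np.sin(2.0 * np.pi * 40.0 * t))
print(max(powers, key=powers.get))   # -> gamma
```

Feature vectors like these (one per component and epoch) would then feed the random forest classifier.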
Nonlinear dynamic modeling of a simple flexible rotor system subjected to time-variable base motions
Chen, Liqiang; Wang, Jianjun; Han, Qinkai; Chu, Fulei
2017-09-01
Rotor systems carried in transportation systems or under seismic excitations are considered to have a moving base. To study the dynamic behavior of flexible rotor systems subjected to time-variable base motions, a general model is developed based on the finite element method and Lagrange's equation. Two groups of Euler angles are defined to describe the rotation of the rotor with respect to the base and that of the base with respect to the ground. It is found that the base rotations cause nonlinearities in the model. To verify the proposed model, a novel test rig which can simulate base angular movement is designed. Dynamic experiments on a flexible rotor-bearing system with base angular motions are carried out. Based on these, numerical simulations are conducted to further study the dynamic response of the flexible rotor under harmonic angular base motions. The effects of base angular amplitude, rotating speed and base frequency on response behaviors are discussed by means of FFT, waterfall and frequency response curves and orbits of the rotor. The FFT and waterfall plots of the disk horizontal and vertical vibrations are marked by multiples of the base frequency and by sum and difference tones of the rotating frequency and the base frequency. Their amplitudes increase remarkably when they meet the whirling frequencies of the rotor system.
Damage and failure modeling of lotus-type porous material subjected to low-cycle fatigue
Directory of Open Access Journals (Sweden)
J. Kramberger
2016-01-01
The investigation of the low-cycle fatigue behaviour of lotus-type porous material is presented in this paper. Porous materials exhibit some unique features which are useful for a number of various applications. This paper evaluates a numerical approach for determining damage initiation and evolution of lotus-type porous material with computational simulations, where the considered computational models have different pore topology patterns. The low-cycle fatigue analysis was performed by using a damage evolution law. The damage state was calculated and updated based on the inelastic hysteresis energy for the stabilized cycle. Degradation of the elastic stiffness was modeled using a scalar damage variable. In order to examine the crack propagation path, finite elements with severe damage were deleted and removed from the mesh during the simulation. The direct cyclic analysis capability in Abaqus/Standard was used for the low-cycle fatigue analysis to obtain the stabilized response of a model subjected to periodic loading. The computational results provide a qualitative understanding of the influence of pore topology on low-cycle fatigue under transverse loading in relation to pore orientation.
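The damage bookkeeping can be sketched as follows. The power-law dependence of the damage increment on the stabilized cycle's inelastic hysteresis energy mirrors the form used by direct cyclic low-cycle fatigue analyses; the constants and modulus here are illustrative placeholders:

```python
def update_damage(D, dW, c1=1e-3, c2=1.2):
    """Advance the scalar damage variable by one load cycle:
    dD/dN = c1 * dW**c2, driven by the inelastic hysteresis energy dW
    of the stabilized cycle (power-law form; constants illustrative)."""
    return min(D + c1 * dW ** c2, 1.0)        # D = 1 -> element has failed

E0 = 210e3                    # undamaged Young's modulus, MPa (illustrative)
D, cycles_to_failure = 0.0, None
for n in range(1, 1001):
    D = update_damage(D, dW=5.0)              # constant hysteresis energy
    if D >= 1.0:
        cycles_to_failure = n                 # element would be deleted here
        break

E = (1.0 - D) * E0            # elastic stiffness degraded by scalar damage
print(cycles_to_failure, E)   # -> 145 0.0
```

In a mesh, elements reaching D = 1 are removed, which is how the crack propagation path emerges in the simulations described above.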
Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S
2016-12-01
The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (μCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for μCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against μCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.
Directory of Open Access Journals (Sweden)
Li Dawei
2014-08-01
Servicing is applied periodically in practice with the aim of restoring the system state and prolonging the lifetime. It is generally seen as an imperfect maintenance action which has a major influence on the maintenance strategy. In order to model the maintenance effect of servicing, this study analyzes the deterioration characteristics of a system under scheduled servicing. The deterioration model is then established from the failure mechanism via a compound Poisson process. On the basis of the system damage value and the failure mechanism, a failure rate refresh factor is proposed to describe the maintenance effect of servicing. A maintenance strategy is developed which combines the benefits of scheduled servicing and preventive maintenance. An optimization model is then given to determine the optimal servicing period and preventive maintenance time, with the objective of minimizing the system's expected life-cycle cost per unit time under a constraint on system survival probability for the duration of the mission time. Subject to the mission time constraint, the strategy controls the ability to accomplish the mission at any time, ensuring high dependability. An example of a water pump rotor under scheduled servicing is introduced to illustrate the failure rate refresh factor and the proposed maintenance strategy. Compared with traditional methods, the numerical results show that the failure rate refresh factor describes the maintenance effect of servicing more intuitively and objectively. They also demonstrate that this maintenance strategy can prolong the lifetime, reduce the total lifetime maintenance cost and guarantee the dependability of the system.
Kadum, Hawwa; Rockel, Stanislav; Holling, Michael; Peinke, Joachim; Cal, Raul Bayon
2017-11-01
The wake behind a floating model horizontal axis wind turbine during pitch motion is investigated and compared to a fixed wind turbine wake. An experiment is conducted in an acoustic wind tunnel where hot-wire data are acquired at five downstream locations. At each downstream location, a rake of 16 hot-wires was used, with probes placed at increasing radial distances along the vertical, horizontal, and 45° diagonal directions. In addition, the effect of turbulence intensity on the floating wake is examined by subjecting the wind turbine to different inflow conditions controlled through three settings in the wind tunnel grid (one passive and two active protocols), thus varying the intensity. The wakes are inspected through statistics of the point measurements, considering the various length and time scales. The wake characteristics of the floating turbine are compared to those of a fixed turbine to uncover its features, which is relevant as the demand for exploiting deep waters for wind energy increases.
Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.
Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A
2018-04-01
A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model comprises a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only studies using dim light were included; these provided performance measures during CR and FD protocols, with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared errors of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to differ between the measures, with KSS having a stronger homeostatic influence compared with the objective measures of performance. Using FD data in addition to CR data allows us to challenge the model in conditions of both acute sleep deprivation and structured circadian misalignment, ensuring that the role of the circadian and homeostatic drives in performance is properly captured.
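A drastically simplified stand-in for such a model is the classic combination of a homeostatic and a circadian drive mapped onto the 1-9 KSS scale; the weights, time constant, and circadian phase below are invented for illustration and are not the fitted parameters of the arousal dynamics model:

```python
import numpy as np

def predict_kss(hours_awake, clock_hour, w_h=0.6, w_c=0.4):
    """Toy two-process sleepiness score on the 1-9 Karolinska scale:
    a saturating homeostatic drive plus a sinusoidal circadian drive.
    Weights, time constant, and phase are illustrative only."""
    H = 1.0 - np.exp(-hours_awake / 18.0)            # homeostatic pressure
    # Circadian sleepiness drive, maximal near the ~04:00 alertness trough.
    C = 0.5 * (1.0 + np.cos(2.0 * np.pi * (clock_hour - 4.0) / 24.0))
    return 1.0 + 8.0 * (w_h * H + w_c * C)

# Sleepiness grows with time awake at a fixed circadian phase, and is
# higher near the circadian trough for a fixed time awake.
print(predict_kss(2, 12), predict_kss(20, 12), predict_kss(10, 4))
```

Separating the two weighted drives in this way is what lets a model attribute, as the paper does, a stronger homeostatic influence to KSS than to the objective performance measures.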
Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N
2016-06-14
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
Corporate governance effect on financial distress likelihood: Evidence from Spain
Directory of Open Access Journals (Sweden)
Montserrat Manzaneque
2016-01-01
The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half of them classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with other previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and the proportion of independent directors on business failure likelihood is similar to that exerted in more extreme situations. The results go one step further, offering evidence of a negative relationship between board size and the likelihood of financial distress. This result is interpreted as board size creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on financial distress likelihood in the Spanish context. It is argued that large shareholders are passive as regards enhanced monitoring of management and, alternatively, do not have enough incentives to hold back financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements have been carried out with respect to corporate governance, and where there was previously no empirical evidence on this issue.
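For 1:1 matched pairs, the conditional logistic likelihood reduces to a logistic model on within-pair covariate differences with no intercept. A self-contained numpy sketch on synthetic data (the covariates and coefficients are invented for illustration, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic 1:1 matched pairs (distressed vs healthy firm) with three
# hypothetical governance covariates; beta_true is an assumption of
# this sketch, not an estimate from the study.
beta_true = np.array([-0.8, -1.2, 0.6])
n_pairs, dim = 300, 3
xa = rng.normal(0.0, 1.0, (n_pairs, dim))
xb = rng.normal(0.0, 1.0, (n_pairs, dim))
a_is_case = rng.random(n_pairs) < sigmoid((xa - xb) @ beta_true)
diff = np.where(a_is_case[:, None], xa - xb, xb - xa)  # case minus control

# Conditional logistic likelihood for 1:1 matching:
#   log L(beta) = sum over pairs of log sigmoid(beta . (x_case - x_control))
beta = np.zeros(dim)
for _ in range(500):                      # plain gradient ascent
    p = sigmoid(diff @ beta)
    beta += 0.05 * diff.T @ (1.0 - p) / n_pairs

print(np.round(beta, 2))   # recovers the signs of beta_true
```

The pair structure cancels any intercept and all matched-on characteristics, which is why matched-pairs designs like the one above need the conditional (not ordinary) logit.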
Fiber Bragg grating-based performance monitoring of a slope model subjected to seepage
Zhu, Hong-Hu; Shi, Bin; Yan, Jun-Fan; Zhang, Jie; Zhang, Cheng-Cheng; Wang, Bao-Jun
2014-09-01
In the past few years, fiber optic sensing technologies have played an increasingly important role in the health monitoring of civil infrastructures. These innovative sensing technologies have recently been successfully applied to the performance monitoring of a series of geotechnical structures. Fiber optic sensors have shown many unique advantages in comparison with conventional sensors, including immunity to electrical noise, higher precision and improved durability and embedding capabilities; fiber optic sensors are also smaller in size and lighter in weight. In order to explore the mechanism of seepage-induced slope instability, a small-scale 1 g model test of a soil slope has been performed in the laboratory. During the model's construction, specially fabricated sensing fibers containing nine fiber Bragg grating (FBG) strain sensors connected in series were horizontally and vertically embedded into the soil mass. The surcharge load was applied on the slope crest, and the groundwater level inside the slope was subsequently varied using two water chambers installed beside the slope model. The fiber optic sensing data of the vertical and horizontal strains within the slope model were automatically recorded by an FBG interrogator and a computer during the test. The test results are presented and interpreted in detail. It is found that the gradually accumulated deformation of the slope model subjected to seepage can be accurately captured by the quasi-distributed FBG strain sensors. The test results also demonstrate that the slope stability is significantly affected by groundwater seepage, which agrees well with the results calculated using finite element and limit equilibrium methods. The relationship between the strain measurements and the safety factors is further analyzed, together with a discussion on the residual strains. The performance evaluation of a soil slope using fiber optic strain sensors is proved to be a potentially effective
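The conversion from an FBG Bragg-wavelength shift to strain is a one-line formula; the sketch below uses the standard gauge factor of roughly 0.78 for silica fiber and, as a simplification, ignores temperature cross-sensitivity:

```python
def fbg_strain(wl_nm, wl_base_nm, gauge_factor=0.78):
    """Axial strain from an FBG Bragg-wavelength shift:
    delta_lambda / lambda = (1 - p_e) * strain, with photo-elastic
    coefficient p_e ~ 0.22 for silica (gauge factor ~ 0.78).
    Temperature cross-sensitivity is ignored in this sketch."""
    return (wl_nm - wl_base_nm) / (wl_base_nm * gauge_factor)

# Near 1550 nm the rule of thumb is ~1.2 pm per microstrain, so a
# 1.21 nm shift corresponds to roughly 1000 microstrain.
eps = fbg_strain(1551.21, 1550.0)
print(round(eps * 1e6))   # microstrain
```

An interrogator applies exactly this kind of conversion to each of the nine gratings to yield the quasi-distributed strain profile along the embedded fiber.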
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-03-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
Eryilmaz, Ali
2011-01-01
The aim of this study is to develop and test a subjective well-being model for adolescents in high school. A total of 326 adolescents in high school (176 female and 150 male) participated in this study. The data was collected by using the general needs satisfaction questionnaire, which is for the adolescents' subjective well-being, and determining…
Directory of Open Access Journals (Sweden)
Juri Taborri
2015-09-01
Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject procedure to avoid the intra-subject one in two-, four- and six-gait-phase models in pediatric subjects. The inter-subject procedure consists of the identification of a standardized parameter set to adapt the model to measurements. We tested the inter-subject procedure both on scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of the classifiers was evaluated with the Receiver Operating Characteristic. The results showed goodness from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMMs for gait-phase recognition and support their implementation to control exoskeletons for the pediatric population.
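The decoding step of such an HMM classifier is the Viterbi algorithm. The sketch below runs it on a two-phase (stance/swing) model with a binarized angular-velocity observation, using invented probabilities in place of the standardized inter-subject parameter set:

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely state path of a discrete HMM (log domain)."""
    T, N = len(obs), log_pi.size
    delta = np.zeros((T, N))                # best path log-probabilities
    psi = np.zeros((T, N), dtype=int)       # best-predecessor pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):          # backtrack
        path[t] = psi[t + 1][path[t + 1]]
    return path

# Two-phase gait model (stance=0, swing=1) with a binarized shank
# angular-velocity observation (0: low, 1: high). Values illustrative.
A = np.array([[0.9, 0.1], [0.2, 0.8]])      # sticky phase transitions
B = np.array([[0.8, 0.2], [0.1, 0.9]])      # swing -> high angular velocity
pi = np.array([0.5, 0.5])
obs = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]
print(viterbi(obs, np.log(A), np.log(B), np.log(pi)))
```

The inter-subject idea in the abstract amounts to fixing A, B, and the observation mapping to a standardized set instead of retraining them per child.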
A biclustering algorithm for binary matrices based on penalized Bernoulli likelihood
Lee, Seokho; Huang, Jianhua Z.
2013-01-01
We propose a new biclustering method for binary data matrices using the maximum penalized Bernoulli likelihood estimation. Our method applies a multi-layer model defined on the logits of the success probabilities, where each layer represents a
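The objective being maximized can be sketched directly: a Bernoulli log-likelihood on logits built from additive bicluster layers, minus a penalty (an L1 penalty on the layer magnitudes is assumed here for illustration; the paper's exact penalty may differ):

```python
import numpy as np

def penalized_bernoulli_loglik(X, layers, lam=0.1):
    """Penalized Bernoulli log-likelihood of a binary matrix X under a
    multi-layer logit model Theta = sum_k d_k * u_k v_k^T, where u_k and
    v_k are 0/1 row/column membership vectors of bicluster k."""
    theta = sum(d * np.outer(u, v) for d, u, v in layers)
    loglik = np.sum(X * theta - np.logaddexp(0.0, theta))  # log(1 + e^theta)
    return loglik - lam * sum(abs(d) for d, _, _ in layers)

rng = np.random.default_rng(3)
# Plant a dense 4x6 block of ones inside an 8x10 sparse noise matrix.
X = (rng.random((8, 10)) < 0.1).astype(float)
X[:4, :6] = (rng.random((4, 6)) < 0.9).astype(float)

u = np.zeros(8); u[:4] = 1
v = np.zeros(10); v[:6] = 1
baseline = [(-2.2, np.ones(8), np.ones(10))]        # background logit ~ p=0.1
with_cluster = baseline + [(4.4, u, v)]             # adds the planted block
print(penalized_bernoulli_loglik(X, with_cluster) >
      penalized_bernoulli_loglik(X, baseline))      # -> True
```

A fitting algorithm would search over the memberships u_k, v_k and magnitudes d_k to maximize this penalized criterion.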
A path model of sarcopenia on bone mass loss in elderly subjects.
Rondanelli, M; Guido, D; Opizzi, A; Faliva, M A; Perna, S; Grassi, M
2014-01-01
Aging is associated with decreases in muscle mass, strength, power (sarcopenia) and bone mineral density (BMD). The aims of this study were to investigate in the elderly the role of sarcopenia in BMD loss by a path model including adiposity, inflammation, and malnutrition associations. Body composition and BMD were measured by dual X-ray absorptiometry in 159 elderly subjects (52 male/107 female; mean age 80.3 yrs). Muscle strength was determined with a dynamometer. Serum albumin and PCR were also assessed. Structural equations examined the effect of sarcopenia (measured by Relative Skeletal Muscle Mass, Total Muscle Mass, Handgrip, Muscle Quality Score) on osteoporosis (measured by Vertebral and Femoral T-scores) in a latent variable model including adiposity (measured by Total Fat Mass, BMI, Gynoid/Android Fat), inflammation (PCR), and malnutrition (serum albumin). Sarcopenia assumed the role of a moderator in the adiposity-osteoporosis relationship. Specifically, as sarcopenia increases, the adiposity-osteoporosis relationship (β: -0.58) decreases in intensity. Adiposity also influences sarcopenia (β: -0.18). Malnutrition affects the inflammatory and adiposity states (β: +0.61 and β: -0.30, respectively), while not influencing sarcopenia. Thus, adiposity has a role as a mediator of the effect of malnutrition on both sarcopenia and osteoporosis. Malnutrition decreases adiposity; decreasing adiposity, in turn, increases sarcopenia and osteoporosis. This study suggests that, in a group of elderly subjects, sarcopenia affects the link between adiposity and BMD but does not have a pure independent effect on osteoporosis.
Bonne, F.; Bonnay, P.; Girard, A.; Hoa, C.; Lacroix, B.; Le Coz, Q.; Nicollet, S.; Poncet, J.-M.; Zani, L.
2017-12-01
Supercritical helium loops at 4.2 K are the baseline cooling strategy of tokamak superconducting magnets (JT-60SA, ITER, DEMO, etc.). These loops work with cryogenic circulators that force a supercritical helium flow through the superconducting magnets so that the temperature stays within the working range all along their length. This paper shows that a supercritical helium loop associated with a saturated liquid helium bath can satisfy temperature constraints in different ways (playing on the bath temperature and on the supercritical flow), but that only one is optimal from an energy point of view (every watt consumed at 4.2 K requires at least 220 W of electrical power). To find the optimal operational conditions, an algorithm capable of minimizing an objective function (energy consumption at 5 bar, 5 K) subject to constraints has been written. This algorithm works with a supercritical loop model realized with the Simcryogenics [2] library. This article describes the model used and the results of the constrained optimization. It will be seen that changes in the operating point of the magnet temperature (e.g., in case of a change in the plasma configuration) involve large changes in the optimal cryodistribution operating point. Recommendations will be made to ensure that the energy consumption is kept as low as possible despite the changing operating point. This work is partially supported by the EUROfusion Consortium through the Euratom Research and Training Programme 2014-2018 under Grant 633053.
Chong, Song Hun
2016-08-09
Geosystems often experience numerous loading cycles. Plastic strain accumulation during repetitive mechanical loads can lead to shear shakedown or continued shear ratcheting; in all cases, volumetric strains diminish as the specimen evolves towards terminal density. Previously suggested models and new functions are identified to fit plastic strain accumulation data. All accumulation models are formulated to capture terminal density (volumetric strain) and either shakedown or ratcheting (shear strain). Repetitive vertical loading tests under zero lateral strain conditions are conducted using three different sands packed at initially low and high densities. Test results show that plastic strain accumulation for all sands and density conditions can be captured in the same dimensionless plot defined in terms of the initial relative density, terminal density, and ratio between the amplitude of the repetitive load and the initial static load. This observation allows us to advance a simple but robust procedure to estimate the maximum one-dimensional settlement that a foundation could experience if subjected to repetitive loads. © 2016, Canadian Science Publishing. All rights reserved.
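The terminal-density and shakedown/ratcheting behaviors described above can be captured by simple accumulation functions of the cycle count; the saturating and power-law forms below are illustrative choices, not the specific models fitted in the study:

```python
import numpy as np

def volumetric_strain(N, ez_terminal, k=0.05):
    """Volumetric strain accumulation saturating at the terminal-density
    value as cycle count N grows (exponential saturation, illustrative)."""
    return ez_terminal * (1.0 - np.exp(-k * N))

def shear_strain(N, g1, alpha):
    """Shear strain accumulation g1 * N**alpha: small alpha approximates
    shakedown (growth rate dies out), alpha near 1 is steady ratcheting."""
    return g1 * N ** alpha

N = np.arange(1, 2001)
ez = volumetric_strain(N, ez_terminal=0.02)        # -> terminal density
shakedown = shear_strain(N, 0.001, 0.1)            # bounded shear growth
ratcheting = shear_strain(N, 0.001, 0.9)           # unbounded shear growth
print(ez[-1], shakedown[-1], ratcheting[-1])
```

Fitting such functions to repetitive-load test data is what enables the simple settlement estimate the abstract advances: the predicted volumetric strain at terminal density bounds the long-term one-dimensional settlement.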
Energy Technology Data Exchange (ETDEWEB)
Bakry, A. [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia); Abdulrhmann, S. [Jazan University, 114, Department of Physics, Faculty of Sciences (Saudi Arabia); Ahmed, M., E-mail: mostafa.farghal@mu.edu.eg [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia)
2016-06-15
We theoretically model the dynamics of semiconductor lasers subject to double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under optical feedback to account for this type of double-reflector feedback. We examine the influence of adding the second reflector on the dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector attracts the periodic and period-doubling oscillations and the chaos induced by the first reflector to a route-to-continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaotic state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
Anticipating cognitive effort: roles of perceived error-likelihood and time demands.
Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F
2017-11-13
Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perceptions of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate a small amount of hard work (i.e., low time requirement, but high error-likelihood) or a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall pattern of judgments similar to that of Experiments 1 through 3. However, both judgments of error-likelihood and time demands similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with consideration of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.
Dimension-Independent Likelihood-Informed MCMC
Cui, Tiangang; Law, Kody; Marzouk, Youssef
2015-01-01
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation is used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2015-11-05
Soft tissue artifacts (STA) distort marker-based knee kinematics measures and make them difficult to use in clinical practice. None of the current methods designed to compensate for STA is suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS®). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, internal-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology. Copyright © 2015 Elsevier Ltd. All rights reserved.
A subjective supply–demand model: the maximum Boltzmann/Shannon entropy solution
International Nuclear Information System (INIS)
Piotrowski, Edward W; Sładkowski, Jan
2009-01-01
The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a
How to Improve the Likelihood of CDM Approval?
DEFF Research Database (Denmark)
Brandt, Urs Steiner; Svendsen, Gert Tinggaard
2014-01-01
How can the likelihood of Clean Development Mechanism (CDM) approval be improved in the face of institutional shortcomings? To answer this question, we focus on the three institutional shortcomings of income sharing, risk sharing and corruption prevention concerning afforestation/reforestation (A/R). Furthermore, three main stakeholders are identified, namely investors, governments and agents, in a principal-agent model regarding monitoring and enforcement capacity. Developing countries, such as those in West Africa, have, despite huge potential, not yet been integrated in A/R CDM projects. Remote sensing, however...
Estimating likelihood of future crashes for crash-prone drivers
Subasish Das; Xiaoduan Sun; Fan Wang; Charles Leboeuf
2015-01-01
At-fault crash-prone drivers are usually considered as the high risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research has conducted an exploratory data analysis based on the driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the a...
Modeling Double Subjectivity for Gaining Programmable Insights: Framing the Case of Uber
Directory of Open Access Journals (Sweden)
Loretta Henderson Cheeks
2017-09-01
Full Text Available The Internet is the premier platform that enables the emergence of new technologies. Online news is unstructured narrative text that embeds facts, frames, and amplification that can influence societal attitudes about technology adoption. Online news sources carry voluminous amounts of news, reach significantly large audiences, and have no geographical or time boundaries. The interplay of complex and dynamical forces among authors and readers allows progressive emergent and latent properties to exhibit. Our concept of “double subjectivity” provides a new paradigm for exploring complementary programmable insights into deeply buried meanings in a system. The ability to understand internal embeddedness in a large collection of related articles is beyond the reach of existing computational tools, and is hence left to human readers with unscalable results. This paper uncovers the potential to utilize advanced machine learning in a new way to automate the understanding of implicit structures and their associated latent meanings to give an early human-level insight into emergent technologies, with a concrete example of “Uber”. This paper establishes the new concept of double subjectivity as an instrument for large-scale machining of unstructured text and introduces a social influence model for the discovery of distinct pathways into emerging technology, and hence an insight. The programmable insight reveals early spatial and temporal opinion-shift monitoring in complex networks in a structured way for computational treatment and visualization.
CFD modeling of hydro-biochemical behavior of MSW subjected to leachate recirculation.
Feng, Shi-Jin; Cao, Ben-Yi; Li, An-Zheng; Chen, Hong-Xin; Zheng, Qi-Teng
2018-02-01
The most commonly used method of operating landfills more sustainably is to promote rapid biodegradation and stabilization of municipal solid waste (MSW) by leachate recirculation. The present study is an application of computational fluid dynamics (CFD) to the 3D modeling of leachate recirculation in bioreactor landfills using vertical wells. The objective is to model and investigate the hydrodynamic and biochemical behavior of MSW subject to leachate recirculation. The results indicate that the maximum recirculated leachate volume can be reached when vertical wells are set at the upper middle part of a landfill (H_W/H_T = 0.4), and increasing the screen length can be more helpful in enlarging the influence radius than increasing the well length (an increase in H_S/H_W from 0.4 to 0.6 results in an increase in influence radius from 6.5 to 7.7 m). The time to reach steady state of leachate recirculation decreases with the increase in pressure head; however, the time for leachate to drain away increases with the increase in pressure head. It also showed that a methanogenic biomass inoculum of 1.0 kg/m³ can accelerate the volatile fatty acid depletion and increase the peak depletion rate to 2.7 × 10⁻⁶ kg/m³/s. The degradation-induced void change parameter exerts an influence on the processes of MSW biodegradation because a smaller parameter value results in a greater increase in void space.
Subjective element of events as іntentional basis of the discursive model of communication
Directory of Open Access Journals (Sweden)
Y. S. Kravtsov
2016-03-01
Full Text Available The article reveals the phenomenological aspects of the communication process and argues the importance of a more detailed analysis of developments in the subjective foundation of the information space. A new approach to communication is associated with a new look at the world and puts new emphasis on the methodology of knowledge. According to the author, the specifics of the postmodern social situation are due to the fact that society has become transparent through a radical change in the technology of mass communication. This induces a derealization of communicative reality: the signal transmission rate within the range of our planet can be equated to instantaneous, and the distance and time of the message are extremely small, approaching zero. Therefore, the conclusion is that there is another, more sophisticated communicative community, a virtual one, which establishes a fundamentally different circuit, or model, of the communicative process. It is shown that, in contrast to actual reality, which expresses integrity, stability and completeness, virtual reality is the source of difference and diversity. Virtuality is thus a phenomenon immanent in the very structure of existence, embodying creative, opportunity-generating activity. The article reveals that virtual reality is based on the principle of «feedback», which allows for the maximum entry of a person into the information space. The scale of the phenomenon of virtual manifestations in social and individual life suggests the «virtualization» of society and encourages researchers to develop a new understanding of social reality in its relation to the reality of the virtual. At the same time, the virtual model is the result of the synthesis of human sensory and mental abilities taken in their generality, the idea of the correlation between man and objects in the world. Hence, this model has a priori importance, because it incorporates all rationally isolated situations where it may apply. Innovation
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Subtracting and Fitting Histograms using Profile Likelihood
D'Almeida, F M L
2008-01-01
It is known that many interesting signals expected at the LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated from Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% confidence level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show a good performance and avoid the problem of negative values when subtracting histograms.
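The per-bin construction described above can be illustrated with a minimal profile-likelihood sketch. All numbers and names here are invented for illustration and this is not the paper's implementation: each bin has an observed count n, a Monte Carlo background count m with MC-to-data luminosity ratio tau, and the background is profiled out in closed form; restricting the signal to s ≥ 0 is what avoids the negative yields of plain histogram subtraction.

```python
import math

def profiled_ll(s, n, m, tau):
    # Poisson model: n ~ Pois(s + b) in data, m ~ Pois(tau * b) in MC,
    # with tau the MC-to-data luminosity ratio. For fixed signal s the
    # background b that maximizes the likelihood solves a quadratic.
    B = (1 + tau) * s - n - m
    b = (-B + math.sqrt(B * B + 4 * (1 + tau) * m * s)) / (2 * (1 + tau))
    mu_d, mu_mc = s + b, tau * b
    # log-likelihood up to constants independent of (s, b)
    return (n * math.log(mu_d) - mu_d) + (m * math.log(mu_mc) - mu_mc)

def subtract_bin(n, m, tau, s_max=50.0, steps=2000):
    # Scan s >= 0 on a grid and return the profile-likelihood signal
    # estimate; the s >= 0 constraint avoids negative subtracted yields.
    return max((s_max * i / steps for i in range(steps + 1)),
               key=lambda s: profiled_ll(s, n, m, tau))

s_hat = subtract_bin(n=25, m=30, tau=2.0)   # interior maximum at n - m/tau = 10
```

With ample counts the estimate coincides with naive subtraction n − m/tau; the constrained profile likelihood only differs (and matters) in the low-statistics bins.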
Gutenko, Gregory
A study examined the responses of Canadian and American subjects in their approval of, and attraction to, specific television and film characters exhibiting aggressive behavior, and in their evaluation of the realism and saliency of the characters and situations observed. Subjects, undergraduate students at the University of Windsor in Windsor,…
[How our subjective coherence is built? The model of cognitive dissonance].
Naccache, Lionel; El Karoui, Imen; Salti, Moti; Chammat, Mariam; Maillet, Mathurin; Allali, Sébastien
2015-01-01
Our conscious, subjective discourse demonstrates a temporal coherence that distinguishes it from the many unconscious cognitive representations explored by cognitive neuroscience. This subjective coherence--particularly its dynamics--can be modified in certain psychiatric syndromes including a "dissociative state" (e.g., schizophrenia), or in several neuropsychiatric disorders (e.g., frontal lobe syndrome). The medical and environmental consequences of these changes are significant. However, the psychological and neural mechanisms of this fundamental property remain largely unknown. We explored the dynamics of subjective coherence in an experimental paradigm (the "free choice" paradigm) originating from the field of cognitive dissonance. Using a series of behavioral experiments conducted in healthy volunteers, we have discovered a key role for episodic memory in the preference-change process that follows simply making a choice. These results highlight the importance of conscious memory in the construction of subjective coherence, of which the subjects do not yet seem to be the conscious agents.
Numerical modeling of liquefaction-induced failure of geo-structures subjected to earthquakes
International Nuclear Information System (INIS)
Rapti, Ioanna
2016-01-01
and coupled hydro-mechanical conditions. Two criteria are used to define the onset of the structure's collapse. The second-order work is used to describe the local instability at specific instants of the ground motion, while the estimation of a local safety factor is proposed by calculating the soil's residual strength. Concerning the failure mode, the effect of excess pore water pressure is of great importance, as an otherwise stable structure-foundation system in dry and fully drained conditions becomes unstable during coupled analysis. Finally, a levee-foundation system is simulated and the influence of the soil's permeability, the depth of the liquefiable layer, as well as the characteristics of the input ground motion on the liquefaction-induced failure is evaluated. For the current levee model, the induced damage level (i.e. settlements and deformations) is strongly related to both liquefaction onset and the dissipation of excess pore water pressure in the foundation. A circular collapse surface is generated inside the liquefied region and extends towards the crest on both sides of the levee. Even so, when the liquefied layer is situated at depth, no effect on the levee response is found. This research work can be considered as a reference case study for the seismic assessment of embankment-type structures subjected to earthquakes and provides a high-performance computational framework accessible to engineers. (author)
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
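The analytically tractable pure-birth model mentioned in the abstract admits a closed-form MLE, which makes a convenient sanity check before reaching for MCEM-style machinery. The sketch below (simulation setup, parameter values, and names are ours, not the paper's) exploits the fact that in a linear pure-birth process the waiting times are independent exponentials:

```python
import random

def pure_birth_mle(pop_sizes, wait_times):
    # In a linear pure-birth process the waiting time at population size
    # i is Exponential(k * i), so the MLE of the per-capita birth rate is
    # k_hat = (number of events) / sum_i (i * t_i).
    return len(wait_times) / sum(i * t for i, t in zip(pop_sizes, wait_times))

random.seed(1)
k_true, pop = 0.5, 1
sizes, times = [], []
for _ in range(500):                 # simulate 500 consecutive births
    sizes.append(pop)
    times.append(random.expovariate(k_true * pop))
    pop += 1

k_hat = pure_birth_mle(sizes, times)  # should land near k_true = 0.5
```

For models where no such closed form exists and trajectories consistent with the data are rare, this direct approach breaks down, which is exactly the regime the MCEM2 method targets.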
International Nuclear Information System (INIS)
Aoyagi, Y.; Yamada, K.; Takahashi, T.
1981-01-01
With a view to investigating the earthquake resistance characteristics of reinforced concrete containments, two cylindrical models with a three-way system of bars were made and loaded laterally up to failure, with or without internal pressure, simulating the conditions in which containments are subjected to earthquake forces during a simultaneous LOCA or during normal operation. The main conclusions obtained within the limits of the experiments are as follows. (1) Stresses in reinforcements in three-way reinforced concrete plate elements can reasonably be estimated by the equations proposed by Baumann. It is, however, necessary to take into consideration the contribution of concrete between cracks to the deformation in order to accurately estimate the average strains in the plate elements, applying such a formula as the CEB formula as reformed by the authors. (2) The strength capacity of three-way reinforced concrete containments against lateral forces combined with internal pressure is somewhat inferior to that of an orthogonally reinforced one if compared on the condition that the volumetric reinforcement ratios are the same for the two reinforcement arrangements. However, three-way reinforcement improves initial shear rigidity as well as ultimate horizontal deformability under lateral forces. (3) The ability of a three-way reinforced concrete containment to absorb strain energy in the range of large deformations is superior to that of an orthogonally reinforced one. The equivalent viscous damping coefficient for the former is markedly larger than that for the latter, especially at the increased deformational stages. This experimental evidence suggests that a three-way system of reinforcement may constitute one of the prospective measures to improve the earthquake resistance of reinforced concrete containments. (orig./HP)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
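For Poisson-distributed photon counts, the likelihood ratio reduces to a difference in Cash's C statistic between candidate models. A minimal sketch (the bin counts and the two candidate models are invented for illustration; for nested, fitted models the difference in C plays the role of the likelihood ratio statistic):

```python
import math

def cash_c(counts, model):
    # Cash (1979) C statistic for Poisson-distributed counts, dropping
    # the model-independent ln(n_i!) term: C = 2 * sum(e_i - n_i * ln(e_i)).
    return 2.0 * sum(e - n * math.log(e) for n, e in zip(counts, model))

counts = [3, 5, 9, 14, 22]                  # observed photons per bin
flat = [10.6] * 5                           # constant-rate model (mean rate)
rising = [3.0, 6.0, 10.0, 15.0, 19.0]       # rising-rate model

# Lower C means a better fit; here the rising model wins by delta_c.
delta_c = cash_c(counts, flat) - cash_c(counts, rising)
```

Unlike chi-squared fitting, this statistic needs no binning into large counts, which is why it suits the low-count photon experiments the abstract emphasizes.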
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Abstract Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...
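In the error-free, single-session special case, the ML estimate reduces to a one-dimensional search over the tag count n. The sketch below (frame size and slot counts are invented, and it omits the detection errors and multiple reader sessions that are this paper's actual focus) scores each candidate n by the likelihood of the observed empty/singleton/collision slot counts:

```python
import math

def ml_tag_estimate(L, n_empty, n_single, n_coll, n_max=1000):
    # Framed-slotted ALOHA with frame size L and n tags, each picking a
    # slot uniformly at random; score candidate n by the log-likelihood
    # of the observed slot-outcome counts and return the best one.
    best_n, best_ll = 1, -math.inf
    for n in range(1, n_max + 1):
        p0 = (1 - 1 / L) ** n                      # P(slot empty)
        p1 = n * (1 / L) * (1 - 1 / L) ** (n - 1)  # P(slot singleton)
        pc = max(1.0 - p0 - p1, 1e-12)             # P(slot collision)
        ll = (n_empty * math.log(p0) + n_single * math.log(p1)
              + n_coll * math.log(pc))
        if ll > best_ll:
            best_n, best_ll = n, ll
    return best_n

# Frame of 64 slots: 20 empty, 30 singleton, 14 collision slots observed.
n_hat = ml_tag_estimate(64, 20, 30, 14)
```

The brute-force scan is cheap because the candidate range is small; with detection errors, each slot-outcome probability would additionally be mixed with the error model before taking logs.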
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented......, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient....
Application of the Method of Maximum Likelihood to Identification of Bipedal Walking Robots
Czech Academy of Sciences Publication Activity Database
Dolinský, Kamil; Čelikovský, Sergej
(2017) ISSN 1063-6536 R&D Projects: GA ČR(CZ) GA17-04682S Institutional support: RVO:67985556 Keywords : Control * identification * maximum likelihood (ML) * walking robots Subject RIV: BC - Control Systems Theory Impact factor: 3.882, year: 2016 http://ieeexplore.ieee.org/document/7954032/
DEFF Research Database (Denmark)
Alskär, Oskar; Bagger, Jonatan I; Røge, Rikke M.
2016-01-01
The integrated glucose-insulin (IGI) model is a previously published semimechanistic model that describes plasma glucose and insulin concentrations after glucose challenges. The aim of this work was to use knowledge of physiology to improve the IGI model's description of glucose absorption and ga...... model provides a better description and improves the understanding of dynamic glucose tests involving oral glucose....... and gastric emptying after tests with varying glucose doses. The developed model's performance was compared to empirical models. To develop our model, data from oral and intravenous glucose challenges in patients with type 2 diabetes and healthy control subjects were used together with present knowledge...... glucose absorption was superior to linear absorption regardless of the gastric emptying model applied. The semiphysiological model developed performed better than previously published empirical models and allows better understanding of the mechanisms underlying glucose absorption. In conclusion, our new...
A Comparison of Graded Response and Rasch Partial Credit Models with Subjective Well-Being.
Baker, John G.; Rounds, James B.; Zevon, Michael A.
2000-01-01
Compared two multiple category item response theory models using a data set of 52 mood terms with 713 undergraduate psychology students. Comparative model fit for the Samejima (F. Samejima, 1966) logistic model for graded responses and the Masters (G. Masters, 1982) partial credit model favored the former model for this data set. (SLD)
CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE
International Nuclear Information System (INIS)
Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.
2015-01-01
We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf
Targeted maximum likelihood estimation for a binary treatment: A tutorial.
Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E
2018-04-23
When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods require correct specification of only one of the outcome or exposure models. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods, and it therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had zero probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
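TMLE itself adds a targeting step on top of initial outcome and propensity fits; as a simpler illustration of the double-robust idea described here, the sketch below computes the closely related augmented inverse-probability-weighted (AIPW) estimator on simulated data with one binary confounder (this is not the tutorial's TMLE or its R code; all names and the data-generating process are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
W = rng.integers(0, 2, n)                      # binary confounder
pA = np.where(W == 1, 0.7, 0.3)                # true propensity P(A=1|W)
A = rng.random(n) < pA                         # treatment assignment
Y = 1.0 * A + 2.0 * W + rng.normal(0.0, 1.0, n)   # true treatment effect = 1

# Saturated plug-in models within strata of W (hence correctly specified):
ehat = np.array([A[W == w].mean() for w in (0, 1)])        # propensity
q1 = np.array([Y[(W == w) & A].mean() for w in (0, 1)])    # E[Y|A=1,W]
q0 = np.array([Y[(W == w) & ~A].mean() for w in (0, 1)])   # E[Y|A=0,W]

# AIPW: outcome-model prediction plus an inverse-probability-weighted
# residual correction; consistent if either component model is correct.
e, m1, m0 = ehat[W], q1[W], q0[W]
aipw = np.mean(A * (Y - m1) / e + m1) - np.mean((1 - A) * (Y - m0) / (1 - e) + m0)
```

With both working models correct, the estimate should sit very close to the simulated average treatment effect of 1.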
Dimension-independent likelihood-informed MCMC
Cui, Tiangang
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
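A minimal illustration of the function-space proposals such samplers build on is the preconditioned Crank-Nicolson (pCN) rule, shown here in one dimension under a standard Gaussian prior (a hedged sketch of the discretization-robust idea only; the Hessian information and operator weighting of DILI are omitted):

```python
import numpy as np

def pcn_sample(y, noise_sd, beta=0.5, n_iter=50_000, seed=1):
    """pCN sampler for the posterior with prior N(0, 1) and likelihood
    N(y; x, noise_sd^2). The proposal x' = sqrt(1-beta^2) x + beta xi,
    xi ~ N(0, 1), is prior-reversible, so the acceptance ratio involves
    only the (negative log-) likelihood."""
    rng = np.random.default_rng(seed)
    def neg_loglike(x):
        return 0.5 * (y - x) ** 2 / noise_sd**2
    x, chain = 0.0, np.empty(n_iter)
    for i in range(n_iter):
        prop = np.sqrt(1 - beta**2) * x + beta * rng.normal()
        if rng.random() < np.exp(min(0.0, neg_loglike(x) - neg_loglike(prop))):
            x = prop
        chain[i] = x
    return chain

# Conjugate check: observing y = 1 with unit noise under a N(0, 1) prior
# gives posterior mean y / (1 + noise_sd^2) = 0.5.
chain = pcn_sample(y=1.0, noise_sd=1.0)
```

Because the proposal is well defined with respect to the Gaussian reference measure, the same acceptance rule carries over unchanged as the discretization of an underlying function is refined.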
Reducing the likelihood of long tennis matches.
Barnett, Tristan; Brown, Alan; Pollard, Graham
2006-01-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed, causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative after-effects from such a match, plays the winner of an average or shorter-length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: (1) the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; (2) a final tiebreaker set reduces the length of matches, as currently used in the US Open; (3) a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
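The effect of the 50-40 game can be sketched with a short recursion (a hedged reading of the abstract in which the server must win four points and the receiver three, with no deuce; the exact scoring rules are an assumption, not taken from the paper):

```python
from functools import lru_cache

def game_win_prob(p, server_target=4, receiver_target=4, deuce=True):
    """Probability the server wins a game when winning each point w.p. p.
    Conventional game: both players need 4 points, with deuce at 3-3
    (win two points in a row from deuce). Assumed 50-40 game: server
    needs 4 points, receiver only 3, and the game cannot be extended."""
    q = 1 - p
    @lru_cache(maxsize=None)
    def f(a, b):                                    # server a, receiver b
        if deuce and a == b == server_target - 1:   # deuce: closed form
            return p * p / (1 - 2 * p * q)
        if a == server_target:
            return 1.0
        if b == receiver_target:
            return 0.0
        return p * f(a + 1, b) + q * f(a, b + 1)
    return f(0, 0)
```

Under this reading, at p = 0.5 the conventional game is symmetric (probability 0.5), while the 50-40 game caps every game at six points, which is what shortens matches.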
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. Experiments showed that this method provides much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error was less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
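The correlation step at the heart of time-arrival-difference locating (before any maximum likelihood weighting window is applied) can be sketched as follows; the simulated leak signal and sensor names are illustrative stand-ins, not the paper's data:

```python
import numpy as np

def estimate_delay(s1, s2):
    """Return the lag (in samples) of s2 relative to s1 via the peak of
    their cross-correlation, the core of time-arrival-difference locating."""
    xcorr = np.correlate(s2, s1, mode="full")
    return np.argmax(xcorr) - (len(s1) - 1)

rng = np.random.default_rng(0)
leak = rng.normal(0.0, 1.0, 2000)               # broadband "leak noise"
delay = 37                                      # samples between arrivals
sensor1 = leak + 0.1 * rng.normal(0.0, 1.0, 2000)
sensor2 = np.roll(leak, delay) + 0.1 * rng.normal(0.0, 1.0, 2000)
```

The estimated lag converts to a leak position through the elastic wave speed; the maximum likelihood window described above additionally weights the significant frequencies before the peak search, sharpening that correlation peak.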
Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.
Kavussanu, Maria; Ring, Christopher
2017-08-01
In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes who feel that being a moral person is central to their self-concept are less likely to use banned substances, owing to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.
Likelihood-Based Inference of B Cell Clonal Families.
Directory of Open Access Journals (Sweden)
Duncan K Ralph
2016-10-01
The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called "rearrangement", forming progenitor B cells, followed by a Darwinian process of lineage diversification and selection called "affinity maturation." The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous or may consist of only a single member. As a step toward understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e., to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem "clonal family inference." In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov model (multi-HMM) for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets.
Gauging the likelihood of stable cavitation from ultrasound contrast agents.
Bader, Kenneth B; Holland, Christy K
2013-01-07
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form I(CAV) = P(r)/f (where P(r) is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
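Both gauges are simple ratios; a direct transcription (units as in the abstract: peak rarefactional pressure in MPa, frequency in MHz; the inverse-square-root form of the MI is the standard formulation):

```python
def mechanical_index(pr_mpa, f_mhz):
    """MI = P_r / sqrt(f): gauges the likelihood of inertial cavitation."""
    return pr_mpa / f_mhz ** 0.5

def stable_cavitation_index(pr_mpa, f_mhz):
    """I_CAV = P_r / f: the frequency-linear index proposed in the abstract
    for subharmonic emissions from UCA-nucleated stable cavitation."""
    return pr_mpa / f_mhz
```

The different frequency dependence is the point: because the subharmonic and rupture thresholds scale linearly with frequency, a linear-in-f index tracks stable cavitation where the square-root MI does not.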
Transfer Entropy as a Log-Likelihood Ratio
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
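The equivalence can be checked numerically for binary sequences: with maximum-likelihood (plug-in) categorical fits, the log-likelihood ratio between the model predicting y from its own past plus x's past and the model using y's past alone equals the sample size times the plug-in transfer entropy (a hedged one-step, first-order sketch; the coupling used for the demo is illustrative):

```python
import numpy as np
from collections import Counter

def plugin_te_and_llr(x, y):
    """Plug-in one-step transfer entropy X -> Y for binary series, and the
    log-likelihood ratio of 'predict y[t+1] from (y[t], x[t])' versus
    'predict y[t+1] from y[t] alone' under plug-in ML fits."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))      # (y', y, x)
    N = len(triples)
    c3 = Counter(triples)
    c2 = Counter((yp, yt) for yp, yt, _ in triples)
    cyx = Counter((yt, xt) for _, yt, xt in triples)
    cy = Counter(yt for _, yt, _ in triples)
    te = sum(n / N * np.log((n / cyx[(yt, xt)]) / (c2[(yp, yt)] / cy[yt]))
             for (yp, yt, xt), n in c3.items())
    llr = sum(np.log((c3[(yp, yt, xt)] / cyx[(yt, xt)])
                     / (c2[(yp, yt)] / cy[yt]))
              for yp, yt, xt in triples)
    return te, llr

# Demo: y is noisily driven by x one step back.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 2000)
y = np.zeros(2000, dtype=int)
for t in range(1999):
    y[t + 1] = x[t] if rng.random() < 0.7 else rng.integers(0, 2)
te, llr = plugin_te_and_llr(x, y)
```

For these plug-in estimates the identity LLR = N × TE holds exactly, term by term, which is the finite-sample face of the paper's asymptotic result.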
Maximum likelihood positioning algorithm for high-resolution PET scanners
International Nuclear Information System (INIS)
Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar
2016-01-01
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
Maximum likelihood versus likelihood-free quantum system identification in the atom maser
International Nuclear Information System (INIS)
Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin
2014-01-01
We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
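The "likelihood-free" idea can be sketched with the simplest ABC variant, rejection sampling against a summary statistic (a generic illustration with a Poisson count model standing in for the detection record; this is not the paper's atom-maser model or its correlation statistics):

```python
import numpy as np

def abc_rejection(data, prior_draws, simulate, summary, eps, rng):
    """Keep the prior draws whose simulated summary statistic falls
    within eps of the observed one; the kept draws approximate the
    posterior given that summary."""
    s_obs = summary(data)
    kept = [th for th in prior_draws
            if abs(summary(simulate(th, rng)) - s_obs) < eps]
    return np.array(kept)

rng = np.random.default_rng(0)
true_rate = 3.0
data = rng.poisson(true_rate, 200)          # stand-in for count data
prior = rng.uniform(0.0, 10.0, 20_000)      # flat prior on the rate
posterior = abc_rejection(
    data, prior,
    simulate=lambda th, rng: rng.poisson(th, 200),
    summary=np.mean, eps=0.1, rng=rng)
```

If the chosen summary is close to sufficient, as the sample mean is for a Poisson rate, the ABC posterior tracks the data likelihood closely, which mirrors the paper's finding for well-chosen correlation statistics.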
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
Subject to code matrices that follow the structure given by (113):

$$\begin{bmatrix}\vec{y}_R\\ \vec{y}_I\end{bmatrix} = \sqrt{\frac{E_s}{2L}}\begin{bmatrix}G_{R1} & -G_{I1}\\ G_{I2} & G_{R2}\end{bmatrix}\begin{bmatrix}Q_R & -Q_I\\ Q_I & Q_R\end{bmatrix}\begin{bmatrix}\vec{b}_R\\ \vec{b}_I\end{bmatrix} + \begin{bmatrix}\vec{n}_R\\ \vec{n}_I\end{bmatrix}$$

… an analogous expression acting on $[\vec{b}_+\;\ \vec{b}_-]^{\mathsf{T}}$ with noise $[\vec{n}_+\;\ \vec{n}_-]^{\mathsf{T}}$ (115). The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and … Final technical report, May 2016; approved for public release.
Help-Seeking Response to Subjective Memory Complaints in Older Adults: Toward a Conceptual Model
Begum, Aysha; Whitley, Rob; Banerjee, Sube; Matthews, David; Stewart, Robert; Morgan, Craig
2013-01-01
Purpose: Subjective memory complaint is a term used to refer to older adults who report memory problems. Extensive literature exists on its etiology and impact on long-term cognitive decline, and some physicians consider it important in the early detection of dementia. Despite the salient features reported by both patients and clinicians, few people…
Akin, Umran; Akin, Ahmet
2014-01-01
Authenticity is a basic personality characteristic that has an important influence on both the psychological and social lives of individuals. Subjective vitality also assumes a facilitative role regarding positive mental health indicators. Therefore, the purpose of this study is to investigate the predictive role of authenticity on subjective…
Buckling of plate strip subjected to localized corrosion a stochastic model
Czech Academy of Sciences Publication Activity Database
Sadovský, Z.; Drdácký, Miloš
2001-01-01
Vol. 39, No. 3 (2001), pp. 247-259. ISSN 0263-8231. R&D Projects: GA ČR GA103/97/S051. Grant - others: GA SR(SK) GA2/5102/20. Subject RIV: JK - Corrosion; Surface Treatment of Materials. Impact factor: 0.429, year: 2001
An experimentally validated fatigue model for wood subjected to tension perpendicular to the grain
DEFF Research Database (Denmark)
Clorius, Christian Odin; Pedersen, Martin Uhre; Hoffmeyer, Preben
2009-01-01
This study presents an experimental investigation of fatigue in wood subjected to tension perpendicular to the grain. The study has been designed with special reference to the influence of the frequency of loading. The investigation reveals an interaction between number of load oscillations and a...... a good basis....
The Relationship of Coping, Self-Worth, and Subjective Well-Being: A Structural Equation Model
Smedema, Susan Miller; Catalano, Denise; Ebener, Deborah J.
2010-01-01
The purpose of this study was to determine the relationship between various coping-related variables and the evaluation of self-worth and subjective well-being among persons with spinal cord injury. Positive coping variables included hope, proactive coping style, and sense of humor, whereas negative coping variables included perceptions of stress,…
Zeighami, A; Aissaoui, R; Dumas, R
2018-03-01
Contact point (CP) trajectory is a crucial parameter in estimating medial/lateral tibio-femoral contact forces from the musculoskeletal (MSK) models. The objective of the present study was to develop a method to incorporate the subject-specific CP trajectories into the MSK model. Ten healthy subjects performed 45 s treadmill gait trials. The subject-specific CP trajectories were constructed on the tibia and femur as a function of extension-flexion using low-dose bi-plane X-ray images during a quasi-static squat. At each extension-flexion position, the tibia and femur CPs were superimposed in the three directions on the medial side, and in the anterior-posterior and proximal-distal directions on the lateral side to form the five kinematic constraints of the knee joint. The Lagrange multipliers associated to these constraints directly yielded the medial/lateral contact forces. The results from the personalized CP trajectory model were compared against the linear CP trajectory and sphere-on-plane CP trajectory models which were adapted from the commonly used MSK models. Changing the CP trajectory had a remarkable impact on the knee kinematics and changed the medial and lateral contact forces by 1.03 BW and 0.65 BW respectively, in certain subjects. The direction and magnitude of the medial/lateral contact force were highly variable among the subjects and the medial-lateral shift of the CPs alone could not determine the increase/decrease pattern of the contact forces. The suggested kinematic constraints are adaptable to the CP trajectories derived from a variety of joint models and those experimentally measured from the 3D imaging techniques. Copyright © 2018 Elsevier Ltd. All rights reserved.
Algesheimer, René; Bagozzi, Richard P.; Dholakia, Utpal M.
2018-01-01
We offer a new conceptualization and measurement models for constructs at the group-level of analysis in small group research. The conceptualization starts with classical notions of group behavior proposed by Tönnies, Simmel, and Weber and then draws upon plural subject theory by philosophers Gilbert and Tuomela to frame a new perspective…
Sengupta, Shreejita; Jaseem, T; Ambalavanan, Jayachidambaram; Hegde, Anupama
2018-04-01
Despite various studies with conflicting results, the effect of thyroid hormones on lipid and insulin levels in dysthyroidism is of great interest. This case-control study aimed to assess the existence of insulin resistance (IR) and dyslipidemia in mild subclinical hypothyroid subjects (TSH ≤ 9.9 µIU/ml) as compared to their age- and gender-matched euthyroid controls. Basic demographic information such as height and weight was recorded. Serum samples of all the subjects were assayed for thyroid profile, lipid profile, blood glucose, HbA1C and insulin. BMI and insulin resistance were calculated. Compared to controls, patients with mild subclinical hypothyroidism demonstrated hyperinsulinemia and dyslipidemia, observed in the higher LDL cholesterol. A significantly positive correlation was observed for HOMA-IR with TSH and LDL cholesterol. Hence, even in the mild subclinical hypothyroid state, assessment of thyroid function should be combined with estimation of plasma glucose, insulin and serum lipids to monitor and prevent its associated effects.
Directed walk models of adsorbing semi-flexible polymers subject to an elongational force
Energy Technology Data Exchange (ETDEWEB)
Iliev, G K [Department of Mathematics and Statistics, University of Melbourne, Parkville (Australia); Orlandini, E [Dipartimento di Fisica, CNISM, Universita di Padova, Via Marzolo 8, 35131 Padova (Italy); Whittington, S G [Department of Chemistry, University of Toronto, Toronto (Canada)
2010-08-06
We consider several directed path models of semi-flexible polymers. In each model we associate an energy parameter for every pair of adjacent collinear steps, allowing for a model of a polymer with tunable stiffness. We introduce weightings for vertices or edges in a distinguished plane to model the interaction of a semi-flexible polymer with an impenetrable surface. We also investigate the desorption of such a polymer under the influence of an elongational force and study the order of the associated phase transitions. Using a simple low-temperature theory, we approximate and study the ground state behaviour of the models.
Agency, structure and subjectivity: Towards a new metaphorical model of the mind
Fittipaldi, Luis Antonio Egidio
2013-01-01
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The current thesis is based on the research of the psychoanalytical concepts of agency, subject and structure while it correlates the same notions with the clinical observations of patients with personality disorder in crisis [patient group]. It also proposes an answer to the problem of agency and structure, incorporating structuration theory and recursivity. This is done by the co...
Simple model of cable-stayed bridge deck subjected to static wind loading
Kang, Yi-Lung; Wang, Yang Cheng
1997-05-01
Cable-stayed bridges have been known since the 18th century for their aesthetic design. Their structural system and behavior differ significantly from those of continuous bridges: cable-stayed bridges have a more flexible deck and less stiffness to resist wind loading, especially lateral loads. Safety is the first consideration in bridge engineering; the destruction of the Tacoma Narrows Suspension Bridge by wind loading in the 1940s is a telling example, even though it was not a cable-stayed bridge. After that failure, many research articles were published on cable-supported bridges subjected to wind loading. In recent years high-strength materials have become available, and bridge engineers use their advantages to extend the span length of cable-stayed bridges. Because of the increased span length and the use of high-strength materials, cable-stayed bridges show more significant nonlinear behavior under wind loading. In this paper, a slice of a cable-stayed bridge deck connected to internal support cables is considered, subjected to lateral static wind loading. Since cables cannot take compressive force, the deck exhibits strongly nonlinear behavior even when the materials are linear elastic. Several primary load combinations are considered, such as the bridge deck moving horizontally without rotation or moving horizontally with rotational deformation. The mathematical formulas and numerical solutions are found and presented in graphical form. The results can inform bridge designers and researchers in further study of this type of structure subjected to wind loading.
Caldwell, E. C.; Cowley, M. S.; Scott-Pandorf, M. M.
2010-01-01
Develop a model that simulates a human running in 0 g using the European Space Agency's (ESA) Subject Loading System (SLS). The model provides ground reaction forces (GRF) based on speed and pull-down forces (PDF). DESIGN The theoretical basis for the Running Model was a simple spring-mass model, whose dynamic properties express theoretical vertical GRF (GRFv) and shear GRF in the posterior-anterior direction (GRFsh) during running gait. ADAMS View software was used to build the model, which has a pelvis, thigh segment, shank segment, and a spring foot (see Figure 1). The model's movement simulates the joint kinematics of a human running at Earth gravity with the aim of generating GRF data. DEVELOPMENT & VERIFICATION ESA provided parabolic flight data of subjects running while using the SLS for further characterization of the model's GRF. Peak GRF data were fit to a linear regression line dependent on PDF and speed. Interpolation and extrapolation of the regression equation provided a theoretical data matrix, which is used to drive the model's motion equations. Verification of the model was conducted by running the model at 4 different speeds, with each speed accounting for 3 different PDFs. The model's GRF data fell within a 1-standard-deviation boundary derived from the empirical ESA data. CONCLUSION The Running Model aids in conducting various simulations (potential scenarios include a fatigued runner or a powerful runner generating high loads at a fast cadence) to determine limitations for the T2 vibration isolation system (VIS) aboard the International Space Station. This model can predict how running with the ESA SLS affects the T2 VIS and may be used for other exercise analyses in the future.
International Nuclear Information System (INIS)
Combescure, Christelle
2013-01-01
Safety reassessments are periodically performed on the EDF nuclear power plants, and the recent seismic reassessments led to the necessity of taking into account the non-linear behaviour of materials when modeling and simulating the industrial structures of these power plants under seismic loading. A large proportion of these infrastructures is composed of reinforced concrete buildings, including reinforced concrete slabs and walls, yet the literature on plate models dedicated to seismic applications for this material appears sparse. The few existing models dedicated to these specific applications either lack energy dissipation in the material behaviour or lack a micromechanical approach justifying the parameters needed to properly describe the model. In order to provide a constitutive model which better represents the behaviour of reinforced concrete plates under seismic loading and whose parameters are easier for the civil engineer to identify, a dedicated constitutive model is proposed: the DHRC (Dissipative Homogenised Reinforced Concrete) model. Justified by a periodic homogenisation approach, this model includes two dissipative phenomena: damage of the concrete matrix and internal sliding at the interface between steel rebar and the surrounding concrete. An original coupling term between damage and sliding, resulting from the homogenisation process, induces a better representation of energy dissipation during material degradation. The model parameters are identified from the geometric characteristics of the plate and a restricted number of material characteristics, allowing very simple use of the model. Numerical validations of the DHRC model are presented, showing good agreement with experimental behaviour. A one-dimensional simplification of the DHRC model is proposed, allowing the representation of reinforced concrete bars and simplified models of rods and wire mesh
Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data
International Nuclear Information System (INIS)
Wang Zhentian; Huang Zhifeng; Zhang Li; Kang Kejun; Chen Zhiqiang; Zhu Peiping
2009-01-01
Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition with limited-angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm suited to low-dose or noisy circumstances and to bring DEI closer to clinical application. The theoretical statistical models of DEI data are analyzed and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS have better contrast than those of the well-known 'shift-and-add' algorithm and the FBP algorithm. (authors)
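Details of ML-TS aside, the workhorse maximum likelihood update in limited-angle emission settings is the multiplicative ML-EM iteration, sketched here on a toy system matrix (illustrative only; the authors' algorithm operates on DEI apparent-absorption projections and their statistical model, not on this toy):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Multiplicative ML-EM update for Poisson data y ≈ A @ x. The update
    keeps the estimate nonnegative, and exact data y = A @ x_true is a
    fixed point of the iteration."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image, A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
x_true = np.array([2.0, 1.0, 3.0])
x_rec = mlem(A, A @ x_true)                 # noiseless projections
```

With noiseless, consistent data and an invertible system matrix the iterates converge to the true image; with Poisson noise they converge to the maximum likelihood solution instead.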
Maximum likelihood estimation of phase-type distributions
DEFF Research Database (Denmark)
Esparza, Luz Judith R
This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...
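The likelihood being maximized is built from the phase-type density, which is straightforward to evaluate (a minimal sketch; the truncated-series matrix exponential is adequate only for the small, well-scaled sub-generators used here):

```python
import numpy as np

def _expm(M, terms=60):
    """Truncated Taylor series for the matrix exponential, sufficient for
    small, well-scaled matrices like the toy sub-generators below."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def phase_type_pdf(t, alpha, T):
    """Density of a phase-type distribution with initial vector alpha and
    sub-generator T: f(t) = alpha @ exp(T t) @ t0, with exit rates
    t0 = -T @ 1."""
    t0 = -T @ np.ones(T.shape[0])
    return alpha @ _expm(T * t) @ t0
```

A single phase with rate λ recovers the exponential density λ·exp(−λt), which makes a convenient sanity check when fitting.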
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
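The normative posttest calculation against which the physicians' judgments were compared is Bayes' rule in odds form. A minimal sketch, with illustrative numbers rather than values from the study:

```python
def posttest_probability(prior, sensitivity, false_positive_rate):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = sensitivity / false_positive_rate  # LR for a positive test
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Example: a 10% prior with a 90%-sensitive test and a 20% false positive
# rate (LR+ = 4.5) yields a posttest probability of 1/3.
p = posttest_probability(prior=0.10, sensitivity=0.90, false_positive_rate=0.20)
```

Underuse of the prior or of the likelihood ratio, as reported above, shows up as posttest judgments falling between the prior and this normative value.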
The QOL-DASS Model to Estimate Overall Quality of Life and General Subjective Health
Mazaheri, Mehrdad
2011-01-01
Objective: In order to find out how ratings on the WHOQOL-BREF and DASS scales are combined to produce an overall measure of quality of life and satisfaction with health, a QOL-DASS model was designed, and the strength of this hypothesized model was examined using structural equation modeling. Method: Participants included a sample of 103 voluntary males who were divided into two groups: unhealthy (N=55) and healthy (N=48). To assess satisfaction and negative emotions of depression, anxiety...
International Nuclear Information System (INIS)
Velloso, P.A.; Galeao, A.C.
1989-05-01
This paper deals with nonlinear vibrations of pipes subjected to non-conservative loads. Periodic solutions of these problems are determined using a variational approach based on Hamilton's principle, combined with a Fourier series expansion to describe the time dependence of the displacement field. A finite element model which uses Hermite cubic interpolation for both axial and transverse displacement amplitudes is employed. This model is applied to the problem of a pipe subjected to a tangential and a normal follower force. The numerical results obtained with this model are compared with the corresponding solutions determined using a total Lagrangian description of the principle of virtual work, coupled with Newmark's step-by-step integration procedure. It is shown that for small to moderate displacement amplitudes the one-term Fourier series approximation compares fairly well with the predicted solution. For large displacements at least a two-term approximation should be used.
Dolman, M; Chase, J
1996-08-01
A small-scale study was undertaken to test the relative predictive power of the Health Belief Model and Subjective Expected Utility Theory for the uptake of a behaviour (pelvic floor exercises) to reduce post-partum urinary incontinence in primigravid women. A structured questionnaire was used to gather data relevant to both models from a sample of antenatal and postnatal primigravida women. Questions examined the perceived probability of becoming incontinent, the perceived (dis)utility of incontinence, the perceived probability of pelvic floor exercises preventing future urinary incontinence, the costs and benefits of performing pelvic floor exercises, and sources of information and knowledge about incontinence. Multiple regression analysis focused on whether or not respondents intended to perform pelvic floor exercises and the factors influencing their decisions. Aggregated data were analysed to compare the Health Belief Model and Subjective Expected Utility Theory directly.
International Nuclear Information System (INIS)
Suluksna, Keerati; Juntasaro, Ekachai
2008-01-01
The γ-Re_θ transition model of Menter et al. [Menter, F.R., Langtry, R.B., Volker, S., Huang, P.G., 2005. Transition modelling for general purpose CFD codes. ERCOFTAC International Symposium on Engineering Turbulence Modelling and Measurements] is a highly generalized transport-equation model developed around the concept of local variables, compatible with modern CFD methods in which unstructured grids and parallel computing techniques are usually integrated. To perform predictions with this model, two essential parameters must be specified to close it: F_length, which controls the length of the transition region, and Re_θc, which controls the onset location of transition. At present, both parameters are proprietary and their formulations are unpublished. For the first time here, relations for both parameters are formulated by means of numerical experiments and analysis under the assumption Re_θc = Re_θt, corresponding to bypass transition behavior. Based on this analysis, optimized values of the parameters are found and their relations can be constructed as follows: Re_θc = 803.73 (Tu_∞,le + 0.6067)^(−1.027) and F_length = 163 ln(Tu_∞,le) + 3.625. The performance of this transition model is assessed by testing against the experimental cases T3AM, T3A, and T3B. Detailed comparisons are made with the results predicted by the transition models of Suzen and Huang [Suzen, Y.B., Huang, P.G., 2000. Modeling of flow transition using an intermittency transport equation. J. Fluids Eng. 122, 273-284] and Lodefier et al. [Lodefier, K., Merci, B., De Langhe, C., Dick, E., 2003. Transition modelling with the SST turbulence model and intermittency transport equation. ASME Turbo Expo, Atlanta, GA, USA, June 16-19], and also with the results predicted by the k-ε model of Launder and Sharma [Launder, B.E., Sharma, B., 1974. Application of the energy dissipation model of turbulence to the calculation of
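The two closure correlations quoted in the abstract can be transcribed directly; this sketch is for checking their qualitative behaviour only, with function names of my choosing (Tu_∞,le is the free-stream turbulence intensity at the leading edge, in percent):

```python
import math

def re_theta_c(tu_le):
    # Transition-onset momentum-thickness Reynolds number, as quoted above.
    return 803.73 * (tu_le + 0.6067) ** -1.027

def f_length(tu_le):
    # Transition-length control parameter, as quoted above.
    return 163.0 * math.log(tu_le) + 3.625
```

Consistent with bypass-transition physics, the onset Reynolds number decreases as turbulence intensity rises, so transition moves upstream for noisier free streams.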
O'Donnell, Katherine M; Thompson, Frank R; Semlitsch, Raymond D
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability.
International Nuclear Information System (INIS)
Kalan, R.J.; Ammerman, D.J.; Gwinn, K.W.
2004-01-01
Transportation and storage casks subjected to extra-regulatory loadings may experience large stresses and strains in key structural components. One of the areas susceptible to these large stresses and strains is the bolted joint retaining any closure lid on an overpack or a canister. Modeling this joint accurately is necessary in evaluating the performance of the cask under extreme loading conditions. However, developing detailed models of a bolt in a large cask finite element model can dramatically increase the computational time, making the analysis prohibitive. Sandia National Laboratories used a series of calibrated, detailed, bolt finite element sub-models to develop a modified-beam bolt-model in order to examine the response of a storage cask and closure to severe accident loadings. The initial sub-models were calibrated for tension and shear loading using test data for large diameter bolts. Next, using the calibrated test model, sub-models of the actual joints were developed to obtain force-displacement curves and failure points for the bolted joint. These functions were used to develop a modified beam element representation of the bolted joint, which could be incorporated into the larger cask finite element model. This paper will address the modeling and assumptions used for the development of the initial calibration models, the joint sub-models, and the modified beam model.
Yamaura, Yuichi; Connor, Edward F.; Royle, Andy; Itoh, Katsuo; Sato, Kiyoshi; Taki, Hisatomo; Mishima, Yoshio
2016-01-01
Models and data used to describe species–area relationships confound sampling with ecological process as they fail to acknowledge that estimates of species richness arise due to sampling. This compromises our ability to make ecological inferences from and about species–area relationships. We develop and illustrate hierarchical community models of abundance and frequency to estimate species richness. The models we propose separate sampling from ecological processes by explicitly accounting for the fact that sampled patches are seldom completely covered by sampling plots and that individuals present in the sampling plots are imperfectly detected. We propose a multispecies abundance model in which community assembly is treated as the summation of an ensemble of species-level Poisson processes and estimate patch-level species richness as a derived parameter. We use sampling process models appropriate for specific survey methods. We propose a multispecies frequency model that treats the number of plots in which a species occurs as a binomial process. We illustrate these models using data collected in surveys of early-successional bird species and plants in young forest plantation patches. Results indicate that only mature forest plant species deviated from the constant density hypothesis, but the null model suggested that the deviations were too small to alter the form of species–area relationships. Nevertheless, results from simulations clearly show that the aggregate pattern of individual species density–area relationships and occurrence probability–area relationships can alter the form of species–area relationships. The plant community model estimated that only half of the species present in the regional species pool were encountered during the survey. The modeling framework we propose explicitly accounts for sampling processes so that ecological processes can be examined free of sampling artefacts. Our modeling approach is extensible and could be applied
Smoking increases the likelihood of Helicobacter pylori treatment failure.
Itskoviz, David; Boltin, Doron; Leibovitzh, Haim; Tsadok Perets, Tsachi; Comaneshter, Doron; Cohen, Arnon; Niv, Yaron; Levi, Zohar
2017-07-01
Data regarding the impact of smoking on the success of Helicobacter pylori (H. pylori) eradication are conflicting, partially because sociodemographic status is associated with both smoking and H. pylori treatment success. We aimed to assess the effect of smoking on H. pylori eradication rates after controlling for sociodemographic confounders. Included were subjects aged 15 years or older with a first-time positive 13C-urea breath test (13C-UBT) between 2007 and 2014, who underwent a second 13C-UBT after receiving clarithromycin-based triple therapy. Data regarding age, gender, socioeconomic status (SES), smoking (current smokers or "never smoked"), and drug use were extracted from the Clalit health maintenance organization database. Out of 120,914 subjects with a positive first-time 13C-UBT, 50,836 (42.0%) underwent a second 13C-UBT test. After excluding former smokers, 48,130 subjects remained who were eligible for analysis. The mean age was 44.3 ± 18.2 years; 69.2% were female, 87.8% were Jewish and 12.2% Arab, and 25.5% were current smokers. The overall eradication failure rate was 33.3%: 34.8% in current smokers and 32.8% in subjects who never smoked. In a multivariate analysis, eradication failure was positively associated with current smoking (Odds Ratio {OR} 1.15, 95% CI 1.10-1.20, p < …). Smoking was found to significantly increase the likelihood of unsuccessful first-line treatment for H. pylori infection. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
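As a rough check, the crude (unadjusted) odds ratio implied by the two quoted failure rates can be computed directly; it differs from the reported OR of 1.15 because the latter comes from a multivariate model controlling for sociodemographic confounders:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

# Failure rates quoted in the abstract: 34.8% (smokers) vs 32.8% (never smoked).
crude_or = odds(0.348) / odds(0.328)  # roughly 1.09, below the adjusted 1.15
```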
Directory of Open Access Journals (Sweden)
Daniel L. Rabosky
2006-01-01
Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time with alternative models where rates have remained constant. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
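The package's core idea, contrasting a rate-constant model against a rate-variable one by maximum likelihood and an information criterion, can be illustrated with a deliberately simplified stand-in (not LASER itself): waiting times between speciation events are assumed exponential, and a single rate shift is profiled over; the data below are invented.

```python
import math

def exp_loglik(waits, rate):
    # Log-likelihood of exponential waiting times at a given rate.
    return sum(math.log(rate) - rate * t for t in waits)

def fit_constant(waits):
    rate = len(waits) / sum(waits)      # MLE of a single exponential rate
    return exp_loglik(waits, rate), 1   # (max log-likelihood, no. of rate params)

def fit_one_shift(waits):
    # Profile over every possible shift point; two rate parameters.
    best = max(fit_constant(waits[:k])[0] + fit_constant(waits[k:])[0]
               for k in range(1, len(waits)))
    return best, 2

waits = [0.10, 0.12, 0.09, 0.11, 0.50, 0.60, 0.55, 0.45]  # diversification slows
ll0, k0 = fit_constant(waits)
ll1, k1 = fit_one_shift(waits)
aic0, aic1 = 2 * k0 - 2 * ll0, 2 * k1 - 2 * ll1  # lower AIC is preferred
```

Because the constant-rate model is nested in the shift model, the shift model's log-likelihood can never be lower; the AIC penalty decides whether the extra rate parameter is justified.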
Modelling (B, S)-Markets Subject to Aggressive Buying of Shares
Directory of Open Access Journals (Sweden)
Elina A. Pilosyan
2013-01-01
The article is devoted to modelling a (B, S)-market with a finite number of aggressive buyers of shares. A theoretical study of such market models is presented, and an algorithm is developed to calculate hedging portfolios for various financial obligations.
Maori Cultural Efficacy and Subjective Wellbeing: A Psychological Model and Research Agenda
Houkamau, Carla A.; Sibley, Chris G.
2011-01-01
Maori, the indigenous peoples of New Zealand, experience a range of negative outcomes. Psychological models and interventions aiming to improve outcomes for Maori tend to be founded on a "culture-as-cure" model. This view promotes cultural efficacy as a critical resilience factor that should improve outcomes for Maori. This is a founding…
Directory of Open Access Journals (Sweden)
Katherine M O'Donnell
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model's potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3-5 surveys each spring and fall 2010-2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability.
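The standard binomial mixture ("N-mixture") likelihood that the authors extend can be sketched in a few lines; the key step is marginalising over the unobserved site abundance N (all parameter values and counts here are invented):

```python
import math

def site_loglik(counts, lam, p, n_max=100):
    """Log-likelihood for one site: N ~ Poisson(lam), each count y ~ Binomial(N, p).

    The latent abundance N is summed out from max(counts) to n_max.
    """
    total = 0.0
    for n in range(max(counts), n_max + 1):
        log_pois = n * math.log(lam) - lam - math.lgamma(n + 1)
        log_binom = sum(
            math.log(math.comb(n, y)) + y * math.log(p) + (n - y) * math.log(1 - p)
            for y in counts
        )
        total += math.exp(log_pois + log_binom)
    return math.log(total)

# Three repeat counts at one site, with a plausible abundance and detection rate.
ll = site_loglik([3, 2, 4], lam=5.0, p=0.5)
```

The extension described above splits p into availability (surface activity) and conditional detection, so covariates such as time-since-rainfall can act on each component separately.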
Audio-visual Classification and Fusion of Spontaneous Affect Data in Likelihood Space
Nicolaou, Mihalis A.; Gunes, Hatice; Pantic, Maja
2010-01-01
This paper focuses on audio-visual (using facial expression, shoulder and audio cues) classification of spontaneous affect, utilising generative models for classification (i) in terms of Maximum Likelihood Classification with the assumption that the generative model structure in the classifier is
International Nuclear Information System (INIS)
Hensel, S.J.; Gromada, R.J.
1994-01-01
A thermophysical property model has been developed to analytically determine the thermal response of cane fiberboard when exposed to temperatures and heat fluxes associated with the 10 CFR 71 hypothetical accident condition (HAC) and associated post fire cooling. The complete model was developed from high temperature cane fiberboard 1-D test results and consists of heating and cooling sub-models. The heating property model accounts for the enhanced heat transfer of the hot gases in the fiberboard, the loss of energy via venting, and the loss of mass from venting during the heating portion of the test. The cooling property model accounts for the degraded material effects and the continued heat transfer associated with the hot gases after removal of the external heating source. Agreement between the test results of a four inch thick fiberboard sample with the analytical application of the complete property model is quite good and will be presented. A comparison of analysis results and furnace test data for the 9966 package suggests that the property model sufficiently accounts for the heat transfer in an actual package
Integrated computation model of lithium-ion battery subject to nail penetration
International Nuclear Information System (INIS)
Liu, Binghe; Yin, Sha; Xu, Jun
2016-01-01
Highlights: • A coupling model to predict battery penetration process is established. • Penetration test is designed and validates the computational model. • Governing factors of the penetration induced short-circuit is discussed. • Critical safety battery design guidance is suggested. - Abstract: The nail penetration of lithium-ion batteries (LIBs) has become a standard battery safety evaluation method to mimic the potential penetration of a foreign object into LIB, which can lead to internal short circuit with catastrophic consequences, such as thermal runaway, fire, and explosion. To provide a safe, time-efficient, and cost-effective method for studying the nail penetration problem, an integrated computational method that considers the mechanical, electrochemical, and thermal behaviors of the jellyroll was developed using a coupled 3D mechanical model, a 1D battery model, and a short circuit model. The integrated model, along with the sub-models, was validated to agree reasonably well with experimental test data. In addition, a comprehensive quantitative analysis of governing factors, e.g., shapes, sizes, and displacements of nails, states of charge, and penetration speeds, was conducted. The proposed computational framework for LIB nail penetration was first introduced. This framework can provide an accurate prediction of the time history profile of battery voltage, temperature, and mechanical behavior. The factors that affected the behavior of the jellyroll under nail penetration were discussed systematically. Results provide a solid foundation for future in-depth studies on LIB nail penetration mechanisms and safety design.
Anisotropic Multishell Analytical Modeling of an Intervertebral Disk Subjected to Axial Compression.
Demers, Sébastien; Nadeau, Sylvie; Bouzid, Abdel-Hakim
2016-04-01
Studies on intervertebral disk (IVD) response to various loads and postures are essential to understand disk's mechanical functions and to suggest preventive and corrective actions in the workplace. The experimental and finite-element (FE) approaches are well-suited for these studies, but validating their findings is difficult, partly due to the lack of alternative methods. Analytical modeling could allow methodological triangulation and help validation of FE models. This paper presents an analytical method based on thin-shell, beam-on-elastic-foundation and composite materials theories to evaluate the stresses in the anulus fibrosus (AF) of an axisymmetric disk composed of multiple thin lamellae. Large deformations of the soft tissues are accounted for using an iterative method and the anisotropic material properties are derived from a published biaxial experiment. The results are compared to those obtained by FE modeling. The results demonstrate the capability of the analytical model to evaluate the stresses at any location of the simplified AF. It also demonstrates that anisotropy reduces stresses in the lamellae. This novel model is a preliminary step in developing valuable analytical models of IVDs, and represents a distinctive groundwork that is able to sustain future refinements. This paper suggests important features that may be included to improve model realism.
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate the inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped-parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, the elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using the two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. The acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th-percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work attempts to develop an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
King, Mark A; Glynn, Jonathan A; Mitchell, Sean R
2011-11-01
A subject-specific angle-driven computer model of a tennis player, combined with a forward-dynamics, equipment-specific computer model of tennis ball-racket impacts, was developed to determine the effect of ball-racket impacts on loading at the elbow for one-handed backhand groundstrokes. Matching subject-specific computer simulations of a typical topspin/slice one-handed backhand groundstroke performed by an elite tennis player were carried out, with small root mean square differences between performance and matching simulations; the difference in elbow loading between a topspin and a slice one-handed backhand groundstroke is relatively small. In this study, the relatively small differences in elbow loading may be due to comparable angle-time histories at the wrist and elbow joints, with the major kinematic differences occurring at the shoulder. Using a subject-specific angle-driven computer model combined with a forward-dynamics, equipment-specific computer model of tennis ball-racket impacts allows peak internal loading, net impulse, and shock due to ball-racket impact to be calculated, which would not otherwise be possible without impractical invasive techniques. This study provides a basis for further investigation of the factors that may increase elbow loading during tennis strokes.
Matthews, Russell A; Wayne, Julie Holliday; Ford, Michael T
2014-11-01
In the present study, we examine competing predictions of stress reaction models and adaptation theories regarding the longitudinal relationship between work-family conflict and subjective well-being. Based on data from 432 participants over 3 time points with 2 lags of varying lengths (i.e., 1 month, 6 months), our findings suggest that in the short term, consistent with prior theory and research, work-family conflict is associated with poorer subjective well-being. Counter to traditional work-family predictions but consistent with adaptation theories, after accounting for concurrent levels of work-family conflict as well as past levels of subjective well-being, past exposure to work-family conflict was associated with higher levels of subjective well-being over time. Moreover, evidence was found for reverse causation in that greater subjective well-being at 1 point in time was associated with reduced work-family conflict at a subsequent point in time. Finally, the pattern of results did not vary as a function of using different temporal lags. We discuss the theoretical, research, and practical implications of our findings. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
2017-11-01
Howle, Dmitriy Krayterman, Justin E Pritchett, and Ryan Sorenson
The Under-body Blast Methodology (UBM) for the Test and Evaluation (T&E) program was established to provide a capability for the US Army Test and … and must be validated. The UBM for the T&E program has completed efforts to validate soil models but not structural dynamics models. Modal testing
Models and Estimation Procedures for the Analysis of Subjects-by-Items Data Arrays.
1982-06-30
Conclusions and recommendations: The usefulness of Tukey's model for model-based psychological testing is probably greatest for analyses of responses which are…
The QOL-DASS Model to Estimate Overall Quality of Life and General Subjective Health.
Mazaheri, Mehrdad
2011-01-01
In order to find out how ratings on the WHOQOL-BREF and DASS scales are combined to produce an overall measure of quality of life and satisfaction with health, a QOL-DASS model was designed, and the strength of this hypothesized model was examined using structural equation modeling. Participants included a sample of 103 voluntary males who were divided into two groups: unhealthy (N=55) and healthy (N=48). To assess satisfaction and the negative emotions of depression, anxiety, and stress among the participants, they were asked to fill out the WHOQOL-BREF and the Depression Anxiety Stress Scale (DASS-42). Our findings on running the hypothesized QOL-DASS model indicated that it fitted the data well for both the healthy and unhealthy groups. Our findings with CFA to evaluate the hypothesized QOL-DASS model indicated that the different satisfaction domain ratings and the negative emotions of depression, anxiety, and stress, as the observed variables, can represent the underlying constructs of general health and quality of life in both healthy and unhealthy groups.
International Nuclear Information System (INIS)
Zhou, Xiaojun; Wu, Changjie; Li, Yanting; Xi, Lifeng
2016-01-01
A periodic preventive maintenance modeling method is proposed for leased equipment in which continuous internal degradation and stochastic external shock damage are considered simultaneously, which helps the equipment lessor optimize the maintenance schedule for the same kind of equipment rented by different lessees. A novel interactive mechanism between the continuous internal degradation and the stochastic external shock damage is established on the hazard rate of the equipment, integrating the imperfect effect of maintenance. Two improvement factors are defined for the modeling of imperfect maintenance. The numbers of failures resulting from internal degradation and from external shocks are both mathematically deduced based on this interactive mechanism. The optimal preventive maintenance scheme is obtained by minimizing the cumulative maintenance cost throughout the lease period. A numerical example shows that the proposed preventive maintenance model not only reflects the reliability status of the equipment but also clearly distinguishes between the impact from internal degradation and that from external shocks. - Highlights: • We propose an imperfect periodic preventive maintenance model for leased equipment. • It can distinguish between the impact from internal degradation and that from external shocks. • An internal-external interactive mechanism is proposed. • Two improvement factors are introduced into the modeling of imperfect maintenance. • The model is helpful for the PM scheduling of the same equipment rented by different lessees.
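The cost trade-off behind such a schedule can be illustrated with a deliberately simplified stand-in (not the paper's model): periodic PM at interval T, minimal repair between PMs, a Weibull cumulative hazard, and invented cost figures:

```python
def cost_rate(T, beta=2.5, eta=100.0, c_pm=1.0, c_fail=10.0):
    """Expected cost per unit time: one PM plus expected failures in (0, T]."""
    expected_failures = (T / eta) ** beta  # integral of a Weibull hazard rate
    return (c_pm + c_fail * expected_failures) / T

# Grid search over candidate intervals: too-frequent PM wastes PM cost,
# too-rare PM accumulates failure cost.
best_T = min(range(10, 300, 5), key=cost_rate)
```

The paper's model enriches this basic trade-off with an internal-external interactive hazard and improvement factors for imperfect maintenance, but the optimisation target, cumulative cost over the lease period, plays the same role.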
A New Perspective on Modeling Groundwater-Driven Health Risk With Subjective Information
Ozbek, M. M.
2003-12-01
Fuzzy rule-based systems provide an efficient environment for the modeling of expert information in the context of risk management for groundwater contamination problems. In general, they have been used, in the form of conditional pieces of knowledge, either as a tool for synthesizing control laws from data (i.e., conjunction-based models) or from a knowledge-representation and reasoning perspective in Artificial Intelligence (i.e., implication-based models), where only the latter may lead to coherence problems (e.g., input data that lead to logical inconsistency when added to the knowledge base). We implement a two-fold extension to an implication-based groundwater risk model (Ozbek and Pinder, 2002), including: 1) the implementation of sufficient conditions for a coherent knowledge base, and 2) the interpolation of expert statements to supplement gaps in knowledge. The original model assumes statements of public health professionals for the characterization of the exposed individual and the relation of dose and pattern of exposure to its carcinogenic effects. We demonstrate the utility of the extended model in that it: 1) identifies inconsistent statements and establishes coherence in the knowledge base, and 2) minimizes the burden of knowledge elicitation from the experts by utilizing existing knowledge in an optimal fashion.
A time-varying subjective quality model for mobile streaming videos with stalling events
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.
2015-09-01
Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
Shenkman, Geva
2012-10-01
This study examined the frequencies of the desires and likelihood estimations of Israeli gay men regarding fatherhood and couplehood, using a sample of 183 gay men aged 19-50. It follows previous research which indicated the existence of a gap in the United States with respect to fatherhood, and called for generalizability examinations in other countries and the exploration of possible explanations. As predicted, a gap was also found in Israel between fatherhood desires and their likelihood estimations, as well as between couplehood desires and their likelihood estimations. In addition, lower estimations of fatherhood likelihood were found to predict depression and to correlate with decreased subjective well-being. Possible psychosocial explanations are offered. Moreover, by mapping attitudes toward fatherhood and couplehood among Israeli gay men, the current study helps to extend our knowledge of several central human development motivations and their correlations with depression and subjective well-being in a less-studied sexual minority in a complex cultural climate. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
The subjective experience of the self in the large group: two models for study.
Shields, W
2001-04-01
More and more opportunities now exist for group therapists to engage in the study of the self in the large group at local, national, and international conferences as well as in clinical and other organizational settings. This may be particularly important for the group therapist in the next century with potential benefit not only for individuals but also for groups and social systems of all kinds. In this article, I review my own subjective experiences in the large group context and in large study group experiences. Then, I contrast the group analytic and the group relations approaches to the large group with particular reference to Winnicott's theory about maturational processes in a facilitating environment.
Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation
International Nuclear Information System (INIS)
Bardsley, Johnathan M; Goldes, John
2009-01-01
In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We present three statistically motivated methods for choosing the regularization parameter, and numerical examples illustrating their effectiveness.
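The setting described here can be sketched on a tiny 1-D deblurring problem. This is a hedged illustration, not the authors' methods: the Gaussian blur operator, the Tikhonov penalty, and the discrepancy-style scan over a few candidate parameters are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Tiny 1-D Poisson deblurring problem (all sizes/parameters illustrative).
n = 40
x_true = np.exp(-0.5 * ((np.arange(n) - 20) / 4.0) ** 2) * 50 + 5
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
A /= A.sum(axis=1, keepdims=True)          # row-normalized blur operator
data = rng.poisson(A @ x_true)

def neg_poisson_loglik(x, lam):
    """Negative Poisson log-likelihood plus a Tikhonov penalty lam*||x||^2."""
    z = A @ x + 1e-9
    return np.sum(z - data * np.log(z)) + lam * np.sum(x ** 2)

def reconstruct(lam):
    res = minimize(neg_poisson_loglik, np.full(n, data.mean()), args=(lam,),
                   bounds=[(0, None)] * n, method="L-BFGS-B")
    return res.x

def discrepancy(lam):
    """Weighted residual, ~chi^2 distributed with ~n degrees of freedom
    for a statistically consistent fit."""
    x = reconstruct(lam)
    z = A @ x + 1e-9
    return np.sum((data - z) ** 2 / z)

# Discrepancy-style selection: pick the lambda whose residual is closest to n.
lams = [1e-4, 1e-3, 1e-2, 1e-1]
best_lam = min(lams, key=lambda l: abs(discrepancy(l) - n))
print("selected lambda:", best_lam)
```

The selection rule mimics the discrepancy-principle idea (residual statistic near its expected value); the statistically motivated rules in the paper are more refined.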
Likelihood of illegal alcohol sales at professional sport stadiums.
Toomey, Traci L; Erickson, Darin J; Lenk, Kathleen M; Kilian, Gunna R
2008-11-01
Several studies have assessed the propensity for illegal alcohol sales at licensed alcohol establishments and community festivals, but no previous studies examined the propensity for these sales at professional sport stadiums. In this study, we assessed the likelihood of alcohol sales to both underage youth and obviously intoxicated patrons at professional sports stadiums across the United States, and assessed the factors related to the likelihood of both types of alcohol sales. We conducted pseudo-underage (i.e., persons age 21 or older who appear under 21) and pseudo-intoxicated (i.e., persons feigning intoxication) alcohol purchase attempts at stadiums that house professional hockey, basketball, baseball, and football teams. We conducted the purchase attempts at 16 sport stadiums located in 5 states. We measured 2 outcome variables: pseudo-underage sale (yes, no) and pseudo-intoxicated sale (yes, no), and 3 types of independent variables: (1) seller characteristics, (2) purchase attempt characteristics, and (3) event characteristics. Following univariate and bivariate analyses, we fitted a separate series of logistic generalized mixed regression models for each outcome variable. The overall sales rates to the pseudo-underage and pseudo-intoxicated buyers were 18% and 74%, respectively. In the multivariate logistic analyses, we found that the odds of a sale to a pseudo-underage buyer in the stands were 2.9 times as large as the odds of a sale at the concession booths (30% vs. 13%; p = 0.01). The odds of a sale to an obviously intoxicated buyer in the stands were 2.9 times as large as the odds of a sale at the concession booths (89% vs. 73%; p = 0.02). Similar to studies assessing illegal alcohol sales at licensed alcohol establishments and community festivals, findings from this study show the need for interventions specifically focused on illegal alcohol sales at professional sporting events.
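As a quick sanity check, odds ratios close to those quoted in the abstract can be recovered from the raw sale percentages (the paper's 2.9 figures come from adjusted mixed models, so raw-rate ratios only approximate them):

```python
def odds_ratio(p1, p0):
    """Odds ratio between two proportions: (p1/(1-p1)) / (p0/(1-p0))."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Stands vs. concession booths, from the quoted sale rates.
print(round(odds_ratio(0.30, 0.13), 1))  # pseudo-underage: ~2.9
print(round(odds_ratio(0.89, 0.73), 1))  # pseudo-intoxicated: ~3.0 (reported as 2.9 after adjustment)
```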
On the Mechanical Modeling of Tensegrity Columns Subject to Impact Loading
Directory of Open Access Journals (Sweden)
Ada Amendola
2018-04-01
A physical model of a tensegrity column is additively manufactured in a titanium alloy. After removing sacrificial supports, the model is post-tensioned through suitable insertion of Spectra® cables. The wave dynamics of the examined system is first experimentally investigated by recording the motion through high-speed cameras assisted by a digital image correlation algorithm, which returns time-histories of the axial displacements of the bases of each prism of the column. Next, the experimental response is mechanically simulated by means of two different models: a stick-and-spring model accounting for the presence of bending-stiff connections between the 3D-printed elements (mixed bending-stretching response), and a tensegrity model accounting for a purely stretching response. The comparison of theory and experiment reveals that the presence of bending-stiff connections weakens the nonlinearity of the wave dynamics of the system. A stretching-dominated response instead supports highly compact solitary waves in the presence of small prestress and negligible bending stiffness of connections.
Kentel, Behzat B; King, Mark A; Mitchell, Sean R
2011-11-01
A torque-driven, subject-specific 3-D computer simulation model of the impact phase of one-handed tennis backhand strokes was evaluated by comparing performance and simulation results. Backhand strokes of an elite subject were recorded on an artificial tennis court. Over the 50-ms period after impact, good agreement was found with an overall RMS difference of 3.3° between matching simulation and performance in terms of joint and racket angles. Consistent with previous experimental research, the evaluation process showed that grip tightness and ball impact location are important factors that affect postimpact racket and arm kinematics. Associated with these factors, the model can be used for a better understanding of the eccentric contraction of the wrist extensors during one-handed backhand ground strokes, a hypothesized mechanism of tennis elbow.
Planck intermediate results: XVI. Profile likelihoods for cosmological parameters
DEFF Research Database (Denmark)
Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.
2014-01-01
We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the CDM best-fit solution with respect to previously-released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...
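The profile-likelihood construction used in this record can be shown on a minimal example. This is not the Planck pipeline: it profiles the mean of a Gaussian over a nuisance width, with synthetic data, and reads off a frequentist interval from the standard ΔlnL = 1/2 rule.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
data = rng.normal(3.0, 2.0, size=200)   # synthetic data (illustrative)

def nll(mu, sigma):
    """Negative log-likelihood of a Gaussian sample (up to a constant)."""
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + len(data) * np.log(sigma)

def profile_nll(mu):
    """Profile out the nuisance parameter sigma at each fixed mu."""
    res = minimize_scalar(lambda s: nll(mu, s), bounds=(0.1, 10.0), method="bounded")
    return res.fun

mus = np.linspace(2.0, 4.0, 81)
prof = np.array([profile_nll(m) for m in mus])
prof -= prof.min()
inside = mus[prof <= 0.5]   # -2*Delta(lnL) <= 1  <=>  Delta(NLL) <= 0.5
print(f"68% CI for mu: [{inside.min():.2f}, {inside.max():.2f}]")
```

Tools like Minuit (used in the paper) do the inner minimization far more robustly in many dimensions; the interval-reading rule is the same.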
Planck 2013 results. XV. CMB power spectra and likelihood
DEFF Research Database (Denmark)
Tauber, Jan; Bartlett, J.G.; Bucher, M.
2014-01-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...
The modified signed likelihood statistic and saddlepoint approximations
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1992-01-01
For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.
Modelling of hot surface ignition within gas turbines subject to flammable gas in the intake
DEFF Research Database (Denmark)
Pedersen, Lea Duedahl; Nielsen, Kenny Krogh; Yin, Chungen
2017-01-01
Controlling risks associated with fires and explosions from leaks of flammable fluids at oil and gas facilities is paramount to ensuring safe operations. The gas turbine is a significant potential source of ignition; however, the residual risk is still not adequately understood. A model has been...... but decreases with increase in initial mixture temperature and pressure. The model shows a great potential in reliable prediction of the risk of hot surface ignition within gas turbines in the oil and gas industry. In the future, a dedicated experimental study will be performed not only to improve...
Mathematical modeling of an urban pigeon population subject to local management strategies.
Haidar, I; Alvarez, I; Prévot, A C
2017-06-01
This paper addresses the issue of managing an urban pigeon population using possible actions that make it reach a density target with respect to socio-ecological constraints. A mathematical model describing the dynamics of this population is introduced. This model incorporates the effect of some regulatory actions on the population dynamics. We use mathematical viability theory, which provides a framework to study compatibility between dynamics and state constraints. The viability study shows when and how it is possible to regulate the pigeon population with respect to the constraints. Copyright © 2017 Elsevier Inc. All rights reserved.
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference from a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
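The truncation artefact this record addresses is easy to reproduce. A minimal sketch, not the paper's method: it reconstructs a sharp-edged 1-D profile from a symmetrically truncated spectrum (the paper's reduced acquisition is one-sided, but the ringing mechanism is the same).

```python
import numpy as np

n = 256
profile = np.zeros(n)
profile[96:160] = 1.0                      # sharp-edged object (illustrative)

spectrum = np.fft.fftshift(np.fft.fft(profile))  # zero frequency at the center
kept = int(0.70 * n)                             # keep ~70% of the data lines
half_cut = (n - kept) // 2
spectrum[:half_cut] = 0.0                        # discard the highest frequencies
spectrum[n - half_cut:] = 0.0
recon = np.real(np.fft.ifft(np.fft.ifftshift(spectrum)))

ringing = recon - profile                  # Gibbs oscillations near the edges
print(f"max truncation error: {np.abs(ringing).max():.3f}")
```

Zero-filling the missing lines is exactly what the likelihood-improvement method replaces: it fills them with estimates instead of zeros.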
Physical activity may decrease the likelihood of children developing constipation.
Seidenfaden, Sandra; Ormarsson, Orri Thor; Lund, Sigrun H; Bjornsson, Einar S
2018-01-01
Childhood constipation is common. We evaluated children diagnosed with constipation, who were referred to an Icelandic paediatric emergency department, and determined the effect of lifestyle factors on its aetiology. The parents of children who were diagnosed with constipation and participated in a phase IIB clinical trial on laxative suppositories answered an online questionnaire about their children's lifestyle and constipation in March-April 2013. The parents of nonconstipated children that visited the paediatric department of Landspitali University Hospital or an Icelandic outpatient clinic answered the same questionnaire. We analysed responses regarding 190 children aged one year to 18 years: 60 with constipation and 130 without. We found that 40% of the constipated children had recurrent symptoms, 27% had to seek medical attention more than once and 33% received medication per rectum. The 47 of 130 control group subjects aged 10-18 were much more likely to exercise more than three times a week (72%) and for more than an hour (62%) than the 26 of 60 constipated children of the same age (42% and 35%, respectively). Constipation risk factors varied with age and many children diagnosed with constipation had recurrent symptoms. Physical activity may affect the likelihood of developing constipation in older children. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Bast, Callie Corinne Scheidt
1994-01-01
This thesis presents the on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes four effects that typically reduce lifetime strength: high temperature, mechanical fatigue, creep, and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for four variables, namely, high temperature, mechanical fatigue, creep, and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using the current version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of mechanical fatigue, creep, and thermal fatigue was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect. In addition, verification studies comparing a combination of mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
Directory of Open Access Journals (Sweden)
U. Schneider
2009-01-01
The paper presents the structural application of a new thermal induced strain model for concrete, the TIS-Model. An advanced transient concrete model (ATCM) is applied with the material model of the TIS-Model. The non-linear model comprises thermal strain, elastic strain, plastic strain and transient temperature strains, and load history modelling of restrained concrete structures subjected to fire. The calculations by finite element analysis (FEA) were done using the SAFIR structural code. The FEA software was basically new with respect to the material modelling derived to use the new TIS-Model (a transient model considering thermal induced strain). The equations of the ATCM incorporate numerous capabilities, especially for considering irreversible effects of temperature on some material properties. By considering the load history during heating up, an increased load bearing capacity may be obtained due to higher stiffness of the concrete. With this model, it is possible to apply the thermal-physical behaviour of material laws for the calculation of structures under extreme temperature conditions. A tunnel cross section designed and built by the cut-and-cover method is calculated with a tunnel fire curve. The results are compared with the results of a calculation with the model of Eurocode 2 (EC2-Model). The effect of load history in highly loaded structures under fire load is investigated. A comparison of this model with the ordinary calculation system of Eurocode 2 (EC2) shows that a better evaluation of the safety level was achieved with the new model. This opens space for optimizing concrete structure design under transient temperature conditions up to 1000 °C.
Wu, Jun; Tjoa, Thomas; Li, Lianfa; Jaimes, Guillermo; Delfino, Ralph J
2012-07-11
Exposure to polycyclic aromatic hydrocarbon (PAH) has been linked to various adverse health outcomes. Personal PAH exposures are usually measured by personal monitoring or biomarkers, which are costly and impractical for a large population. Modeling is a cost-effective alternative to characterize personal PAH exposure although challenges exist because the PAH exposure can be highly variable between locations and individuals in non-occupational settings. In this study we developed models to estimate personal inhalation exposures to particle-bound PAH (PB-PAH) using data from global positioning system (GPS) time-activity tracking data, traffic activity, and questionnaire information. We conducted real-time (1-min interval) personal PB-PAH exposure sampling coupled with GPS tracking in 28 non-smoking women for one to three sessions and one to nine days each session from August 2009 to November 2010 in Los Angeles and Orange Counties, California. Each subject filled out a baseline questionnaire and environmental and behavior questionnaires on their typical activities in the previous three months. A validated model was used to classify major time-activity patterns (indoor, in-vehicle, and other) based on the raw GPS data. Multiple-linear regression and mixed effect models were developed to estimate averaged daily and subject-level PB-PAH exposures. The covariates we examined included day of week and time of day, GPS-based time-activity and GPS speed, traffic- and roadway-related parameters, meteorological variables (i.e. temperature, wind speed, relative humidity), and socio-demographic variables and occupational exposures from the questionnaire. We measured personal PB-PAH exposures for 180 days with more than 6 h of valid data on each day. The adjusted R2 of the model was 0.58 for personal daily exposures, 0.61 for subject-level personal exposures, and 0.75 for subject-level micro-environmental exposures. The amount of time in vehicle (averaging 4.5% of total
Directory of Open Access Journals (Sweden)
Arthur Coré
2017-01-01
This paper deals with the characterization and the numerical modelling of the collapse of composite hollow spherical structures developed to absorb energy during high velocity impacts. The structure is composed of hollow spheres (ϕ=2–30 mm) made of epoxy resin and mineral powder. First of all, quasi-static and dynamic (v=5 mm·min−1 to v=2 m·s−1) compression tests are conducted at room temperature on a single sphere to study energy dissipation mechanisms. Fracture of the material appears to be predominant. A numerical model based on the discrete element method is investigated to simulate the single sphere crushing. The stress-strain-time relationship of the material based on the Ree-Eyring law is numerically implemented. The DEM modelling naturally takes into account the dynamic fracture, and the computed crack path is close to the one observed experimentally in uniaxial compression. Eventually, high velocity impacts (v>100 m·s−1) of a hollow sphere on a rigid surface are conducted with an air cannon. The numerical results are in good agreement with the experimental data and demonstrate the ability of the present model to correctly describe the mechanical behavior of brittle materials at high strain rate.
Analysis of an age structured model for tick populations subject to seasonal effects
Liu, Kaihui; Lou, Yijun; Wu, Jianhong
2017-08-01
We investigate an age-structured hyperbolic equation model by allowing the birth and death functions to be density dependent and periodic in time with the consideration of seasonal effects. By studying the integral form solution of this general hyperbolic equation obtained through the method of integration along characteristics, we give a detailed proof of the uniqueness and existence of the solution in light of the contraction mapping theorem. With additional biologically natural assumptions, using the tick population growth as a motivating example, we derive an age-structured model with time-dependent periodic maturation delays, which is quite different from the existing population models with time-independent maturation delays. For this periodic differential system with seasonal delays, the basic reproduction number R0 is defined as the spectral radius of the next generation operator. Then, we show that the tick population tends to die out when R0 < 1. When there is no intra-specific competition among immature individuals due to the sufficient availability of immature tick hosts, the global stability of the positive periodic state for the whole model system of four delay differential equations can be obtained with the observation that a scalar subsystem for the adult stage size can be decoupled. The challenge for the proof of such a global stability result can be overcome by introducing a new phase space, based on which, a periodic solution semiflow can be defined which is eventually strongly monotone and strictly subhomogeneous.
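The threshold role of R0 in this record can be mimicked with a heavily simplified toy: for a scalar periodic linear model x'(t) = (b(t) − d(t)) x(t), the population persists iff the growth factor over one period exceeds 1. All rates below are assumptions for illustration; the paper's R0 is the spectral radius of a next generation operator, not this scalar factor.

```python
import numpy as np

def growth_factor(b_amp, period=365.0, steps=10000):
    """Per-period multiplier exp(∫(b - d) dt) for seasonal births b(t)
    and a constant death rate d (all values illustrative)."""
    t = np.linspace(0.0, period, steps)
    b = b_amp * (1.0 + np.cos(2 * np.pi * t / period)) / 2.0  # seasonal births
    d = 0.01 * np.ones_like(t)                                # constant deaths
    integrand = b - d
    return float(np.exp(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))

print("low births  -> dies out :", growth_factor(0.005) < 1.0)
print("high births -> persists :", growth_factor(0.05) > 1.0)
```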
Farrell, Colm; Hayes, Siobhan C; Wire, Mary; Zhang, Jianping
2014-01-01
Aims: To characterize the pharmacokinetics (PK)/pharmacodynamics (PD) of eltrombopag in chronic liver disease (CLD). Methods: The PK/PD model was developed using data from 79 CLD patients using nonlinear mixed-effects modelling. Results: The PK of eltrombopag were described by a two-compartment model with dual sequential first-order absorption. Gender, race and severity of CLD were predictors of the apparent clearance of eltrombopag. The PD of eltrombopag in CLD were adequately described by a four-compartment lifespan model, in which eltrombopag stimulated platelet precursor production rate. East Asian CLD patients were less sensitive to the stimulatory effect of eltrombopag. Following a daily dose regimen of 50 mg eltrombopag, the time to achieve peak platelet counts was longer for the CLD population compared with patients who had immune thrombocytopenic purpura, but was comparable to patients with hepatitis C. Likewise, it took a longer time for platelet counts to rebound back to baseline once eltrombopag treatment was discontinued. Conclusions: The time course of the platelet response in CLD was different from that in immune thrombocytopenic purpura but comparable to that in hepatitis C. PMID:24117976
Culture, personality, and subjective well-being: integrating process models of life satisfaction.
Schimmack, Ulrich; Radhakrishnan, Phanikiran; Oishi, Shigehiro; Dzokoto, Vivian; Ahadi, Stephan
2002-04-01
The authors examined the interplay of personality and cultural factors in the prediction of the affective (hedonic balance) and the cognitive (life satisfaction) components of subjective well-being (SWB). They predicted that the influence of personality on life satisfaction is mediated by hedonic balance and that the relation between hedonic balance and life satisfaction is moderated by culture. As a consequence, they predicted that the influence of personality on life satisfaction is also moderated by culture. Participants from 2 individualistic cultures (United States, Germany) and 3 collectivistic cultures (Japan, Mexico, Ghana) completed measures of Extraversion, Neuroticism, hedonic balance, and life satisfaction. As predicted, Extraversion and Neuroticism influenced hedonic balance to the same degree in all cultures, and hedonic balance was a stronger predictor of life satisfaction in individualistic than in collectivistic cultures. The influence of Extraversion and Neuroticism on life satisfaction was largely mediated by hedonic balance. The results suggest that the influence of personality on the emotional component of SWB is pancultural, whereas the influence of personality on the cognitive component of SWB is moderated by culture.
Gentz, Steven J.; Ordway, David O.; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approximately 9 inches from the source) dominated by direct wave propagation, mid-field environment (approximately 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This document contains appendices to the Volume I report.
Qi, Shouliang; Zhang, Baihua; Yue, Yong; Shen, Jing; Teng, Yueyang; Qian, Wei; Wu, Jianlin
2018-03-01
Tracheal Bronchus (TB) is a rare congenital anomaly characterized by the presence of an abnormal bronchus originating from the trachea or main bronchi and directed toward the upper lobe. The airflow pattern in tracheobronchial trees of TB subjects is critical, but has not been systemically studied. This study proposes to simulate the airflow using CT image based models and the computational fluid dynamics (CFD) method. Six TB subjects and three health controls (HC) are included. After the geometric model of tracheobronchial tree is extracted from CT images, the spatial distribution of velocity, wall pressure, wall shear stress (WSS) is obtained through CFD simulation, and the lobar distribution of air, flow pattern and global pressure drop are investigated. Compared with HC subjects, the main bronchus angle of TB subjects and the variation of volume are large, while the cross-sectional growth rate is small. High airflow velocity, wall pressure, and WSS are observed locally at the tracheal bronchus, but the global patterns of these measures are still similar to those of HC. The ratio of airflow into the tracheal bronchus accounts for 6.6-15.6% of the inhaled airflow, decreasing the ratio to the right upper lobe from 15.7-21.4% (HC) to 4.9-13.6%. The air into tracheal bronchus originates from the right dorsal near-wall region of the trachea. Tracheal bronchus does not change the global pressure drop which is dependent on multiple variables. Though the tracheobronchial trees of TB subjects present individualized features, several commonalities on the structural and airflow characteristics can be revealed. The observed local alternations might provide new insight into the reason of recurrent local infections, cough and acute respiratory distress related to TB.
Planck 2013 results. XV. CMB power spectra and likelihood
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. 
M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. […] We assess the impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK² at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, n_s = 1. Increasing the multipole range beyond ℓ ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-ℓ spectrum in the form of a power deficit of 5-10% at ℓ ≲ 40, with a statistical significance of 2.5-3σ. Without a theoretically motivated model for
Directory of Open Access Journals (Sweden)
Han Tantri Hardini
2016-12-01
This research examines the influence of the problem-based learning model on students' activities and achievement in the Financial Management subject for undergraduate students of Accounting Education. It was a quantitative study using a true experimental design. The sample comprised undergraduate Accounting Education students of the 2014 cohort; class A was the control class and class B the experimental class. Data were analyzed using a t-test to determine the differences in learning outcomes between the control and experimental classes. Questionnaires were then distributed to gather information on students' activities under each learning model. Findings show that the problem-based learning model influences students' activities and learning outcomes in the Financial Management subject, since t-count ≥ t-table (6.120 ≥ 1.9904). Students' learning activities under the problem-based learning model are better than those of students taught with a conventional learning model.
Centrifuge modelling of large diameter pile in sand subject to lateral loading
DEFF Research Database (Denmark)
Leth, Caspar Thrane
[…] and cyclic behaviour of large diameter rigid piles in dry sand by use of physical modelling. The physical modelling has been carried out at the Department of Civil Engineering at the Danish Technical University (DTU.BYG) in the period from 2005 to 2009. The main centrifuge facilities, and especially the equipment for lateral load tests, were outdated at the start of the research in 2005, and a major part of the work with the geotechnical centrifuge involved renovation and upgrading of the facilities. The research on testing of large diameter piles included construction of equipment […] with embedment lengths of 6, 8 and 10 times the diameter. The tests have been carried out with a load eccentricity of 2.5 m to 6.5 m above the sand surface. The present report includes a description of the centrifuge facilities, the applied test procedure and equipment, along with a presentation of the obtained results.
Memory effects on a resonate-and-fire neuron model subjected to Ornstein-Uhlenbeck noise
Paekivi, S.; Mankin, R.; Rekker, A.
2017-10-01
We consider a generalized Langevin equation with an exponentially decaying memory kernel as a model for the firing process of a resonate-and-fire neuron. The effect of temporally correlated random neuronal input is modeled as Ornstein-Uhlenbeck noise. In the noise-induced spiking regime of the neuron, we derive exact analytical formulas for the dependence of some statistical characteristics of the output spike train, such as the probability distribution of the interspike intervals (ISIs) and the survival probability, on the parameters of the input stimulus. Particularly, on the basis of these exact expressions, we have established sufficient conditions for the occurrence of memory-time-induced transitions between unimodal and multimodal structures of the ISI density and a critical damping coefficient which marks a dynamical transition in the behavior of the system.
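A minimal sketch of the noise-induced spiking regime described above: a damped harmonic oscillator (a common reduction of the resonate-and-fire neuron) driven by Ornstein-Uhlenbeck noise, integrated with Euler-Maruyama, collecting interspike intervals at threshold crossings. All parameter values are illustrative assumptions, not taken from the paper, and the simulation stands in for the paper's exact analytical treatment.

```python
import math
import random

def simulate_isis(gamma=0.5, omega=1.0, theta=1.0, tau_n=1.0, D=1.0,
                  dt=1e-3, t_max=500.0, seed=1):
    """Collect interspike intervals (ISIs) of a resonate-and-fire-style
    oscillator x'' = -2*gamma*x' - omega^2*x + eta(t), where eta is
    Ornstein-Uhlenbeck noise; a spike is a threshold crossing x >= theta."""
    rng = random.Random(seed)
    x, v, eta = 0.0, 0.0, 0.0
    t, t_last, isis = 0.0, 0.0, []
    for _ in range(int(t_max / dt)):
        # OU input: d(eta) = -(eta/tau_n) dt + (sqrt(2*D)/tau_n) dW
        eta += (-eta / tau_n) * dt + math.sqrt(2.0 * D * dt) / tau_n * rng.gauss(0.0, 1.0)
        v += (-2.0 * gamma * v - omega**2 * x + eta) * dt
        x += v * dt
        t += dt
        if x >= theta:                  # spike: record ISI and reset state
            isis.append(t - t_last)
            t_last = t
            x, v = 0.0, 0.0             # eta is the input, so it is not reset
    return isis
```

Histogramming the returned ISIs for different values of the noise correlation time `tau_n` would show the kind of unimodal-to-multimodal transitions the paper derives analytically.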
Nitroglycerin provocation in normal subjects is not a useful human migraine model?
DEFF Research Database (Denmark)
Tvedskov, J F; Iversen, Helle Klingenberg; Olesen, J
2010-01-01
Provoking delayed migraine with nitroglycerin in migraine sufferers is a cumbersome model. Patients are difficult to recruit, migraine comes on late and variably, and only 50-80% of patients develop an attack. A model using normal volunteers would be much more useful, but it should be validated. […] [We administered] aspirin 1000 mg, zolmitriptan 5 mg or placebo to normal healthy volunteers. The design was double-blind, placebo-controlled, three-way crossover. Our hypothesis was that these drugs would be effective in the treatment of the mild constant headache induced by long-lasting GTN infusion. The headaches did […] The experiment suggests that headache caused by direct nitric oxide (NO) action in the continued presence of NO is very resistant to analgesics and to specific acute migraine treatments. This suggests that NO works very deep in the cascade of events associated with vascular headache, whereas the tested drugs work […]
An evaluation of the hemiplegic subject based on the Bobath approach. Part I: The model.
Guarna, F; Corriveau, H; Chamberland, J; Arsenault, A B; Dutil, E; Drouin, G
1988-01-01
An evaluation based on the Bobath approach to treatment has been developed. A model substantiating this evaluation is presented. In this model, the three stages of motor recovery described by Bobath have been extended to six, to better follow the progression of the patient. Six parameters have also been identified; these are the elements to be quantified so that the patient's progress through the stages of motor recovery can be followed. Four of these parameters are borrowed from the Bobath approach: postural reaction, muscle tone, reflex activity and active movement. Two have been added: sensorium and pain. An accompanying paper presents the evaluation protocol along with the operational definition of each of these parameters.
Prediction model for carbonation depth of concrete subjected to freezing-thawing cycles
Xiao, Qian Hui; Li, Qiang; Guan, Xiao; Xian Zou, Ying
2018-03-01
Through indoor simulation tests of concrete durability under the coupled effects of freezing-thawing and carbonation, the variation of concrete neutralization depth under freezing-thawing and carbonation was obtained. Based on the concrete carbonation mechanism, the relationship between the air diffusion coefficient and porosity in concrete was analyzed, and the calculation of porosity in Portland cement concrete and fly ash cement concrete was investigated, considering the influence of freezing-thawing damage on the concrete diffusion coefficient. Finally, a prediction model for the carbonation depth of concrete under freezing-thawing conditions was established. The results obtained using this prediction model agreed well with the experimental test results, providing a theoretical reference and basis for concrete durability analysis under multi-factor environments.
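The abstract describes a diffusion-based carbonation model; the standard starting point is the square-root-of-time law, with freeze-thaw damage entering through an increased effective diffusion coefficient. The sketch below assumes a linear growth of the diffusion coefficient with the number of freeze-thaw cycles and an illustrative carbonation coefficient; both constants are placeholders, not values from the paper.

```python
import math

def carbonation_depth_mm(t_years, k_ref=4.0, ft_cycles=0, damage_per_cycle=0.002):
    """Square-root-of-time carbonation law x = k * sqrt(t).
    Freeze-thaw damage is assumed to scale the effective CO2 diffusion
    coefficient D linearly with cycle count; since k ~ sqrt(D), the
    carbonation coefficient grows with sqrt(1 + damage)."""
    damage = 1.0 + damage_per_cycle * ft_cycles
    k = k_ref * math.sqrt(damage)       # mm per sqrt(year), illustrative
    return k * math.sqrt(t_years)
```

Under these assumed constants, 300 freeze-thaw cycles raise the 4-year carbonation depth from 8.0 mm to about 10.1 mm, illustrating how the coupled model predicts deeper neutralization than carbonation alone.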
International Nuclear Information System (INIS)
Weatherby, J.R.; Clauss, D.B.
1987-01-01
Under the sponsorship of the US Nuclear Regulatory Commission, Sandia National Laboratories is investigating methods for predicting the structural performance of nuclear reactor containment buildings under hypothesized severe accident conditions. As part of this program, a 1/6th-scale reinforced concrete containment model will be pressurized to failure in early 1987. Data generated by the test will be compared to analytical predictions of the structural response in order to assess the accuracy and reliability of the analytical techniques. As part of the pretest analysis effort, Sandia has conducted a number of analyses of the containment structure using the ABAQUS general purpose finite element code. This paper describes results from a nonlinear axisymmetric shell analysis as well as the material models and failure criteria used in conjunction with the analysis
Modelling critical degrees of saturation of porous building materials subjected to freezing
DEFF Research Database (Denmark)
Hansen, Ernst Jan De Place
1996-01-01
Frost resistance of porous materials can be characterized by the critical degree of saturation, SCR, and the actual degree of saturation, SACT. An experimental determination of SCR is very laborious and therefore only seldom used when testing frost resistance. A theoretical model for prediction of SCR based on fracture mechanics and phase geometry of two-phase materials has been developed. The degradation is modelled as being caused by different eigenstrains of the pore phase and the solid phase when freezing, leading to stress concentrations and crack propagation. Simplifications are made to describe the development of stresses and the pore structure, because a mathematical description of the physical theories explaining the process of freezing of water in porous materials is lacking. Calculations are based on porosity, modulus of elasticity and tensile strength, and parameters characterizing […]
Directory of Open Access Journals (Sweden)
Mosbeh R. Kaloop
2016-10-01
The present study investigates the prediction efficiency of nonlinear system-identification models in assessing the behavior of a coupled structure-passive vibration controller. Two system-identification models, Nonlinear AutoRegressive with eXogenous inputs (NARX) and the adaptive neuro-fuzzy inference system (ANFIS), are used to model the behavior of an experimentally scaled three-story building incorporating a tuned mass damper (TMD) subjected to seismic loads. The experimental study is performed to generate the input and output data sets for training and testing the designed models. Root-mean-squared error, mean absolute error and the determination coefficient are used to compare the performance of the aforementioned models. The TMD controller system works efficiently to mitigate the structural vibration. The results revealed that the NARX and ANFIS models could be used to identify the response of a controlled structure. The time delays of both the structure response and the seismic load proved to be effective in identifying the performance of the models. A comparison based on the parametric evaluation of the two methods showed that the NARX model outperforms the ANFIS model in identifying the structure's response.
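The three comparison statistics named in the abstract (root-mean-squared error, mean absolute error, determination coefficient) can be sketched directly. This generic implementation assumes plain numeric sequences and is not tied to the NARX/ANFIS toolchain used in the study.

```python
import math

def fit_stats(y_true, y_pred):
    """Return (RMSE, MAE, R^2) for measured vs. predicted responses."""
    n = len(y_true)
    resid = [yt - yp for yt, yp in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mae = sum(abs(r) for r in resid) / n
    mean_y = sum(y_true) / n
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)   # total variation
    ss_res = sum(r * r for r in resid)                  # unexplained variation
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2
```

Applying this to measured versus predicted floor accelerations would reproduce the kind of per-model comparison table the paper reports.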
DeSmitt, Holly J; Domire, Zachary J
2016-12-01
Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
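The muscle-redundancy point above can be made concrete: when two modeled muscles share the same moment-arm profile, only their combined contribution is identifiable from a strength curve, so very different force pairs reproduce it exactly. The two-muscle knee model below is a deliberately simplified illustration with assumed moment arms, not the study's musculoskeletal model.

```python
import math

def knee_torque(theta_rad, f1, f2, r1=0.04, r2=0.05):
    """Isometric extension torque (N*m) from two lumped 'muscles' whose moment
    arms share one angle-dependent shape; only f1*r1 + f2*r2 is identifiable."""
    return (f1 * r1 + f2 * r2) * math.sin(theta_rad)

angles = [math.radians(a) for a in range(30, 100, 10)]
curve_a = [knee_torque(t, f1=2000.0, f2=1000.0) for t in angles]
curve_b = [knee_torque(t, f1=750.0, f2=2000.0) for t in angles]
# both parameter sets satisfy f1*r1 + f2*r2 = 130, so the strength
# curves are identical despite very different individual muscle forces
```

Any optimizer matching only the total strength curve, with or without measurement noise, cannot distinguish these two solutions, which is the non-uniqueness the study demonstrates with simulated data.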
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
Frome, E.L.; DuFrain, R.J.
1986-01-01
In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
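A hedged sketch of the estimation problem described above: dicentric counts follow a Poisson law with linear-quadratic yield λ(D) = αD + βD², and the parameters maximize the Poisson likelihood. The crude grid search below stands in for the paper's general-purpose Poisson-regression procedure, and all doses and counts are synthetic.

```python
import math

def neg_log_lik(alpha, beta, doses, counts, cells):
    """Poisson negative log-likelihood (up to a constant) for the
    linear-quadratic yield lambda(D) = alpha*D + beta*D^2 per cell."""
    nll = 0.0
    for d, y, n in zip(doses, counts, cells):
        mu = n * (alpha * d + beta * d * d)   # expected total dicentrics
        nll += mu - y * math.log(mu)
    return nll

def fit_grid(doses, counts, cells, steps=200):
    """Crude grid-search MLE; a real analysis would use Poisson regression
    (iteratively reweighted least squares) as in standard software."""
    best = None
    for i in range(1, steps):
        for j in range(1, steps):
            a, b = 0.2 * i / steps, 0.1 * j / steps
            nll = neg_log_lik(a, b, doses, counts, cells)
            if best is None or nll < best[0]:
                best = (nll, a, b)
    return best[1], best[2]

# synthetic acute-exposure example: 1000 cells scored at each dose (Gy)
doses = [1.0, 2.0, 3.0, 4.0]
cells = [1000, 1000, 1000, 1000]
counts = [110, 340, 690, 1160]   # generated from alpha=0.05, beta=0.06
```

Because the synthetic counts equal their expectations exactly, the likelihood is maximized at the generating parameters, which the grid search recovers.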
Towards Subject-Specific Strength Training Design through Predictive Use of Musculoskeletal Models
Directory of Open Access Journals (Sweden)
Michael Plüss
2018-01-01
Lower extremity dysfunction is often associated with hip muscle strength deficiencies. Detailed knowledge of the muscle forces generated in the hip under specific external loading conditions enables specific structures to be trained. The aim of this study was to find the most effective movement type and loading direction for training specific parts of the hip muscles using a standing posture and a pulley system. In a novel approach to harness the predictive power of musculoskeletal modelling techniques based on inverse dynamics, flexion/extension and ab-/adduction movements were virtually created. To demonstrate the effectiveness of this approach, three hip orientations and an external loading force that was systematically rotated around the body were simulated using a state-of-the-art OpenSim model in order to establish ideal designs for training of the anterior and posterior parts of the M. gluteus medius (GM). The external force direction as well as the hip orientation greatly influenced the muscle forces in the different parts of the GM. No setting was found for simultaneous training of the anterior and posterior parts with a muscle force higher than 50% of the maximum. Importantly, this study has demonstrated the use of musculoskeletal models as an approach to predict muscle force variations for different strength and rehabilitation exercise variations.
Analytical modeling of tube-to-tubesheet joints subjected to plasticity and creep
International Nuclear Information System (INIS)
Bouzid, A.-H.; Laghzale, N-E.
2009-01-01
The mechanism of failure of heat exchanger and steam generator tube-to-tubesheet joints is related to the level of residual stresses produced in the tube expansion and transition zones during the expansion process and their variation during operation. Accurate prediction of these stresses, based on the plastic and creep properties of the joint materials involved, can help to design for better leak tightness and strength. Existing design calculations are based on an elastic perfectly plastic behavior of the expansion joint materials and do not account for creep. The proposed model is based on a linear strain-hardening material behavior and considers the relaxation of the joint contact pressure with time. The interaction of the tube and the tubesheet is simulated during the application of the expansion pressure and during operation. The effects of the gap, material strain hardening and creep properties are emphasized. The developed model results are validated against more accurate numerical FEA models. (author)
Directory of Open Access Journals (Sweden)
Lars Marcus
2018-04-01
The world is witnessing unprecedented urbanization, bringing extreme challenges to contemporary practices in urban planning and design. This calls for improved urban models that can generate new knowledge and enhance practical skill. Importantly, any urban model embodies a conception of the relation between humans and the physical environment. In urban modeling this is typically conceived of as a relation between human subjects and an environmental object, thereby reproducing a human-environment dichotomy. Alternative modeling traditions, such as space syntax, which originates in architecture rather than geography, have tried to overcome this dichotomy. Central to this effort is the development of new representations of urban space, such as, in the case of space syntax, the axial map. This form of representation aims to integrate both human behavior and the physical environment into one and the same description. Interestingly, models based on these representations have proved to capture pedestrian movement better than regular models. Pedestrian movement, as well as other kinds of human flows in urban space, is essential for urban modeling, since such flows are increasingly understood as the drivers of urban processes. Critical for a full understanding of space syntax modeling is the ontology of its representations, such as the axial map. Space syntax theory here often refers to James Gibson's "Theory of affordances," where the concept of affordances, in a manner similar to axial maps, aims to bridge the subject-object dichotomy by constituting neither physical properties of the environment nor human behavior, but rather what emerges in the meeting between the two. In extension of this, the axial map can be interpreted as a representation of how the physical form of the environment affords human accessibility and visibility in urban space. This paper presents a close examination of the form of representations developed in space syntax […]
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior research on the epidemiological measurement of diseases caused by either heavy metal exposures in the environment or clinical complications in hospital. The simulation validity was confirmed with commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10⁵ sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
Capasso, Roberto; Zurlo, Maria Clelia; Smith, Andrew P
2018-02-01
This study integrates different aspects of ethnicity and work-related stress dimensions (based on the Demands-Resources-Individual-Effects model, DRIVE [Mark, G. M., and A. P. Smith. 2008. "Stress Models: A Review and Suggested New Direction." In Occupational Health Psychology, edited by J. Houdmont and S. Leka, 111-144. Nottingham: Nottingham University Press]) and aims to test a multi-dimensional model that combines individual differences, ethnicity dimensions, work characteristics, and perceived job satisfaction/stress as independent variables in the prediction of subjective reports of health by workers differing in ethnicity. A questionnaire consisting of the following sections was submitted to 900 workers in Southern Italy: for individual and cultural characteristics, coping strategies, personality behaviours, and acculturation strategies; for work characteristics, perceived job demands and job resources/rewards; for appraisals, perceived job stress/satisfaction and racial discrimination; and for subjective reports of health, psychological disorders and general health. Analyses were conducted to test the reliability and construct validity of the factors extracted for all dimensions involved in the proposed model, and logistic regression analyses evaluated the main effects of the independent variables on the health outcomes. Principal component analysis (PCA) yielded seven factors for individual and cultural characteristics (emotional/relational coping, objective coping, Type A behaviour, negative affectivity, social inhibition, affirmation/maintenance culture, and search identity/adoption of the host culture); three factors for work characteristics (work demands, intrinsic/extrinsic rewards, and work resources); three factors for appraisals (perceived job satisfaction, perceived job stress, and perceived racial discrimination); and three factors for subjective reports of health (interpersonal disorders, anxious-depressive disorders, and general health). […]
Wong, Y Joel; Tsai, Pei-Chun; Liu, Tao; Zhu, Qingqing; Wei, Meifen
2014-10-01
This study examined male Asian international college students' perceptions of racial discrimination, subjective masculinity stress, centrality of masculine identity, and psychological distress by testing a moderated mediation model. Participants were 160 male Asian international college students from 2 large public universities. Participants' perceived racial discrimination was positively related to their subjective masculinity stress only at high (but not low) levels of masculine identity centrality. Additionally, subjective masculinity stress was positively related to psychological distress, although this association was stronger among those who reported high levels of masculine identity centrality. The authors also detected a moderated mediation effect in which subjective masculinity stress mediated the relationship between perceived racial discrimination and psychological distress only at high (but not low) levels of masculine identity centrality. These findings contribute to the counseling psychology literature by highlighting the connections between race- and gender-related stressors as well as the relevance of masculine identity to an understanding of men's mental health. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Maximum Likelihood Learning of Conditional MTE Distributions
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
We describe a procedure for inducing conditional densities within the mixtures of truncated exponentials (MTE) framework. We analyse possible conditional MTE specifications and propose a model selection scheme, based on the BIC score, for partitioning the domain of the conditioning variables. Finally, experimental results demonstrate the applicability of the learning procedure as well as the expressive power of the conditional MTE distribution.
Song, Hong Ji; Paek, Yu Jin; Choi, Min Kyu; Yoo, Ki-Bong; Kang, Jae-Heon; Lee, Hae-Jeung
2017-06-01
The aim of the present study was to investigate the association between hypertension and carbonated sugar-sweetened beverages (SSB) intake according to gender and obesity. The study used data from 2007, 2008 and 2009 Korea National Health and Nutrition Examination Surveys. A total of 9869 subjects (men = 3845 and women = 6024) were included. SSB intakes were calculated from food frequency questionnaires. Odds ratios (ORs) and 95 % confidence interval (CI) for hypertension were assessed using survey logistic regression and multivariable adjusted models. A total of 14.5 % of individuals were classified as having hypertension. The likelihood of hypertension in the third, fourth and fifth quintiles for SSB intake increased to OR 1.00, 1.20 and 1.42 respectively, after adjusting for confounding factors. Compared to the participants in the lowest tertile for SSB intake, participants in the third tertile showed an increased likelihood of hypertension with ORs (CI) of 2.00 (1.21-3.31) and 1.75 (1.23-2.49) for obese women and non-obese men, respectively. The present study showed gender differences in the relationship between carbonated SSB intake and the hypertension according to obesity.
Modeling indoor air pollution of outdoor origin in homes of SAPALDIA subjects in Switzerland.
Meier, Reto; Schindler, Christian; Eeftens, Marloes; Aguilera, Inmaculada; Ducret-Stich, Regina E; Ineichen, Alex; Davey, Mark; Phuleria, Harish C; Probst-Hensch, Nicole; Tsai, Ming-Yi; Künzli, Nino
2015-09-01
Given the shrinking spatial contrasts in outdoor air pollution in Switzerland and the trends toward tightly insulated buildings, the Swiss Cohort Study on Air Pollution and Lung and Heart Diseases in Adults (SAPALDIA) needs to understand to what extent outdoor air pollution remains a determinant for residential indoor exposure. The objectives of this paper are to identify determining factors for indoor air pollution concentrations of particulate matter (PM), ultrafine particles in the size range from 15 to 300nm, black smoke measured as light absorbance of PM (PMabsorbance) and nitrogen dioxide (NO2) and to develop predictive indoor models for SAPALDIA. Multivariable regression models were developed based on indoor and outdoor measurements among homes of selected SAPALDIA participants in three urban (Basel, Geneva, Lugano) and one rural region (Wald ZH) in Switzerland, various home characteristics and reported indoor sources such as cooking. Outdoor levels of air pollutants were important predictors for indoor air pollutants, except for the coarse particle fraction. The fractions of outdoor concentrations infiltrating indoors were between 30% and 66%, the highest one was observed for PMabsorbance. A modifying effect of open windows was found for NO2 and the ultrafine particle number concentration. Cooking was associated with increased particle and NO2 levels. This study shows that outdoor air pollution remains an important determinant of residential indoor air pollution in Switzerland. Copyright © 2015 Elsevier Ltd. All rights reserved.
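The reported infiltration fractions (30-66%) correspond to the slope of an indoor-on-outdoor regression. A minimal ordinary-least-squares sketch is below; the full SAPALDIA models additionally adjust for home characteristics and indoor sources such as cooking, so this univariable version is a simplification under stated assumptions.

```python
def infiltration_fraction(outdoor, indoor):
    """OLS slope of the indoor-on-outdoor regression, read as the fraction of
    the outdoor concentration infiltrating indoors; the intercept collects
    indoor sources not explained by outdoor levels."""
    n = len(outdoor)
    mx = sum(outdoor) / n
    my = sum(indoor) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(outdoor, indoor))
    sxx = sum((x - mx) ** 2 for x in outdoor)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

A slope of 0.5 for paired NO2 measurements, for example, would mean half the outdoor concentration is found indoors, within the 30-66% range the study reports for the measured pollutants.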
Directory of Open Access Journals (Sweden)
Hojjat A. Izadi
2011-01-01
The decentralized model predictive control (DMPC) of multiple cooperative vehicles with the possibility of communication loss/delay is investigated. The neighboring vehicles exchange their predicted trajectories at every sample time to maintain the cooperation objectives. In the event of a communication loss (packet dropout), the most recent available information, which is potentially delayed, is used. The communication-loss problem thus becomes a cooperation problem in which random, large communication delays are present. Such large communication delays can lead to poor cooperation performance and unsafe behaviors such as collisions. A new DMPC approach is developed to improve the cooperation performance and achieve safety in the presence of large communication delays. The proposed DMPC architecture estimates the tail of a neighbor's trajectory that is not available due to the large communication delays, improving performance. The concept of tube MPC is also employed to ensure the safety of the fleet against collisions in the presence of large intervehicle communication delays. In this approach, a tube-shaped trajectory set is assumed around the trajectory of the neighboring vehicles whose trajectory is delayed/lost. The radius of the tube is a function of the communication delay and the vehicle's maneuverability (in the absence of model uncertainty). The simulation of a formation problem of multiple vehicles is employed to illustrate the effectiveness of the proposed approach.
Non-constant link tension coefficient in the tumbling-snake model subjected to simple shear
Stephanou, Pavlos S.; Kröger, Martin
2017-11-01
The authors of the present study have recently presented evidence that the tumbling-snake model for polymeric systems has the necessary capacity to predict the appearance of pronounced undershoots in the time-dependent shear viscosity as well as an absence of equally pronounced undershoots in the transient two normal stress coefficients. The undershoots were found to appear due to the tumbling behavior of the director u when a rotational Brownian diffusion term is considered within the equation of motion of polymer segments, and a theoretical basis concerning the use of a link tension coefficient given through the nematic order parameter had been provided. The current work elaborates on the quantitative predictions of the tumbling-snake model to demonstrate its capacity to predict undershoots in the time-dependent shear viscosity. These predictions are shown to compare favorably with experimental rheological data for both polymer melts and solutions, help us to clarify the microscopic origin of the observed phenomena, and demonstrate in detail why a constant link tension coefficient has to be abandoned.
Directory of Open Access Journals (Sweden)
Daniel O. Aikhuele
2017-06-01
In this paper, a subjective and objective fuzzy-based Analytical Hierarchy Process (AHP) model is proposed. The model, which is based on a newly defined evaluation matrix, replaces the fuzzy comparison matrix (FCM) in the traditional fuzzy AHP model, which has been found ineffective and time-consuming as the number of criteria/alternatives increases. The main advantage of the new model is that it is straightforward and completely eliminates the repetitive adjustment of data that is common with the FCM in the traditional AHP model. The model reduces the complete dependency on human judgment in prioritization assessment, since the weight values are solved automatically using the evaluation matrix and the modified priority weight formula in the proposed model. By virtue of a numerical case study, the model is successfully applied to determine the implementation priorities of lean practices for a product development environment and compared with similar computational methods in the literature.
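The paper's evaluation matrix is its own construction, but the FCM baseline it replaces descends from classical AHP, where priority weights are derived from a pairwise comparison matrix. A hedged sketch of the standard geometric-mean weighting step, with an illustrative 3x3 matrix:

```python
# Classical (crisp) AHP weighting step that fuzzy-AHP variants build on:
# priority weights from a pairwise comparison matrix via row geometric means.
# The comparison matrix below is illustrative, not taken from the paper.
import math

A = [
    [1.0, 3.0, 5.0],    # criterion 1 compared against criteria 1, 2, 3
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

n = len(A)
geo = [math.prod(row) ** (1.0 / n) for row in A]  # row geometric means
total = sum(geo)
weights = [g / total for g in geo]                # normalize to sum to 1

print([round(w, 3) for w in weights])
```

It is exactly this repeated pairwise entry and adjustment of A that the proposed evaluation-matrix model is designed to avoid as the number of criteria grows.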
Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.
2011-12-01
A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.
International Nuclear Information System (INIS)
Beltrachini, L; Blenkmann, A; Ellenrieder, N von; Muravchik, C H; Petroni, A; Urquina, H; Manes, F; Ibáñez, A
2011-01-01
Energy Technology Data Exchange (ETDEWEB)
Beltrachini, L; Blenkmann, A; Ellenrieder, N von; Muravchik, C H [Laboratory of Industrial Electronics, Control and Instrumentation (LEICI), National University of La Plata (Argentina); Petroni, A [Integrative Neuroscience Laboratory, Physics Department, University of Buenos Aires, Buenos Aires (Argentina); Urquina, H; Manes, F; Ibanez, A [Institute of Cognitive Neurology (INECO) and Institute of Neuroscience, Favaloro University, Buenos Aires (Argentina)
2011-12-23
Brief communication: Drought likelihood for East Africa
Yang, Hui; Huntingford, Chris
2018-02-01
The East Africa drought in autumn 2016 caused malnutrition, illness and death. Close to 16 million people across Somalia, Ethiopia and Kenya needed food, water and medical assistance. Many factors influence drought stress and response, but inevitably the question is asked: are elevated greenhouse gas concentrations altering the frequency of extreme rainfall deficits? We investigate this with general circulation models (GCMs). After bias-correcting the GCMs to match the climatological mean of the CHIRPS data-based rainfall product, the climate models project, for the end of the 21st century relative to the present day, small decreases in the probability of a drought of the same (or worse) severity as the 2016 ASO (August to October) East African event. However, when the climatological variability of the GCMs is further adjusted to match the CHIRPS data, by additionally bias-correcting for variance, the probability of drought occurrence instead increases slightly over the same period.
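The two bias-correction steps described above, matching the observational mean and then additionally the variance, can be sketched as follows. The rainfall series and drought threshold are illustrative, not CHIRPS or GCM output:

```python
# Sketch (illustrative data, not CHIRPS/GCM output) of the two bias-correction
# steps described above: (1) match the observational mean; (2) additionally
# match the observational variance; then compare empirical drought frequencies.
import statistics as st

obs = [55, 80, 95, 60, 110, 40, 85, 70, 100, 65]    # "observed" seasonal rainfall
model = [90, 100, 95, 92, 105, 88, 98, 94, 102, 96]  # raw model series (too wet, too flat)

def correct_mean(series, ref):
    shift = st.mean(ref) - st.mean(series)
    return [x + shift for x in series]

def correct_mean_and_variance(series, ref):
    mu_s, sd_s = st.mean(series), st.pstdev(series)
    mu_r, sd_r = st.mean(ref), st.pstdev(ref)
    return [mu_r + (x - mu_s) * sd_r / sd_s for x in series]

threshold = 55  # drought if seasonal rainfall falls below this (illustrative)

def p_drought(series):
    return sum(x < threshold for x in series) / len(series)

m1 = correct_mean(model, obs)
m2 = correct_mean_and_variance(model, obs)
print(p_drought(m1), p_drought(m2))
```

With these toy numbers the mean-only correction yields no sub-threshold seasons while the variance correction produces some, mirroring how the variance adjustment widens the simulated tails and raises the estimated drought probability.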
Brief communication: Drought likelihood for East Africa
Directory of Open Access Journals (Sweden)
H. Yang
2018-02-01
Koriat, Asher; Sorka, Hila
2015-01-01
The classification of objects to natural categories exhibits cross-person consensus and within-person consistency, but also some degree of between-person variability and within-person instability. What is more, the variability in categorization is not entirely random but discloses systematic patterns. In this study, we applied the Self-Consistency Model (SCM; Koriat, 2012) to category membership decisions, examining the possibility that confidence judgments and decision latency track the stable and variable components of categorization responses. The model assumes that category membership decisions are constructed on the fly depending on a small set of clues that are sampled from a commonly shared population of pertinent clues. The decision and confidence are based on the balance of evidence in favor of a positive or a negative response. The results confirmed several predictions derived from SCM. For each participant, consensual responses to items were more confident than nonconsensual responses, and for each item, participants who made the consensual response tended to be more confident than those who made the nonconsensual response. The difference in confidence between consensual and nonconsensual responses increased with the proportion of participants who made the majority response for the item. A similar pattern was observed for response speed. The pattern of results obtained for cross-person consensus was replicated by the results for response consistency when the responses were classified in terms of within-person agreement across repeated presentations. These results accord with the sampling assumption of SCM, that confidence and response speed should be higher when the decision is consistent with what follows from the entire population of clues than when it deviates from it. Results also suggested that the context for classification can bias the sample of clues underlying the decision, and that confidence judgments mirror the effects of context on
Composite Estimation for Single-Index Models with Responses Subject to Detection Limits
Tang, Yanlin; Wang, Huixia Judy; Liang, Hua
2017-01-01
We propose a semiparametric estimator for single-index models with responses censored due to detection limits. In the presence of left censoring, the mean function cannot be identified without parametric distributional assumptions, but the quantile function is still identifiable at upper quantile levels. To avoid parametric distributional assumptions, we propose to fit censored quantile regression and combine information across quantile levels to estimate the unknown smooth link function and the index parameter. Under some regularity conditions, we show that the estimated link function achieves the optimal nonparametric convergence rate, and the estimated index parameter is asymptotically normal. The simulation study shows that the proposed estimator is competitive with the omniscient least squares estimator based on the latent uncensored responses for data with normal errors, but much more efficient for heavy-tailed data under light and moderate censoring. The practical value of the proposed method is demonstrated through the analysis of a human immunodeficiency virus antibody data set.
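The identifiability point above, that upper quantiles survive left censoring at a detection limit while the mean does not, can be illustrated with a toy example (illustrative numbers, not the HIV antibody data):

```python
# Toy illustration of the identifiability point above: with left censoring at a
# detection limit, upper-tail quantiles of the censored sample still equal those
# of the latent sample, while the mean does not. Numbers are illustrative.
latent = [0.2, 0.5, 0.8, 1.1, 1.6, 2.0, 2.7, 3.3, 4.1, 5.0]
limit = 1.0
censored = [max(x, limit) for x in latent]  # values below the limit recorded at it

def quantile(xs, q):
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]  # simple order-statistic quantile

# The 0.8 quantile lies above the censoring point, so it is unaffected:
print(quantile(latent, 0.8), quantile(censored, 0.8))
# The mean is biased upward by censoring:
print(sum(latent) / len(latent), sum(censored) / len(censored))
```

This is why the proposed method fits censored quantile regression at upper quantile levels and combines across them, rather than modeling the (unidentifiable) conditional mean.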
A review of shear strength models for rock joints subjected to constant normal stiffness
Directory of Open Access Journals (Sweden)
Sivanathan Thirukumaran
2016-06-01
The typical shear behaviour of rough joints has been studied under constant normal load/stress (CNL) boundary conditions, but recent studies have shown that this boundary condition may not replicate true practical situations. Constant normal stiffness (CNS) is more appropriate for describing the stress–strain response of field joints, since the CNS boundary condition is more realistic than CNL. The practical implications of CNS include movements of unstable blocks in the roof or walls of an underground excavation, reinforced rock wedges sliding in a rock slope or foundation, and the vertical movement of rock-socketed concrete piles. In this paper, the highlights and limitations of the existing models used to predict the shear strength/behaviour of joints under CNS conditions are discussed in depth.
Dynamic modeling of a thermo-piezo-electrically actuated nanosize beam subjected to a magnetic field
Ebrahimi, Farzad; Barati, Mohammad Reza
2016-04-01
In this article, the free vibration behavior of magneto-electro-thermo-elastic functionally graded (METE-FG) nanobeams is investigated based on a higher-order shear deformation beam theory. Four types of thermal loading are considered: uniform and linear temperature change, as well as heat conduction and sinusoidal temperature rise through the thickness. The magneto-electro-thermo-elastic properties of the FG nanobeam are assumed to vary continuously through the thickness according to a power-law model. Small-size effects are captured via Eringen's nonlocal elasticity theory. Based upon Hamilton's principle, the coupled nonlocal governing equations for higher-order shear deformable METE-FG nanobeams are obtained and solved analytically. It is shown that the vibrational behavior of METE-FG nanobeams is significantly affected by the various temperature rises, magnetic potential, external electric voltage, power-law index, nonlocal parameter and slenderness ratio.
Composite Estimation for Single-Index Models with Responses Subject to Detection Limits
Tang, Yanlin
2017-11-03
Assessing Compatibility of Direct Detection Data: Halo-Independent Global Likelihood Analyses
Gelmini, Graciela B.
2016-10-18
We present two different halo-independent methods utilizing a global maximum likelihood that can assess the compatibility of dark matter direct detection data given a particular dark matter model. The global likelihood we use is composed of at least one extended likelihood and an arbitrary number of Poisson or Gaussian likelihoods. In the first method we find the global best-fit halo function and construct a two-sided pointwise confidence band, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a "constrained parameter goodness-of-fit" test statistic, whose $p$-value we then use to define a "plausibility region" (e.g. where $p \geq 10\%$). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. $p < 10\%$). As an example we apply these methods to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic s...
Planck 2015 results: XI. CMB power spectra, likelihoods, and robustness of parameters
DEFF Research Database (Denmark)
Aghanim, N.; Arnaud, M.; Ashdown, M.
2016-01-01
This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (ℓ … data and of Planck polarization … information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck…
de Guzman, Allan B.; Lagdaan, Lovely France M.; Lagoy, Marie Lauren V.
2015-01-01
Subjective memory complaints are one of the major concerns of the elderly and remain a challenging area in gerontology. Previous studies have identified different factors affecting subjective memory complaints. However, an extended model that relates life-space to subjective memory complaints remains a blank spot. The objective of this…
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to the simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum-weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
Frome, E.L; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
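As a hedged sketch (not the authors' general procedure, which handles the split-dose and continuous-exposure forms), Poisson maximum likelihood for a linear-quadratic yield curve λ(d) = αd + βd² per cell can be illustrated with a grid search over the two coefficients:

```python
# Hedged sketch of Poisson maximum likelihood for a linear-quadratic dicentric
# yield curve lambda(d) = a*d + b*d**2 per cell, fitted by grid search. Doses,
# cell counts, and dicentric counts are illustrative; the counts equal the
# expected values under a = 0.03, b = 0.06 so the true fit is recoverable.
import math

doses = [0.5, 1.0, 2.0, 3.0, 4.0]     # dose in Gy
cells = [1000, 1000, 500, 500, 250]   # cells scored at each dose
counts = [30, 90, 150, 315, 270]      # dicentrics observed (illustrative)

def loglik(a, b):
    """Poisson log-likelihood (up to a constant) for yield a*d + b*d**2 per cell."""
    ll = 0.0
    for d, n, y in zip(doses, cells, counts):
        lam = n * (a * d + b * d * d)  # expected dicentric count at dose d
        ll += y * math.log(lam) - lam
    return ll

# The log-likelihood is concave in (a, b), so a grid containing the optimum
# returns it exactly; real analyses would use Poisson regression instead.
best = max((loglik(ai / 1000, bi / 1000), ai / 1000, bi / 1000)
           for ai in range(1, 101) for bi in range(1, 101))
print(best[1], best[2])  # -> 0.03 0.06
```

The intrinsic nonlinearity the abstract mentions enters when the d² coefficient itself is parameterized through the recovery function g(t, τ); the sketch above treats it as a free constant.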
Maximum likelihood estimation for cytogenetic dose-response curves
Energy Technology Data Exchange (ETDEWEB)
Frome, E.L; DuFrain, R.J.
1983-10-01