WorldWideScience

Sample records for times higher likelihood

  1. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the time-arrival-difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimation of the time delay. The method was validated in experiments, where it yielded much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error was less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. In addition to the experiments, an intensive theoretical analysis in terms of signal processing is presented. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
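The time-arrival-difference idea behind this record can be illustrated with a minimal cross-correlation sketch. This is not the paper's ML-window method itself; the sampling rate, noise levels, wave speed, and sensor spacing below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0          # sampling rate (Hz), illustrative
true_delay = 0.050   # true arrival-time difference between sensors (s)
n = 4096

# Simulated leak noise reaching two sensors with a relative delay.
source = rng.standard_normal(n)
shift = int(true_delay * fs)
s1 = source + 0.1 * rng.standard_normal(n)
s2 = np.roll(source, shift) + 0.1 * rng.standard_normal(n)

# Cross-correlate and locate the peak; the lag at the peak is the delay.
xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = np.argmax(xcorr) - (n - 1)
est_delay = lag / fs

# Leak position from the arrival-time difference, given a wave speed c
# and sensor separation L (both illustrative values).
c, L = 1200.0, 300.0                  # m/s, m
leak_pos = (L - c * est_delay) / 2.0  # distance from sensor 1 (m)
```

The paper's contribution is a maximum likelihood window applied before this correlation step, which sharpens the peak by weighting the significant frequencies.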

  2. A Non-standard Empirical Likelihood for Time Series

    DEFF Research Database (Denmark)

    Nordman, Daniel J.; Bunzel, Helle; Lahiri, Soumendra N.

Standard blockwise empirical likelihood (BEL) for stationary, weakly dependent time series requires specifying a fixed block length as a tuning parameter for setting confidence regions. This aspect can be difficult and impacts coverage accuracy. As an alternative, this paper proposes a new version of BEL based on a simple, though non-standard, data-blocking rule which uses a data block of every possible length. Consequently, the method involves no block selection and is also anticipated to exhibit better coverage performance. Its non-standard blocking scheme, however, induces non-standard asymptotics and requires a significantly different development compared to standard BEL. We establish the large-sample distribution of log-ratio statistics from the new BEL method for calibrating confidence regions for mean or smooth function parameters of time series. This limit law is not the usual chi...

  3. Relative likelihood for life as a function of cosmic time

    Energy Technology Data Exchange (ETDEWEB)

    Loeb, Abraham [Astronomy department, Harvard University, 60 Garden Street, Cambridge, MA 02138 (United States); Batista, Rafael A.; Sloan, David, E-mail: aloeb@cfa.harvard.edu, E-mail: rafael.alvesbatista@physics.ox.ac.uk, E-mail: david.sloan@physics.ox.ac.uk [Department of Physics - Astrophysics, University of Oxford, DWB, Keble Road, OX1 3RH, Oxford (United Kingdom)

    2016-08-01

Is life most likely to emerge at the present cosmic time near a star like the Sun? We address this question by calculating the relative formation probability per unit time of habitable Earth-like planets within a fixed comoving volume of the Universe, dP(t)/dt, starting from the first stars and continuing to the distant cosmic future. We conservatively restrict our attention to the context of "life as we know it" and the standard cosmological model, ΛCDM. We find that unless habitability around low-mass stars is suppressed, life is most likely to exist near ∼0.1 M⊙ stars ten trillion years from now. Spectroscopic searches for biosignatures in the atmospheres of transiting Earth-mass planets around low-mass stars will determine whether present-day life is indeed premature or typical from a cosmic perspective.

  4. Relative likelihood for life as a function of cosmic time

    International Nuclear Information System (INIS)

    Loeb, Abraham; Batista, Rafael A.; Sloan, David

    2016-01-01

Is life most likely to emerge at the present cosmic time near a star like the Sun? We address this question by calculating the relative formation probability per unit time of habitable Earth-like planets within a fixed comoving volume of the Universe, dP(t)/dt, starting from the first stars and continuing to the distant cosmic future. We conservatively restrict our attention to the context of "life as we know it" and the standard cosmological model, ΛCDM. We find that unless habitability around low-mass stars is suppressed, life is most likely to exist near ∼0.1 M⊙ stars ten trillion years from now. Spectroscopic searches for biosignatures in the atmospheres of transiting Earth-mass planets around low-mass stars will determine whether present-day life is indeed premature or typical from a cosmic perspective.

  5. Anticipating cognitive effort: roles of perceived error-likelihood and time demands.

    Science.gov (United States)

    Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F

    2017-11-13

Why are some actions evaluated as effortful? In the present set of experiments we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate a small amount of hard work (i.e., low time requirement, but high error-likelihood) or a large amount of easy work (i.e., high time requirement, but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of the two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall similar pattern of judgments to Experiments 1 through 3; however, both judgments of error-likelihood and time demands similarly predicted effort judgments. Results are discussed within the context of extant accounts of cognitive control, with considerations of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.

  6. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding has been proposed to provide significant capacity gains over traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection, whereas the unconditional maximum likelihood approach is developed by means of finite-state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  7. A theory of timing in scintillation counters based on maximum likelihood estimation

    International Nuclear Information System (INIS)

    Tomitani, Takehiro

    1982-01-01

    A theory of timing in scintillation counters based on the maximum likelihood estimation is presented. An optimum filter that minimizes the variance of timing is described. A simple formula to estimate the variance of timing is presented as a function of photoelectron number, scintillation decay constant and the single electron transit time spread in the photomultiplier. The present method was compared with the theory by E. Gatti and V. Svelto. The proposed method was applied to two simple models and rough estimations of potential time resolution of several scintillators are given. The proposed method is applicable to the timing in Cerenkov counters and semiconductor detectors as well. (author)

  8. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  9. Likelihood of being seen within emergency departments’ assigned urgency times for poisoned and injured individuals

    Directory of Open Access Journals (Sweden)

    Rachel L. Rosenthal

    2014-10-01

Full Text Available The objective of the present study is to determine the likelihood of injured or poisoned patients in special populations, such as patients who are elderly or self-injurious, being seen within an emergency department's triage-nurse-assigned urgency. Data from the National Hospital Ambulatory Medical Care Survey (2007) were utilized in this study. Multi-level models and multivariate linear regression models were used; patient age, sex, reported pain levels, wait time, and injury type were examined as potential predictors of being seen within the assigned urgency. From a random sample across all US emergency departments, 5616 patients nested in 312 hospital emergency departments were included in the study. Typically, approximately 1 in 5 emergency department patients were not seen within their triage-nurse-assigned urgencies. The typical patient in the average hospital had an 81% likelihood of being seen within their assigned urgency. Patients who were oldest [odds ratio (OR) = 0.0990] and had self-inflicted injuries (vs assault, OR = 1.246 and OR = 1.596) had the least likelihood of being seen within their assigned urgencies. As actual wait time increased for patients, they were less likely to be seen within their assigned urgencies. The most powerful predictors of the study's outcome were injury type and age, indicating that patients from special populations such as the elderly or those with injuries resulting from deliberate self-harm are less likely to actually be priority patients, independent of triage-nurse-assigned urgencies.

  10. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
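The core of ML time-of-arrival estimation from photon counts described above can be sketched for an ideal photodetector whose output is a Poisson point process. The Gaussian pulse shape, the signal/background rates (assumed known to the estimator), and the grid search are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative pulse model: Gaussian pulse of width sigma on a flat
# background, observed over a window [0, T] (time units arbitrary).
T, sigma, tau_true = 20.0, 1.0, 8.0
n_sig = rng.poisson(300)          # number of signal photons
n_bkg = rng.poisson(20)           # number of background photons
times = np.concatenate([
    rng.normal(tau_true, sigma, n_sig),   # signal photon arrival times
    rng.uniform(0.0, T, n_bkg),           # background arrivals
])

def log_likelihood(tau):
    # Poisson point-process log-likelihood, up to a tau-independent
    # constant: sum over photons of the log intensity at each arrival.
    # The model rates (20 background, 300 signal) are assumed known.
    rate = 20.0 / T + 300.0 / (sigma * np.sqrt(2 * np.pi)) * np.exp(
        -0.5 * ((times - tau) / sigma) ** 2)
    return np.log(rate).sum()

# ML estimate by a simple grid search over candidate arrival times.
grid = np.arange(0.0, T, 0.01)
tau_hat = grid[np.argmax([log_likelihood(t) for t in grid])]
```

At high photon counts the estimate concentrates near the true arrival time; the paper's threshold phenomenon appears when the signal rate is lowered until the likelihood peak is no better than a random guess.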

  11. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity, which illustrates that maximum likelihood estimation is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
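The standard way to compute the maximum likelihood fit of a two-component Gaussian mixture is the EM algorithm. The sketch below uses synthetic data rather than the rubber-price and exchange-rate series from the paper, and assumes Gaussian components:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from a two-component Gaussian mixture (illustrative).
x = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])

# EM algorithm: alternate responsibilities (E-step) and parameter
# updates (M-step) to climb the mixture likelihood.
w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
             / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of weights, means, std devs.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Each EM iteration is guaranteed not to decrease the likelihood, which is why it is the usual workhorse for mixture MLE.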

  12. Fast Maximum-Likelihood Decoder for Quasi-Orthogonal Space-Time Block Code

    Directory of Open Access Journals (Sweden)

    Adel Ahmadi

    2015-01-01

Full Text Available Motivated by the decompositions of sphere and QR-based methods, in this paper we present an extremely fast maximum-likelihood (ML) detection approach for quasi-orthogonal space-time block code (QOSTBC). The proposed algorithm, with a relatively simple design, exploits the structure of quadrature amplitude modulation (QAM) constellations to achieve its goal and can be extended to any arbitrary constellation. Our decoder utilizes a new decomposition technique for the ML metric which divides the metric into independent positive parts and a positive interference part. The search spaces of symbols are substantially reduced by employing the independent parts and the statistics of noise. Symbols within the search spaces are successively evaluated until the metric is minimized. Simulation results confirm that the proposed decoder's performance is superior to many of the recently published state-of-the-art solutions in terms of complexity. More specifically, it was possible to verify that applying the new algorithm with 1024-QAM decreases the computational complexity compared to state-of-the-art solutions with 16-QAM.

  13. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem, computational efficiency, using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^{α}, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time-domain techniques involving the construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
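The white-plus-power-law likelihood at the heart of this record can be sketched directly (without the paper's efficiency tricks) by building the power-law covariance from a fractional-differencing filter and evaluating the Gaussian log-likelihood. This is a generic textbook construction, not Langbein's optimized method; the series length and noise parameters are illustrative:

```python
import numpy as np

def powerlaw_cov(n, alpha):
    # Fractional-differencing coefficients: power-law (1/f^alpha) noise
    # is white noise passed through the filter h (Hosking-style recursion).
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    T = np.zeros((n, n))
    for k in range(n):
        T[k:, k] = h[: n - k]      # lower-triangular Toeplitz filter matrix
    return T @ T.T

def log_likelihood(resid, sig_w, sig_pl, alpha):
    # Gaussian log-likelihood with covariance = white + power-law parts.
    n = len(resid)
    C = sig_w**2 * np.eye(n) + sig_pl**2 * powerlaw_cov(n, alpha)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + resid @ np.linalg.solve(C, resid)
                   + n * np.log(2 * np.pi))

# Evaluate the likelihood of a short residual series (illustrative values).
rng = np.random.default_rng(0)
resid = rng.standard_normal(64)
ll = log_likelihood(resid, 1.0, 0.5, 1.0)
```

MLE then maximizes `log_likelihood` over the noise parameters; the O(n³) solve per evaluation is exactly the cost that the paper's filter-based reformulation reduces.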

  14. Higher educational attainment associated with reduced likelihood of abnormal cervical lesions among Zambian women - a cross sectional study.

    Science.gov (United States)

    Hamoonga, Twaambo Euphemia; Likwa, Rosemary Ndonyo; Musonda, Patrick; Michelo, Charles

    2017-10-13

    The high burden of cervical cancer in Zambia prompted the Ministry of Health and partners to develop the cervical cancer prevention program in Zambia (CCPPZ) in 2006. Despite this intervention more women continue to die from the disease and there is little understanding of factors that may be linked with abnormal cervical lesions in the general population. We therefore examined if educational attainment is associated with abnormal cervical lesions among Zambian women aged 15 to 49 years. This study used data from the cervical cancer prevention program in Zambia, where a total of 14,294 women aged 15 to 49 years were screened for cervical cancer at nine health facilities between October 2013 and September 2014. The data represents women from six provinces of Zambia, namely Southern, Central, Copperbelt, Luapula, North-western and Eastern provinces. Step-wise logistic regression analysis using the Statistical Package for the Social Sciences (SPSS) version 21 was used to estimate adjusted odds ratios (AOR) and 95% confidence intervals (CIs) for educational attainment with presence of abnormal cervical lesions as outcome. Multiple imputation was further used to obtain the imputed stabilized estimates for educational attainment. The prevalence of abnormal cervical lesions, using the Visual Inspection with Acetic-acid (VIA) test was 10.7% (n = 1523). Educational attainment was inversely associated with abnormal cervical lesions (AOR = 0.75; 95% CI:0.70-0.81, AOR = 0.74; 95% CI:0.68-0.81 and AOR = 0.46; 95% CI:0.41-0.51) among women with primary, secondary and tertiary education, respectively, compared to those with no formal education. We find reduced likelihood of abnormal cervical lesions in educated women, suggesting a differential imbalance with women who have no formal education. These findings may be a reflection of inequalities associated with access to cervical cancer screening, making the service inadequately accessible for lower educated groups. This

  15. FlowMax: A Computational Tool for Maximum Likelihood Deconvolution of CFSE Time Courses.

    Directory of Open Access Journals (Sweden)

    Maxim Nikolaievich Shokhirev

    Full Text Available The immune response is a concerted dynamic multi-cellular process. Upon infection, the dynamics of lymphocyte populations are an aggregate of molecular processes that determine the activation, division, and longevity of individual cells. The timing of these single-cell processes is remarkably widely distributed with some cells undergoing their third division while others undergo their first. High cell-to-cell variability and technical noise pose challenges for interpreting popular dye-dilution experiments objectively. It remains an unresolved challenge to avoid under- or over-interpretation of such data when phenotyping gene-targeted mouse models or patient samples. Here we develop and characterize a computational methodology to parameterize a cell population model in the context of noisy dye-dilution data. To enable objective interpretation of model fits, our method estimates fit sensitivity and redundancy by stochastically sampling the solution landscape, calculating parameter sensitivities, and clustering to determine the maximum-likelihood solution ranges. Our methodology accounts for both technical and biological variability by using a cell fluorescence model as an adaptor during population model fitting, resulting in improved fit accuracy without the need for ad hoc objective functions. We have incorporated our methodology into an integrated phenotyping tool, FlowMax, and used it to analyze B cells from two NFκB knockout mice with distinct phenotypes; we not only confirm previously published findings at a fraction of the expended effort and cost, but reveal a novel phenotype of nfkb1/p105/50 in limiting the proliferative capacity of B cells following B-cell receptor stimulation. In addition to complementing experimental work, FlowMax is suitable for high throughput analysis of dye dilution studies within clinical and pharmacological screens with objective and quantitative conclusions.

  16. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  17. Incorporating real-time traffic and weather data to explore road accident likelihood and severity in urban arterials.

    Science.gov (United States)

    Theofilatos, Athanasios

    2017-06-01

The effective treatment of road accidents and thus the enhancement of road safety is a major concern to societies due to the losses in human lives and the economic and social costs. The investigation of road accident likelihood and severity by utilizing real-time traffic and weather data has recently received significant attention from researchers. However, collected data mainly stem from freeways and expressways. Consequently, the aim of the present paper is to add to the current knowledge by investigating accident likelihood and severity using real-time traffic and weather data collected from urban arterials in Athens, Greece. Random Forests (RF) are first applied for preliminary analysis, to rank candidate variables according to their relative importance and provide a first insight into the potentially significant variables. Then, Bayesian logistic regression as well as finite mixture and mixed-effects logit models are applied to further explore factors associated with accident likelihood and severity, respectively. Regarding accident likelihood, the Bayesian logistic regression showed that variations in traffic significantly influence accident occurrence. On the other hand, the accident severity analysis revealed a generally mixed influence of traffic variations on accident severity, although the international literature states that traffic variations increase severity. Lastly, weather parameters were not found to have a direct influence on accident likelihood or severity. The study added to the current knowledge by incorporating real-time traffic and weather data from urban arterials to investigate accident occurrence and accident severity mechanisms. The identification of risk factors can lead to the development of effective traffic management strategies to reduce accident occurrence and the severity of injuries in urban arterials. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  18. Analysis of hourly crash likelihood using unbalanced panel data mixed logit model and real-time driving environmental big data.

    Science.gov (United States)

    Chen, Feng; Chen, Suren; Ma, Xiaoxiang

    2018-06-01

Driving environment, including road surface conditions and traffic states, often changes over time and influences crash probability considerably. Traditional crash frequency models developed at large temporal scales struggle to capture the time-varying characteristics of these factors, which may cause substantial loss of critical driving environmental information for crash prediction. Crash prediction models with refined temporal data (hourly records) are developed to characterize the time-varying nature of these contributing factors. Unbalanced panel data mixed logit models are developed to analyze the hourly crash likelihood of highway segments. The refined temporal driving environmental data, including road surface and traffic conditions, obtained from the Road Weather Information System (RWIS), are incorporated into the models. Model estimation results indicate that traffic speed, traffic volume, curvature and the chemically wet road surface indicator are better modeled as random parameters. The estimation results of the mixed logit models based on unbalanced panel data show that a number of factors are related to crash likelihood on I-25. Specifically, the weekend indicator, November indicator, low speed limit and long remaining service life of rutting indicator are found to increase crash likelihood, while the 5-am indicator and the number of merging ramps per lane per mile are found to decrease crash likelihood. The study underscores and confirms the unique and significant impacts on crashes imposed by real-time weather, road surface, and traffic conditions. With the unbalanced panel data structure, the rich information from real-time driving environmental big data can be well incorporated. Copyright © 2018 National Safety Council and Elsevier Ltd. All rights reserved.

  19. Induced abortion is not associated with a higher likelihood of depression in Curaçao women.

    Science.gov (United States)

    Boersma, Adriana A; van den Berg, Desirée; van Lunsen, Rik H W; Laan, Ellen T M

    2014-10-01

To investigate the risk of developing depression after induced abortion. A prospective cohort study conducted in Curaçao, involving 92 women having an induced abortion and 37 women delivering after an unplanned or unwanted pregnancy, who served as controls. All participants completed the Center for Epidemiologic Studies Depression (CES-D) scale before and two to three weeks after the abortion or delivery. Following the abortion, significantly fewer women were at risk of depression (30%) compared to when still pregnant (60%). Mean depression scores were significantly lower after than before the procedure. The likelihood of depression post-abortion (30%) was similar to that after delivery of an unplanned/unwanted child (22%). Even though women in the abortion group more often reported having suffered from depression in the past than controls, they were not at greater risk of depression after their pregnancy had ended. Curaçao women's risk of developing depression following an (early) induced abortion is not greater than that after carrying an unplanned/unwanted pregnancy to term. We recommend that the results of this study be taken into account should the Curaçao government consider legalisation of induced abortion in the near future.

  20. Empirical likelihood

    CERN Document Server

    Owen, Art B

    2001-01-01

Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
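For the simplest case mentioned in this record, a univariate mean under IID sampling, the empirical likelihood ratio statistic can be computed by profiling out a single Lagrange multiplier. A minimal sketch (the bisection scheme and tolerances are arbitrary choices, not from the book):

```python
import numpy as np

def el_logratio(x, mu):
    # Empirical likelihood ratio statistic -2*log R(mu) for a mean:
    # maximize prod(n*p_i) subject to sum(p_i*(x_i - mu)) = 0, which gives
    # p_i = 1 / (n * (1 + lam*(x_i - mu))) for a Lagrange multiplier lam.
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf            # mu outside the convex hull of the data
    lo = -1.0 / d.max() + 1e-10  # keep every 1 + lam*d_i strictly positive
    hi = -1.0 / d.min() - 1e-10
    for _ in range(200):         # bisection: the estimating function below
        lam = 0.5 * (lo + hi)    # is decreasing in lam
        if np.sum(d / (1.0 + lam * d)) > 0:
            lo = lam
        else:
            hi = lam
    return 2.0 * np.sum(np.log(1.0 + lam * d))

# At the sample mean the statistic is ~0; away from it, it grows and is
# asymptotically chi-squared with 1 degree of freedom.
x = np.random.default_rng(4).normal(0.0, 1.0, 200)
stat = el_logratio(x, x.mean())
```

A confidence region is then the set of mu values whose statistic falls below the chi-squared quantile, which is how the data-driven region shapes described above arise.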

  1. Approximate Likelihood

    CERN Multimedia

    CERN. Geneva

    2015-01-01

Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
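The classifier-to-likelihood-ratio idea underlying this talk can be sketched on a toy one-dimensional problem where the exact ratio is known. The Gaussian classes, the logistic model, and the training details below are illustrative assumptions, not the talk's setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Balanced training set: "signal" x ~ N(1,1) labeled 1, "background"
# x ~ N(0,1) labeled 0. For these densities the exact log likelihood
# ratio is log[p_s(x)/p_b(x)] = x - 0.5.
n = 5000
x = np.concatenate([rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by gradient descent; the score s(x)
# approximates P(signal | x), so s/(1-s) approximates the likelihood ratio.
w, b = 0.0, 0.0
for _ in range(3000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((s - y) * x)
    b -= 0.5 * np.mean(s - y)

# For this model the optimum is w = 1, b = -0.5, so log(s/(1-s)) = w*x + b
# recovers the exact log likelihood ratio.
```

The same recipe extends to high-dimensional feature vectors, and parameterizing the classifier by masses, couplings, or nuisance parameters yields the approximate likelihood ratio the abstract describes.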

  2. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    Directory of Open Access Journals (Sweden)

    Kodner Robin B

    2010-10-01

    Full Text Available Abstract Background Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It also can inform the user of the positional uncertainty for query sequences by calculating expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service.

  3. The Impact of Parental Divorce on Children's Educational Attainment, Marital Timing, and Likelihood of Divorce.

    Science.gov (United States)

    Keith, Verna M.; Finlay, Barbara

    1988-01-01

    Examined combined sample of national data to determine impact of parental divorce on children. Found parental divorce associated with lower educational attainment and earlier age at marriage for sons and daughters. Daughters of divorced parents had higher probability of being divorced. For sons of divorced parents, probability of ever marrying and…

  4. Higher dimensional time-energy entanglement

    International Nuclear Information System (INIS)

    Richart, Daniel Lampert

    2014-01-01

    Judging by the compelling number of innovations based on taming quantum mechanical effects, such as the development of transistors and lasers, further research in this field promises to tackle further technological challenges in the years to come. This statement gains even more importance in the information processing scenario. Here, the growing data generation and the correspondingly higher need for more efficient computational resources and secure high bandwidth networks are central problems which need to be tackled. In this sense, the required CPU miniaturization makes the design of structures at atomic levels inevitable, as foreseen by Moore's law. From these perspectives, it is necessary to concentrate further research efforts into controlling and manipulating quantum mechanical systems. This makes it possible, for example, to encode quantum superposition states to tackle problems which are computationally NP-hard and which therefore cannot be solved efficiently by classical computers. The only limitation affecting these solutions is the low scalability of existing quantum systems. Similarly, quantum communication schemes are devised to certify the secure transmission of quantum information, but are still limited by a low transmission bandwidth. This thesis follows the guideline defined by these research projects and aims to further increase the scalability of the quantum mechanical systems required to perform these tasks. The method used here is to encode quantum states into photons generated by spontaneous parametric down-conversion (SPDC). An intrinsic limitation of photons is that the scalability of quantum information schemes employing them is limited by the low detection efficiency of commercial single photon detectors. This is addressed by encoding higher dimensional quantum states into two photons, increasing the scalability of the scheme in comparison to multi-photon states. Further on, the encoding of quantum information into the emission-time degree of

  5. Higher dimensional time-energy entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Richart, Daniel Lampert

    2014-07-08

    Judging by the compelling number of innovations based on taming quantum mechanical effects, such as the development of transistors and lasers, further research in this field promises to tackle further technological challenges in the years to come. This statement gains even more importance in the information processing scenario. Here, the growing data generation and the correspondingly higher need for more efficient computational resources and secure high bandwidth networks are central problems which need to be tackled. In this sense, the required CPU miniaturization makes the design of structures at atomic levels inevitable, as foreseen by Moore's law. From these perspectives, it is necessary to concentrate further research efforts into controlling and manipulating quantum mechanical systems. This makes it possible, for example, to encode quantum superposition states to tackle problems which are computationally NP-hard and which therefore cannot be solved efficiently by classical computers. The only limitation affecting these solutions is the low scalability of existing quantum systems. Similarly, quantum communication schemes are devised to certify the secure transmission of quantum information, but are still limited by a low transmission bandwidth. This thesis follows the guideline defined by these research projects and aims to further increase the scalability of the quantum mechanical systems required to perform these tasks. The method used here is to encode quantum states into photons generated by spontaneous parametric down-conversion (SPDC). An intrinsic limitation of photons is that the scalability of quantum information schemes employing them is limited by the low detection efficiency of commercial single photon detectors. This is addressed by encoding higher dimensional quantum states into two photons, increasing the scalability of the scheme in comparison to multi-photon states. Further on, the encoding of quantum information into the emission-time degree of

  6. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

    Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computational savings can be obtained (albeit, of course, with some information loss), suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
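
The simple random sampling idea in this record can be sketched as follows: rather than summing a distance-based infection kernel over every infectious individual, sum it over a random sample and scale up by the inverse sampling fraction. The inverse-power kernel and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import math
import random

def full_pressure(sus, infectious, beta=2.0):
    """Exact infectious pressure on one susceptible individual: the sum of a
    (hypothetical) inverse-power distance kernel over all infectious ones."""
    sx, sy = sus
    return sum(math.hypot(sx - ix, sy - iy) ** (-beta) for ix, iy in infectious)

def sampled_pressure(sus, infectious, frac=0.2, beta=2.0, seed=0):
    """Approximate pressure from a simple random sample of the infectious
    individuals, scaled up by the inverse sampling fraction."""
    rng = random.Random(seed)
    k = max(1, round(frac * len(infectious)))
    sample = rng.sample(infectious, k)
    sx, sy = sus
    partial = sum(math.hypot(sx - ix, sy - iy) ** (-beta) for ix, iy in sample)
    return partial * len(infectious) / k

# toy example: a 10x10 grid of infectious individuals around one susceptible
infectious = [(float(i), float(j)) for i in range(1, 11) for j in range(1, 11)]
exact = full_pressure((0.0, 0.0), infectious)
approx = sampled_pressure((0.0, 0.0), infectious, frac=0.2)
```

With a sampling fraction of 1 the estimator recovers the exact sum; smaller fractions trade accuracy for the computational savings the abstract describes.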

  7. The dorsal medial frontal cortex is sensitive to time on task, not response conflict or error likelihood.

    Science.gov (United States)

    Grinband, Jack; Savitskaya, Judith; Wager, Tor D; Teichert, Tobias; Ferrera, Vincent P; Hirsch, Joy

    2011-07-15

    The dorsal medial frontal cortex (dMFC) is highly active during choice behavior. Though many models have been proposed to explain dMFC function, the conflict monitoring model is the most influential. It posits that dMFC is primarily involved in detecting interference between competing responses thus signaling the need for control. It accurately predicts increased neural activity and response time (RT) for incompatible (high-interference) vs. compatible (low-interference) decisions. However, it has been shown that neural activity can increase with time on task, even when no decisions are made. Thus, the greater dMFC activity on incompatible trials may stem from longer RTs rather than response conflict. This study shows that (1) the conflict monitoring model fails to predict the relationship between error likelihood and RT, and (2) the dMFC activity is not sensitive to congruency, error likelihood, or response conflict, but is monotonically related to time on task. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Effect of travel distance and time to radiotherapy on likelihood of receiving mastectomy.

    Science.gov (United States)

    Goyal, Sharad; Chandwani, Sheenu; Haffty, Bruce G; Demissie, Kitaw

    2015-04-01

    Breast-conserving surgery (BCS) followed by adjuvant radiation therapy (RT) is the standard of care for women with early-stage breast cancer as an alternative to mastectomy. The purpose of this study was to examine the relationship between receipt of mastectomy and travel distance and time to the RT facility in New Jersey (NJ). Data were collected from a cohort of 634 NJ women diagnosed with early-stage breast cancer. In patients receiving RT, the precise RT facility was used, whereas in patients not receiving RT, surgeons were contacted to determine the location of RT referral. Travel distance and time to the RT facility from the patients' residential address were modeled separately using multiple binomial regression to examine their association with choice of surgery while adjusting for clinical and sociodemographic factors. Overall, 58.5% of patients underwent BCS with median travel distance to the radiation facility of 4.8 miles (vs. 6.6 miles for mastectomy) and median travel time of 12.0 min (vs. 15.0 min for mastectomy). Patients residing > 9.2 miles compared with ≤ 9.2 miles from the radiation facility were 44% more likely to receive mastectomy. Additionally, patients requiring > 19 min compared with ≤ 19 min of travel time were 36% more likely to receive mastectomy. These data indicate that travel distance and time from the RT facility act as barriers to undergoing BCS in women with early-stage breast cancer. Despite being in an urban region, a significant number of women in NJ with early-stage breast cancer did not receive BCS.

  9. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional...... time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity both of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution...... of the estimator now depends on nuisance parameters derived both from the weak dependence and heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short......

  10. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.

    Directory of Open Access Journals (Sweden)

    Robert M Kaplan

    Full Text Available We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI)-funded trials has increased over time. We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in clinicaltrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether each study reported a positive, negative, or null result on the primary outcome variable and for total mortality. 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, in comparison to only 2 of the 25 (8%) trials published after 2000 (χ2 = 12.2, df = 1, p = 0.0005). There has been no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in clinicaltrials.gov was strongly associated with the trend toward null findings. The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by clinicaltrials.gov, may have contributed to the trend toward null findings.
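
The reported test statistic can be reproduced from the counts in the abstract: a 2×2 table of 17/30 positive trials before 2000 versus 2/25 after. The quoted χ2 = 12.2 matches the Yates-corrected statistic (the uncorrected value is about 14.3), suggesting the continuity correction was applied:

```python
def yates_chi_square(a, b, c, d):
    """Yates continuity-corrected chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# 17 of 30 pre-2000 trials positive (17 vs 13); 2 of 25 post-2000 (2 vs 23)
chi2 = yates_chi_square(17, 13, 2, 23)
print(round(chi2, 1))  # → 12.2
```

With 1 degree of freedom, a statistic of 12.2 corresponds to p ≈ 0.0005, matching the abstract.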

  11. Higher Education: A Time for Triage?

    Science.gov (United States)

    Lagowski, J. J.

    1995-10-01

    Higher education faces unprecedented challenges. The confluence of changing economic and demographic trends; new patterns of federal and state spending; more explicit expectations by students and their families for affordable, accessible education; and heightened scrutiny by those who claim a legitimate interest in higher education is inescapably altering the environment in which this system operates. Higher education will never again be as it was before. Further, many believe that tinkering around the margins is no longer an adequate response to the new demands. Fundamental change is deemed necessary to meet the challenge of this melange of pressures. A number of commentators have observed that political and corporate America have responded to their challenges by instituting a fundamental restructuring of those institutions. The medical community is also in the midst of a similar basic restructuring of the health care delivery system in this country. Now it is education's turn. People are questioning the historically expressed mission of higher education. They make the claim that we cost too much, spend carelessly, teach poorly, plan myopically, and when questioned, act defensively. Educational administrators, from department chairs up, are confronted with the task of simultaneously reforming and cutting back. They have no choice. They must engage in politically sophisticated priority setting and effect a hard-nosed reallocation of resources in a social environment where competing public needs have equivalent--or stronger--emotional pulls. Triage in a medical context involves confronting an emergency in which the demand for attention far outstrips available assistance by establishing a sequence of care in which one key individual orchestrates the application of harsh priorities which have been designed to maximize the number of survivors. In recent years, the decisions that have been made in some centers of higher education bear a striking similarity.
The literature

  12. The phylogenetic likelihood library.

    Science.gov (United States)

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter as well as branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. By example of two likelihood-based phylogenetic codes we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
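
The "numerical scaling for preventing floating point underflow" mentioned in this record can be illustrated in miniature. A product of many small per-site likelihoods underflows double precision, but periodically folding the running product into a log-space accumulator preserves the log-likelihood. This is a scalar sketch only; libraries such as the PLL and BEAGLE rescale per-node conditional likelihood vectors instead.

```python
import math

def log_likelihood_scaled(site_likes, thresh=1e-100):
    """Log of a product of tiny per-site likelihoods, with rescaling to
    avoid underflow: whenever the running product drops below `thresh`,
    fold its log into an accumulator and reset the product to 1."""
    prod, log_scale = 1.0, 0.0
    for p in site_likes:
        prod *= p
        if prod < thresh:
            log_scale += math.log(prod)
            prod = 1.0
    return math.log(prod) + log_scale

sites = [1e-30] * 20
naive = math.prod(sites)               # underflows to 0.0 in double precision
stable = log_likelihood_scaled(sites)  # ≈ 20 * ln(1e-30), no underflow
```

The naive product of twenty likelihoods of 1e-30 is 1e-600, far below the smallest representable double, whereas the scaled accumulator returns the correct log-likelihood of about −1381.6.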

  13. Women’s higher likelihood of disability pension: the role of health, family and work. A 5–7 years follow-up of the Hordaland Health Study

    Science.gov (United States)

    2012-01-01

    Background: Women’s higher risk of disability pension compared with men is found in countries with high female work participation and universal welfare schemes. The aim of the study was to examine the extent to which self-perceived health, family situation and work factors explain women’s higher risk of disability pension. We also explored how these factors influenced the gender difference across educational strata. Methods: The population-based Hordaland Health Study (HUSK) was conducted in 1997–99 and included inhabitants born in 1953–57 in Hordaland County, Norway. The current study included 5,959 men and 6,306 women in paid work with valid information on education and self-perceived health. Follow-up data on disability pension, for a period of 5–7 years, was obtained by linking the health survey to a national registry of disability pension. Cox regression analyses were employed. Results: During the follow-up period 99 (1.7%) men and 230 (3.6%) women were awarded disability pension, giving a twofold risk of disability pension for women compared with men. Except for a moderate impact of self-perceived health, adjustment for family situation and work factors did not influence the gender difference in risk. Repeating the analyses in strata of education, the gender difference in risk of disability pension among the highly educated was fully explained by self-perceived health and work factors. In the lower strata of education there remained a substantial unexplained gender difference in risk. Conclusions: In a Norwegian cohort of middle-aged men and women, self-perceived health, family situation and work factors could not explain women’s higher likelihood of disability pension. However, analyses stratified by educational level indicate that mechanisms behind the gender gap in disability pension differ by educational levels. Recognizing the heterogeneity within gender may contribute to a deeper understanding of women’s higher risk of disability pension.

  14. Women's higher likelihood of disability pension: the role of health, family and work. A 5-7 years follow-up of the Hordaland Health Study.

    Science.gov (United States)

    Haukenes, Inger; Gjesdal, Sturla; Rortveit, Guri; Riise, Trond; Maeland, John Gunnar

    2012-08-31

    Women's higher risk of disability pension compared with men is found in countries with high female work participation and universal welfare schemes. The aim of the study was to examine the extent to which self-perceived health, family situation and work factors explain women's higher risk of disability pension. We also explored how these factors influenced the gender difference across educational strata. The population-based Hordaland Health Study (HUSK) was conducted in 1997-99 and included inhabitants born in 1953-57 in Hordaland County, Norway. The current study included 5,959 men and 6,306 women in paid work with valid information on education and self-perceived health. Follow-up data on disability pension, for a period of 5-7 years, was obtained by linking the health survey to a national registry of disability pension. Cox regression analyses were employed. During the follow-up period 99 (1.7%) men and 230 (3.6%) women were awarded disability pension, giving a twofold risk of disability pension for women compared with men. Except for a moderate impact of self-perceived health, adjustment for family situation and work factors did not influence the gender difference in risk. Repeating the analyses in strata of education, the gender difference in risk of disability pension among the highly educated was fully explained by self-perceived health and work factors. In the lower strata of education there remained a substantial unexplained gender difference in risk. In a Norwegian cohort of middle-aged men and women, self-perceived health, family situation and work factors could not explain women's higher likelihood of disability pension. However, analyses stratified by educational level indicate that mechanisms behind the gender gap in disability pension differ by educational levels. Recognizing the heterogeneity within gender may contribute to a deeper understanding of women's higher risk of disability pension.
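
The "twofold risk" quoted in this record can be checked directly from the raw counts as a crude, unadjusted risk ratio (the study itself reports hazard ratios from Cox regression, which this simple ratio only approximates):

```python
def risk_ratio(events_a, n_a, events_b, n_b):
    """Crude risk ratio: cumulative incidence in group A over group B."""
    return (events_a / n_a) / (events_b / n_b)

# 230 of 6,306 women vs. 99 of 5,959 men awarded disability pension
rr = risk_ratio(230, 6306, 99, 5959)
print(round(rr, 1))  # → 2.2
```

The crude ratio of about 2.2 is consistent with the twofold risk the abstract reports from the adjusted survival analysis.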

  15. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  16. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
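
The simplest special case of the ML time delay estimation discussed in these two records: when the observed trial is a shifted copy of a known template plus white Gaussian noise, the ML delay estimate reduces to the lag that maximizes the cross-correlation. The paper's joint ML schemes additionally estimate the unknown ERP template across trials; this sketch assumes the template is known.

```python
def estimate_delay(reference, observed, max_lag):
    """Lag (in samples) maximizing the cross-correlation; this equals the
    ML delay estimate when `observed` is `reference` shifted plus white noise."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(reference[i] * observed[i + lag]
                    for i in range(len(reference))
                    if 0 <= i + lag < len(observed))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# hypothetical ERP template and a trial containing it delayed by 3 samples
erp = [0.0, 1.0, 2.5, -1.0, 0.5, 3.0, -2.0, 1.5]
trial = [0.0] * 3 + erp
lag = estimate_delay(erp, trial, max_lag=5)
print(lag)  # → 3
```

Once the per-trial delays are estimated, the trials can be realigned before averaging, which is what protects the averaged ERP from the smearing the abstract describes.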

  17. A likelihood-based time series modeling approach for application in dendrochronology to examine the growth-climate relations and forest disturbance history

    Science.gov (United States)

    A time series intervention analysis (TSIA) of dendrochronological data to infer the tree growth-climate-disturbance relations and forest disturbance history is described. Maximum likelihood is used to estimate the parameters of a structural time series model with components for ...

  18. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    Science.gov (United States)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
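
The ">99 percent" figure quoted above follows directly from the Poisson model: with an expected rate of about 6.5 VEI>=4 eruptions per decade, the probability of at least one event is 1 - e^(-λ):

```python
import math

def prob_at_least_one(rate):
    """P(N >= 1) for a Poisson-distributed count with mean `rate`."""
    return 1.0 - math.exp(-rate)

# ~6.5 cataclysmic (VEI >= 4) eruptions expected in the decade 2000-2009
p = prob_at_least_one(6.5)
print(p > 0.99)  # → True
```

The same formula with smaller decadal rates (assumed here, back-derived from the abstract's percentages) would reproduce the ~49 percent figure for VEI>=5 and ~18 percent for VEI>=6.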

  19. An omnibus likelihood test statistic and its factorization for change detection in time series of polarimetric SAR data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Skriver, Henning

    2016-01-01

    Based on an omnibus likelihood ratio test statistic for the equality of several variance-covariance matrices following the complex Wishart distribution with an associated p-value and a factorization of this test statistic, change analysis in a short sequence of multilook, polarimetric SAR data...... in the covariance matrix representation is carried out. The omnibus test statistic and its factorization detect if and when change(s) occur. The technique is demonstrated on airborne EMISAR L-band data but may be applied to Sentinel-1, Cosmo-SkyMed, TerraSAR-X, ALOS and RadarSat-2 or other dual- and quad...

  20. Charged fluid distribution in higher dimensional spheroidal space-time

    Indian Academy of Sciences (India)

    A general solution of Einstein field equations corresponding to a charged fluid distribution on the background of higher dimensional spheroidal space-time is obtained. The solution generates several known solutions for superdense star having spheroidal space-time geometry.

  1. Just in Time Research: Data Breaches in Higher Education

    Science.gov (United States)

    Grama, Joanna

    2014-01-01

    This "Just in Time" research is in response to recent discussions on the EDUCAUSE Higher Education Information Security Council (HEISC) discussion list about data breaches in higher education. Using data from the Privacy Rights Clearinghouse, this research analyzes data breaches attributed to higher education. The results from this…

  2. Part-Time Higher Education: Employer Engagement under Threat?

    Science.gov (United States)

    Mason, Geoff

    2014-01-01

    Employer support for employees who are studying part-time for higher education qualifications constitutes a form of indirect employer engagement with higher education institutions that has contributed strongly to the development of work-related skills and knowledge over the years. However, this form of employer engagement with higher education…

  3. Time-Discrete Higher-Order ALE Formulations: Stability

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    on the stability of the PDE but may influence that of a discrete scheme. We examine this critical issue for higher-order time stepping without space discretization. We propose time-discrete discontinuous Galerkin (dG) numerical schemes of any order for a time

  4. Ecuador's higher education system in times of change

    OpenAIRE

    Van Hoof, Hubert B.; Estrella, Mateo; Eljuri, María Isabel; Torres León, Leonardo

    2013-01-01

    Ecuador’s higher education system is undergoing dramatic changes. The National Constitution of 2008 and the Higher Education Law of 2010 have changed the way Ecuador’s universities are funded, administered, and accredited. The importance of research was elevated and drastic changes were made to the academic qualifications and employment conditions of full-time faculty. This article describes the attempt to raise the level of Ecuador’s system of higher education and its impact on faculty and a...

  5. Ecuador's Higher Education System in Times of Change

    OpenAIRE

    Van Hoof, Hubert

    2013-01-01

    Ecuador’s higher education system is undergoing dramatic changes. The National Constitution of 2008 and the Higher Education Law of 2010 have changed the way Ecuador’s universities are funded, administered, and accredited. The importance of research was elevated and drastic changes were made to the academic qualifications and employment conditions of full-time faculty. This article describes the attempt to raise the level of Ecuador’s system of higher education and its impact on faculty and a...

  6. The Times Higher Education Ranking Product: Visualising Excellence through Media

    Science.gov (United States)

    Stack, Michelle L.

    2013-01-01

    This paper will examine the Times Higher Education's (THE) World University Rankings as a corporate media product. A number of empirical studies have critiqued the methodology of the THE, yet individuals, Higher Education Institutions (HEIs) and governments continue to use them for decision-making. This paper analyses the influence of…

  7. Adopting Consumer Time: Potential Issues for Higher Education

    Science.gov (United States)

    Gibbs, Paul

    2009-01-01

    Time and temporality have received little attention in the consumerism, marketing or, until recently, higher education literature. This paper attempts to compare the notions of timing implicit in education as "paideia" (transitional personal growth) with that implicit in consumerism and the marketing practices which foster it. This…

  8. Evaluation of penalized likelihood estimation reconstruction on a digital time-of-flight PET/CT scanner for 18F-FDG whole-body examinations.

    Science.gov (United States)

    Lindström, Elin; Sundin, Anders; Trampal, Carlos; Lindsjö, Lars; Ilan, Ezgi; Danfors, Torsten; Antoni, Gunnar; Sörensen, Jens; Lubberink, Mark

    2018-02-15

    Resolution and quantitative accuracy of positron emission tomography (PET) are highly influenced by the reconstruction method. Penalized likelihood estimation algorithms allow for fully convergent iterative reconstruction, generating a higher image contrast while limiting noise compared to ordered subsets expectation maximization (OSEM). In this study, block-sequential regularized expectation maximization (BSREM) was compared to time-of-flight OSEM (TOF-OSEM). Various strengths of the noise penalization factor β were tested along with scan durations and transaxial fields of view (FOVs) with the aim of evaluating the performance and clinical use of BSREM for 18F-FDG PET/CT, both in quantitative terms and in a qualitative visual evaluation. Methods: Eleven clinical whole-body 18F-FDG PET/CT examinations acquired on a digital TOF PET/CT scanner were included. The data were reconstructed using BSREM with point spread function (PSF) recovery and β 133, 267, 400 and 533, and TOF-OSEM with PSF, for various acquisition times/bed position (bp) and FOVs. Noise, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and standardized uptake values (SUVs) were analysed. A blinded visual image quality evaluation, rating several aspects, performed by two nuclear medicine physicians complemented the analysis. Results: The lowest levels of noise were reached with the highest β resulting in the highest SNR, which in turn resulted in the lowest SBR. Noise equivalence to TOF-OSEM was found with β 400 but produced a significant increase of SUVmax (11%), SNR (22%) and SBR (12%) compared to TOF-OSEM. BSREM with β 533 at decreased acquisition (2 min/bp) was comparable to TOF-OSEM at full acquisition duration (3 min/bp). Reconstructed FOV had an impact on BSREM outcome measures: SNR increased while SBR decreased when shifting FOV from 70 to 50 cm. The visual image quality evaluation resulted in similar scores for reconstructions although β 400 obtained the

  9. Does switching contraceptive from oral to a patch or vaginal ring change the likelihood of timely prescription refill?

    Science.gov (United States)

    Law, Amy; Lee, Yi-Chien; Gorritz, Magdaliz; Plouffe, Leo

    2014-08-01

    This study evaluated contraceptive refill patterns of commercially insured women in the US who switched from oral contraceptives (OCs) to the patch or vaginal ring, and assessed whether switching contraceptive methods changes refill patterns. Women aged 15-44 with ≥2 patch or ring prescriptions and ≥2 OC prescriptions before the first patch/ring prescription were identified from the MarketScan® Commercial database (1/1/2002-6/30/2011). Refill patterns 1 year pre- and post-index date (first patch/ring prescription) were evaluated, and women were categorized as timely or delayed refillers on OCs and patch/ring. Regression modeling was used to investigate the association between refill patterns and contraceptive methods, and the effects of switching on refill patterns. Of 17,814 women identified, 7901 switched to the patch and 9913 switched to the ring. Among timely OC refillers, the percentage of timely refills decreased (patch: 95.6% to 79.4%, pcontraceptive efficacy by simply switching to the patch or ring. The impact on timely refills of switching from OCs to either the patch or ring is complex and varies depending on the pattern of timely refills on OCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Naked singularities in higher dimensional Vaidya space-times

    International Nuclear Information System (INIS)

    Ghosh, S. G.; Dadhich, Naresh

    2001-01-01

    We investigate the end state of the gravitational collapse of a null fluid in higher-dimensional space-times. Both naked singularities and black holes are shown to develop as the final outcome of the collapse. The naked singularity spectrum of the collapsing Vaidya region (4D) gets covered with the increase in dimensions, and hence higher dimensions favor a black hole over a naked singularity. The cosmic censorship conjecture would be fully respected for a space of infinite dimension.

  11. Extended likelihood inference in reliability

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.

    1978-10-01

    Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist.
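    The binomial prediction problem described above (the number of nonsurvivors in a future sample under a beta prior) has a closed-form beta-binomial predictive distribution. The sketch below shows that predictive pmf, not the paper's extended-likelihood construction itself, and the prior parameters are purely illustrative:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log of the Beta function via log-gamma, for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pred(m, a, b):
    """Predictive pmf for the number of nonsurvivors k in m future trials,
    with a Beta(a, b) prior on the probability of nonsurvival:
    P(k) = C(m, k) * B(a + k, b + m - k) / B(a, b)."""
    return [comb(m, k) * exp(log_beta(a + k, b + m - k) - log_beta(a, b))
            for k in range(m + 1)]

# Hypothetical Beta(2, 18) prior (prior mean nonsurvival probability 0.1),
# predicting the count of nonsurvivors among 10 future units.
pmf = beta_binomial_pred(10, 2.0, 18.0)
```

Prediction intervals can then be read off the cumulative sums of `pmf`.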

  12. Length of time in Ghana is associated with the likelihood of exclusive breastfeeding among Liberian refugees living in Buduburam.

    Science.gov (United States)

    Woldeghebriel, Meley; Hromi-Fiedler, Amber; Lartey, Anna; Gallego-Perez, Daniel; Sandow, Adam; Pérez-Escamilla, Rafael

    2017-07-01

    While literature describing immigrants' breastfeeding practices exists, especially for those living in developed countries, there is a significant gap in knowledge on how the host culture may influence the exclusive breastfeeding (EBF) behaviors of refugees, especially those living in protracted situations within sub-Saharan Africa. A cross-sectional study was conducted in the Buduburam Refugee Settlement in Ghana from July-August 2008 to explore the association between the amount of time living in Ghana and exclusive breastfeeding practices among Liberian refugees and Ghanaians in surrounding villages. The study included 480 women: 239 Liberians living in 12 settlement zones (in two of which Liberians and Ghanaians co-exist), 121 Ghanaians living in two settlement zones, and 120 Ghanaians living in the nearby urban village of Awutu. Liberian mothers who had lived in Ghana for at least eight years were significantly more likely to exclusively breastfeed (OR: 1.78, 95% CI: 1.02, 3.09) compared to Ghanaian mothers living in Awutu. These findings suggest that increased time living in Buduburam improved the chances of EBF success among Liberians, perhaps as a result of unique EBF education/support opportunities offered in the settlement to Liberian refugees that were not readily available to Ghanaians. Further research to understand the "mechanisms" explaining exclusive breastfeeding differences as a function of time spent in the host country is needed to improve breastfeeding support in refugee settlements and host communities. © 2016 John Wiley & Sons Ltd.

  13. Programming real-time executives in higher order language

    Science.gov (United States)

    Foudriat, E. C.

    1982-01-01

    Methods by which real-time executive programs can be implemented in a higher order language are discussed, using HAL/S and Path Pascal languages as program examples. Techniques are presented by which noncyclic tasks can readily be incorporated into the executive system. Situations are shown where the executive system can fail to meet its task scheduling and yet be able to recover either by rephasing the clock or stacking the information for later processing. The concept of deadline processing is shown to enable more effective mixing of time and information synchronized systems.

  14. Likelihood of treatment in a coronary care unit for a first-time myocardial infarction in relation to sex, country of birth and socioeconomic position in Sweden.

    Science.gov (United States)

    Yang, Dong; James, Stefan; de Faire, Ulf; Alfredsson, Lars; Jernberg, Tomas; Moradi, Tahereh

    2013-01-01

    To examine the relationship between sex, country of birth, and level of education as an indicator of socioeconomic position, and the likelihood of treatment in a coronary care unit (CCU) for a first-time myocardial infarction. Nationwide register-based study. Sweden. 199,906 patients (114,387 men and 85,519 women) of all ages who were admitted to hospital for a first-time myocardial infarction between 2001 and 2009. Admission to a coronary care unit due to myocardial infarction. Despite the observed increase in access to coronary care units over time, the proportion of women treated in a CCU was 13% lower than for men. As compared with men, the multivariable-adjusted odds ratio among women was 0.80 (95% confidence interval 0.77 to 0.82). This lower proportion of women treated in a CCU varied by age, year of diagnosis, and country of birth. Overall, there was no evidence of a difference in the likelihood of treatment in a CCU between Sweden-born and foreign-born patients. As compared with patients with high education, the adjusted odds ratio among patients with a low level of education was 0.93 (95% confidence interval 0.89 to 0.96). Foreign-born and Sweden-born first-time myocardial infarction patients had an equal likelihood of being treated in a coronary care unit in Sweden; this is in contrast to the situation in many other countries with large immigrant populations. However, the apparently lower rate of CCU admission after first-time myocardial infarction among women and patients with a low socioeconomic position warrants further investigation.

  15. Time-Discrete Higher-Order ALE Formulations: Stability

    KAUST Repository

    Bonito, Andrea

    2013-01-01

    Arbitrary Lagrangian Eulerian (ALE) formulations deal with PDEs on deformable domains upon extending the domain velocity from the boundary into the bulk with the purpose of keeping mesh regularity. This arbitrary extension has no effect on the stability of the PDE but may influence that of a discrete scheme. We examine this critical issue for higher-order time stepping without space discretization. We propose time-discrete discontinuous Galerkin (dG) numerical schemes of any order for a time-dependent advection-diffusion model problem in moving domains, and study their stability properties. The analysis hinges on the validity of the Reynold's identity for dG. Exploiting the variational structure and assuming exact integration, we prove that our conservative and nonconservative dG schemes are equivalent and unconditionally stable. The same results remain true for piecewise polynomial ALE maps of any degree and suitable quadrature that guarantees the validity of the Reynold's identity. This approach generalizes the so-called geometric conservation law to higher-order methods. We also prove that simpler Runge-Kutta-Radau methods of any order are conditionally stable, that is, subject to a mild ALE constraint on the time steps. Numerical experiments corroborate and complement our theoretical results. © 2013 Society for Industrial and Applied Mathematics.

  16. Earthquake likelihood model testing

    Science.gov (United States)

    Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.

    2007-01-01

    The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
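    The likelihood-based consistency test described above scores a binned rate forecast against an observed catalog through the joint Poisson log-likelihood over bins. A minimal sketch with hypothetical rates and counts (the bin layout and numbers are illustrative, not from RELM):

```python
import numpy as np
from math import lgamma

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed bin counts n_i given
    forecast rates lambda_i, as used in RELM-style likelihood tests:
    log L = sum_i [ -lambda_i + n_i * log(lambda_i) - log(n_i!) ]."""
    lam = np.asarray(forecast_rates, dtype=float)
    n = np.asarray(observed_counts, dtype=float)
    log_fact = np.array([lgamma(k + 1.0) for k in n])  # log(n_i!)
    return float(np.sum(-lam + n * np.log(lam) - log_fact))

# Two hypothetical forecasts over the same four space-magnitude bins:
rates_a = np.array([0.1, 0.5, 2.0, 0.05])   # concentrates rate where events occur
rates_b = np.array([0.4, 0.4, 0.4, 0.4])    # spatially uniform forecast
observed = np.array([0, 1, 2, 0])
ll_a = poisson_log_likelihood(rates_a, observed)
ll_b = poisson_log_likelihood(rates_b, observed)
```

The forecast with the higher joint log-likelihood (`rates_a` here) is the one more consistent with the observed catalog; the pairwise R-test compares such values between models.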

  17. Term-time Employment and Student Attainment in Higher Education

    Directory of Open Access Journals (Sweden)

    Cath Dennis

    2018-04-01

    The number of UK full-time university students engaging in term-time employment (TTE) is rising. Students engaging in TTE have previously been found to achieve less well academically than those who do not. This study aimed to explore patterns of TTE and academic achievement of undergraduates at a large UK higher education institution. Self-reported TTE hours were matched to attainment data for 1304 undergraduate students in levels 1-4 of study (SCQF levels 7-10). The majority of students in TTE (71%, n=621) reported undertaking TTE to cover essential living expenses. Compared to students not undertaking TTE, attainment was significantly better at low levels of TTE (1-10 hours), and only significantly worse when TTE exceeded 30 hours/week. This pattern was magnified when job type was taken into account: students employed in skilled roles for ≤10 hours/week on average attained grades 7% higher than those not in TTE; students working >10 hours/week in unskilled positions showed a mean 1.6% lower grade. The impact of ‘academic potential’ (measured via incoming UCAS tariff) was accounted for in the model. The finding that students engaging in some categories of TTE achieve better academic outcomes than their non-employed peers is worthy of further investigation. This study is unable to provide direct evidence of possible causation, but would tentatively suggest that students may benefit from taking on 10 or fewer hours of TTE per week.

  18. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    Science.gov (United States)

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
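    A CPU-only sketch of the core synchronization-likelihood idea can clarify what the GPU implementation accelerates. This simplified version omits the two-dimensional time-shifting and the Theiler-window refinements of the full algorithm; the embedding parameters are illustrative:

```python
import numpy as np

def sync_likelihood(x, y, m=3, lag=1, pref=0.05):
    """Simplified synchronization likelihood between two signals.

    Sketch only: both signals are delay-embedded (dimension m, lag),
    each channel's "recurrences" are its pref-fraction nearest embedded
    vectors, and SL is estimated as the probability that a recurrence
    in x coincides with a recurrence in y.
    """
    def embed(s):
        n = len(s) - (m - 1) * lag
        return np.array([s[i:i + (m - 1) * lag + 1:lag] for i in range(n)])

    ex, ey = embed(np.asarray(x, float)), embed(np.asarray(y, float))
    n = len(ex)
    dx = np.linalg.norm(ex[:, None] - ex[None, :], axis=-1)
    dy = np.linalg.norm(ey[:, None] - ey[None, :], axis=-1)
    np.fill_diagonal(dx, np.inf)   # exclude self-matches
    np.fill_diagonal(dy, np.inf)
    k = max(1, int(pref * n))
    # Boolean recurrence matrices: the k nearest neighbours per row.
    rx = dx <= np.sort(dx, axis=1)[:, k - 1:k]
    ry = dy <= np.sort(dy, axis=1)[:, k - 1:k]
    return (rx & ry).sum() / rx.sum()

t = np.linspace(0, 20 * np.pi, 600)
a = np.sin(t)
coupled = sync_likelihood(a, np.sin(t + 0.3))          # same dynamics, shifted
rng = np.random.default_rng(1)
uncoupled = sync_likelihood(a, rng.normal(size=t.size))  # unrelated noise
```

Coupled signals share recurrence structure and yield SL near 1, while unrelated signals yield SL near the reference fraction `pref`; the article's 2D time-shifting variant repeats this computation over a grid of delay and onset times.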

  19. Measuring Stratigraphic Congruence Across Trees, Higher Taxa, and Time.

    Science.gov (United States)

    O'Connor, Anne; Wills, Matthew A

    2016-09-01

    The congruence between the order of cladistic branching and the first appearance dates of fossil lineages can be quantified using a variety of indices. Good matching is a prerequisite for the accurate time calibration of trees, while the distribution of congruence indices across large samples of cladograms has underpinned claims about temporal and taxonomic patterns of completeness in the fossil record. The most widely used stratigraphic congruence indices are the stratigraphic consistency index (SCI), the modified Manhattan stratigraphic measure (MSM*), and the gap excess ratio (GER), plus its derivatives, the topological GER and the modified GER. Many factors are believed to variously bias these indices, with several empirical and simulation studies addressing some subset of the putative interactions. This study combines both approaches to quantify the effects (on all five indices) of eight variables reasoned to constrain the distribution of possible values (the number of taxa, tree balance, tree resolution, range of first occurrence (FO) dates, center of gravity of FO dates, the variability of FO dates, percentage of extant taxa, and percentage of taxa with no fossil record). Our empirical data set comprised 647 published animal and plant cladograms spanning the entire Phanerozoic, and for these data we also modeled the effects of mean age of FOs (as a proxy for clade age), the taxonomic rank of the clade, and the higher taxonomic group to which it belonged. The center of gravity of FO dates had not been investigated hitherto, and this was found to correlate most strongly with some measures of stratigraphic congruence in our empirical study (top-heavy clades had better congruence). The modified GER was the index least susceptible to bias. We found significant differences across higher taxa for all indices; arthropods had lower congruence and tetrapods higher congruence. Stratigraphic congruence, however measured, also varied throughout the Phanerozoic, reflecting
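    Of the indices listed, the gap excess ratio has the simplest closed form: it rescales the minimum implied gap (the total ghost range the cladogram forces) between its best and worst possible values over all tree topologies. A sketch with hypothetical gap totals, in millions of years:

```python
def gap_excess_ratio(mig, g_min, g_max):
    """Gap excess ratio (GER).

    mig   -- minimum implied gap: total ghost range implied by the cladogram
    g_min -- smallest possible total gap over all topologies for these taxa
    g_max -- largest possible total gap over all topologies
    GER = 1 indicates perfect stratigraphic congruence, 0 the worst possible.
    """
    return 1.0 - (mig - g_min) / (g_max - g_min)

# Hypothetical clade: the tree implies 18 Myr of ghost range, while the
# best and worst conceivable topologies imply 10 and 30 Myr respectively.
ger = gap_excess_ratio(18.0, 10.0, 30.0)
```

Computing `mig`, `g_min`, and `g_max` requires the tree and the FO dates; this function only performs the final rescaling step shared by the GER family.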

  20. A stable higher order space time Galerkin marching-on-in-time scheme

    KAUST Repository

    Pray, Andrew J.; Shanker, Balasubramaniam; Bagci, Hakan

    2013-01-01

    We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order

  1. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam

    2014-01-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  2. Logic of likelihood

    International Nuclear Information System (INIS)

    Wall, M.J.W.

    1992-01-01

    The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory which predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset, with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs.

  3. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    Science.gov (United States)

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
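    The raw-data maximum likelihood idea, writing down the exact Gaussian likelihood of one long series and maximizing it directly, can be illustrated for the simplest ARMA member, an AR(1), without any SEM machinery. This is a sketch of the general approach under stated assumptions, not the authors' SEM formulation:

```python
import numpy as np

def ar1_loglik(y, phi, sigma2):
    """Exact Gaussian log-likelihood of an AR(1) process
    y_t = phi * y_{t-1} + e_t,  e_t ~ N(0, sigma2),  |phi| < 1,
    using the stationary distribution for the first observation."""
    y = np.asarray(y, float)
    ll = -0.5 * (np.log(2 * np.pi * sigma2 / (1 - phi**2))
                 + y[0]**2 * (1 - phi**2) / sigma2)
    resid = y[1:] - phi * y[:-1]
    ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2))
    return ll

# One long series from a single case (T >> N: here N = 1, T = 2000).
rng = np.random.default_rng(2)
T, true_phi = 2000, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = true_phi * y[t - 1] + rng.normal()

# Crude ML by grid search over phi, with sigma2 profiled out as the
# residual variance at each candidate phi.
grid = np.linspace(-0.95, 0.95, 191)
scored = [(ar1_loglik(y, p, np.var(y[1:] - p * y[:-1])), p) for p in grid]
phi_hat = max(scored)[1]
```

For higher-order ARMA models the likelihood is usually evaluated by a Kalman filter rather than in closed form, but the estimation principle is the same.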

  4. The composition of engineered cartilage at the time of implantation determines the likelihood of regenerating tissue with a normal collagen architecture.

    Science.gov (United States)

    Nagel, Thomas; Kelly, Daniel J

    2013-04-01

    The biomechanical functionality of articular cartilage is derived from both its biochemical composition and the architecture of the collagen network. Failure to replicate this normal Benninghoff architecture in regenerating articular cartilage may in turn predispose the tissue to failure. In this article, the influence of the maturity (or functionality) of a tissue-engineered construct at the time of implantation into a tibial chondral defect on the likelihood of recapitulating a normal Benninghoff architecture was investigated using a computational model featuring a collagen remodeling algorithm. Such a normal tissue architecture was predicted to form in the intact tibial plateau due to the interplay between the depth-dependent extracellular matrix properties, foremost swelling pressures, and external mechanical loading. In the presence of even small empty defects in the articular surface, the collagen architecture in the surrounding cartilage was predicted to deviate significantly from the native state, indicating a possible predisposition for osteoarthritic changes. These negative alterations were alleviated by the implantation of tissue-engineered cartilage, where a mature implant was predicted to result in the formation of a more native-like collagen architecture than immature implants. The results of this study highlight the importance of cartilage graft functionality to maintain and/or re-establish joint function and suggest that engineering a tissue with a native depth-dependent composition may facilitate the establishment of a normal Benninghoff collagen architecture after implantation into load-bearing defects.

  5. A stable higher order space time Galerkin marching-on-in-time scheme

    KAUST Repository

    Pray, Andrew J.

    2013-07-01

    We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.

  6. Timing of Enhanced Post-Deployment Screening: Exploration of Participants' Preferences and of the Associations among Timing, the Prevalence of Health Problems, and the Likelihood of Referral

    National Research Council Canada - National Science Library

    Zamorski, Mark A

    2006-01-01

    .... The optimal timing of such screening is uncertain: If done immediately upon return, few members endorse health concerns, perhaps because of a "honeymoon" effect in which homecoming is seen as the solution to any and...

  7. Ecuador's Higher Education System in Times of Change

    Science.gov (United States)

    Van Hoof, Hubert B.; Estrella, Mateo; Eljuri, Marie-Isabel; León, Leonardo Torres

    2013-01-01

    Ecuador's higher education system is undergoing dramatic changes. The National Constitution of 2008 and the Higher Education Law of 2010 have changed the way Ecuador's universities are funded, administered, and accredited. The importance of research was elevated and drastic changes were made to the academic qualifications and employment conditions…

  8. Time Management and Academic Achievement of Higher Secondary Students

    Science.gov (United States)

    Cyril, A. Vences

    2015-01-01

    The only thing which cannot be changed by man is time. One cannot get back time that is lost or gone, and nothing can be substituted for time. Time management is actually self-management: the skills that people need to manage others are the same skills that are required to manage themselves. The purpose of the present study was to explore the relation between…

  9. The Laplace Likelihood Ratio Test for Heteroscedasticity

    Directory of Open Access Journals (Sweden)

    J. Martin van Zyl

    2011-01-01

    It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test assuming normality is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data are partitioned into groups.
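    A sketch of the two-group version of such a test: each group's Laplace likelihood is maximized at its median, with scale equal to the mean absolute deviation, and the statistic compares separate scales against a common one. The chi-square reference with one degree of freedom is the standard asymptotic result, not a detail taken from the paper:

```python
import numpy as np

def laplace_loglik(x):
    """Maximized Laplace log-likelihood: location = sample median,
    scale b = mean absolute deviation from the median, giving
    log L = -n * (log(2b) + 1)."""
    x = np.asarray(x, float)
    b = np.mean(np.abs(x - np.median(x)))
    return -len(x) * (np.log(2 * b) + 1)

def laplace_lr_heteroscedasticity(x1, x2):
    """LR statistic for H0: the two groups share one Laplace scale
    (each keeps its own location). Refer to chi-square with 1 df."""
    pooled = np.concatenate([x1 - np.median(x1), x2 - np.median(x2)])
    return 2 * (laplace_loglik(x1) + laplace_loglik(x2) - laplace_loglik(pooled))

rng = np.random.default_rng(3)
same = laplace_lr_heteroscedasticity(rng.laplace(size=300),
                                     rng.laplace(size=300))
diff = laplace_lr_heteroscedasticity(rng.laplace(size=300),
                                     rng.laplace(scale=3.0, size=300))
```

Equal-scale groups give a small statistic, while a threefold scale difference gives a statistic far beyond any conventional chi-square critical value.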

  10. Restyling the Humanities Curriculum of Higher Education for Posthuman Times

    Science.gov (United States)

    Siddiqui, Jamila R.

    2016-01-01

    The future viability of the humanities in higher education has been broadly debated. Yet, most of these debates are missing an important consideration. The humanities' object of study is the human, an object that some would argue has been replaced in our onto-epistemological systems by the posthuman. In her 2013 book, "The Posthuman,"…

  11. Rebooting Irish Higher Education: Policy Challenges for Challenging Times

    Science.gov (United States)

    Hazelkorn, Ellen

    2014-01-01

    The 2008 global financial crisis cast a long shadow over Ireland's higher education and research system. The IMF said Ireland experienced an "unprecedented economic correction", while Ireland's National Economic and Social Development Office said Ireland was beset by five different crises: a banking crisis, a fiscal crisis, an economic…

  12. Higher Education in Times of Financial Distress: The Minnesota Experience

    Science.gov (United States)

    Severns, Roger

    2012-01-01

    Like many states, Minnesota has incurred large budget deficits during the past two years. Those deficits have, in turn, led to changes in a number of areas of state government, particularly higher education. Faculty have incurred pay freezes and layoffs, programs have closed, and tuition increased. Campuses within the MnSCU system have been…

  13. Learning and Teaching Problems in Part-Time Higher Education.

    Science.gov (United States)

    Trotman-Dickenson, D. I.

    1988-01-01

    Results of a British survey of the administrations of six universities and six public colleges, employers, and employees who were part-time students are reported and discussed. The survey assessed the perceptions of those groups concerning problems in the instruction and learning of part-time students. (MSE)

  14. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.

    2014-12-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  15. Effects on noise properties of GPS time series caused by higher-order ionospheric corrections

    Science.gov (United States)

    Jiang, Weiping; Deng, Liansheng; Li, Zhao; Zhou, Xiaohui; Liu, Hongfei

    2014-04-01

    Higher-order ionospheric (HOI) effects are one of the principal technique-specific error sources in precise global positioning system (GPS) analysis. These effects also influence the non-linear characteristics of GPS coordinate time series. In this paper, we investigate these effects on coordinate time series in terms of seasonal variations and noise amplitudes. Both power spectral techniques and maximum likelihood estimators (MLE) are used to evaluate these effects quantitatively and qualitatively. Our results show an overall improvement in the analysis of global sites if HOI effects are considered. We note that the noise spectral index used for the determination of the optimal noise models in our analysis ranged between -1 and 0 both with and without HOI corrections, implying that the coloured noise cannot be removed by these corrections. However, the corrections were found to improve the noise properties of global sites. After the corrections were applied, the noise amplitudes at most sites decreased, among which the white noise amplitudes decreased most markedly: the white noise amplitudes of up to 81.8% of the selected sites decreased in the up component, and the flicker noise of 67.5% of the sites decreased in the north component. Stacked periodogram results show that, whether or not HOI effects are considered, a common fundamental period of 1.04 cycles per year (cpy), together with the expected annual and semi-annual signals, can explain all peaks of the north and up components well. For the east component, however, reasonable results can be obtained only with HOI corrections. HOI corrections are useful for better detecting periodic signals in GPS coordinate time series. Moreover, the corrections contributed partly to the seasonal variations at the selected sites, especially for the up component. Statistically, HOI corrections reduced more than 50% and more than 65% of the annual and semi-annual amplitudes respectively at the
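    The noise spectral index used above to distinguish white (index 0), flicker (-1), and random-walk (-2) noise can be estimated crudely from the slope of the log-log periodogram. This is a sketch of the spectral approach only; studies such as this one use MLE for the final noise-model selection:

```python
import numpy as np

def spectral_index(series, dt=1.0):
    """Estimate the power-law noise spectral index kappa, P(f) ~ f**kappa,
    as the slope of a straight-line fit to the log-log periodogram
    (kappa ~ 0: white noise, ~ -1: flicker noise, ~ -2: random walk)."""
    x = np.asarray(series, float) - np.mean(series)
    f = np.fft.rfftfreq(len(x), d=dt)[1:]          # drop the zero frequency
    p = np.abs(np.fft.rfft(x))[1:] ** 2            # raw periodogram
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return slope

rng = np.random.default_rng(4)
white = spectral_index(rng.normal(size=4096))             # expect near 0
walk = spectral_index(np.cumsum(rng.normal(size=4096)))   # expect near -2
```

An index between -1 and 0, as reported in the abstract, indicates a mixture of white and flicker noise that persists with or without HOI corrections.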

  16. Timing and Magnitude of Initial Change in Disease Activity Score 28 Predicts the Likelihood of Achieving Low Disease Activity at 1 Year in Rheumatoid Arthritis Patients Treated with Certolizumab Pegol: A Post-hoc Analysis of the RAPID 1 Trial

    NARCIS (Netherlands)

    van der Heijde, Désirée; Keystone, Edward C.; Curtis, Jeffrey R.; Landewé, Robert B.; Schiff, Michael H.; Khanna, Dinesh; Kvien, Tore K.; Ionescu, Lucian; Gervitz, Leon M.; Davies, Owen R.; Luijtens, Kristel; Furst, Daniel E.

    2012-01-01

    Objective. To determine the relationship between timing and magnitude of Disease Activity Score [DAS28(ESR)] nonresponse (DAS28 improvement thresholds not reached) during the first 12 weeks of treatment with certolizumab pegol (CZP) plus methotrexate, and the likelihood of achieving low disease

  17. Higher time derivatives of the generalized Liapunov function

    International Nuclear Information System (INIS)

    Schieve, W.C.; Bulsara, A.R.

    1975-01-01

    Using the generalized N-body expression for a Liapunov functional developed by Prigogine and coworkers, a condition is obtained whereby the successive time derivatives of this function alternate in sign for weakly coupled systems. This generalized Liapunov function contains contributions from the diagonal as well as off-diagonal (correlation) components of the density matrix. The alternating-sign condition is applied (and seen to hold true) for the cases of elastic phonon scattering in a lattice, three-phonon scattering (the anharmonic lattice), and the quantum electron gas. It is also proved simply for the Friedrichs model.

  18. Likelihood devices in spatial statistics

    NARCIS (Netherlands)

    Zwet, E.W. van

    1999-01-01

    One of the main themes of this thesis is the application of modern semi- and nonparametric methods to spatial data. Another, closely related theme is maximum likelihood estimation from spatial data. Maximum likelihood estimation is not common practice in spatial statistics. The method of moments

  19. Part-Time Higher Education in English Colleges: Adult Identities in Diminishing Spaces

    Science.gov (United States)

    Esmond, Bill

    2015-01-01

    Adult participation in higher education has frequently entailed mature students studying part time in lower-ranked institutions. In England, higher education policies have increasingly emphasised higher education provision in vocational further education colleges, settings which have extensive adult traditions but which mainly teach…

  20. Time as the Fourth Dimension in the Globalization of Higher Education

    Science.gov (United States)

    Walker, Judith

    2009-01-01

    This paper calls for an analysis of time to be integrated into the theories on the globalization of higher education. Specifically, the author argues that academic capitalism, fuelled by globalization, has led to changes in the university visible in time/space compression, time acceleration, the reification of time and our internalization of the…

  1. The Benefits of Part-Time Undergraduate Study and UK Higher Education Policy: A Literature Review

    Science.gov (United States)

    Bennion, Alice; Scesa, Anna; Williams, Ruth

    2011-01-01

    Part-time study in the UK is significant: nearly 40 per cent of higher education students study part-time. This article reports on a literature review that sought to understand the economic and social benefits of part-time study in the UK. It concludes that there are substantial and wide-ranging benefits from studying part-time. The article also…

  2. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works…
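As a hedged illustration of the tractable likelihood this record refers to: when an Ornstein-Uhlenbeck process is observed directly (the easy case, not the integrated, noise-contaminated case the authors treat with a simulated EM-algorithm), its exact discrete-time transition is Gaussian AR(1), and the maximum likelihood estimates have closed form. The sketch below is a generic textbook estimator, not the authors' algorithm; all names are illustrative.

```python
import math
import random

def ou_mle(x, dt):
    """Closed-form conditional MLE for an Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW, observed directly at spacing dt.
    Exploits the exact AR(1) transition X_{k+1} = mu + a*(X_k - mu) + eps
    with a = exp(-theta*dt) and Gaussian noise eps.
    """
    prev, curr = x[:-1], x[1:]
    n = len(prev)
    sp, sc = sum(prev), sum(curr)
    spp = sum(p * p for p in prev)
    spc = sum(p * c for p, c in zip(prev, curr))
    # Least-squares fit of the AR(1) regression X_{k+1} = b + a*X_k + eps.
    a = (n * spc - sp * sc) / (n * spp - sp * sp)
    b = (sc - a * sp) / n
    resid_var = sum((c - (b + a * p)) ** 2 for p, c in zip(prev, curr)) / n
    theta = -math.log(a) / dt                      # mean-reversion rate
    mu = b / (1 - a)                               # long-run mean
    sigma2 = resid_var * 2 * theta / (1 - a * a)   # diffusion variance
    return theta, mu, sigma2

# Simulate a well-sampled OU path and recover the parameters.
random.seed(1)
theta0, mu0, sigma0, dt = 2.0, 1.0, 0.5, 0.01
a0 = math.exp(-theta0 * dt)
noise_sd = math.sqrt(sigma0 ** 2 * (1 - a0 * a0) / (2 * theta0))
path = [mu0]
for _ in range(20000):
    path.append(mu0 + a0 * (path[-1] - mu0) + random.gauss(0, noise_sd))
theta_hat, mu_hat, sigma2_hat = ou_mle(path, dt)
```

With 20,000 observations the estimates land close to the true values (theta = 2, mu = 1, sigma² = 0.25); the authors' harder setting replaces the directly observed path with integrals plus measurement error, which is why an EM step is needed there.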

  3. "The Balancing Act"--Irish Part-Time Undergraduate Students in Higher Education

    Science.gov (United States)

    Darmody, Merike; Fleming, Bairbre

    2009-01-01

    While the numbers of part-time students has increased in higher education in Ireland, little is known about these students or about how they balance their study and other commitments. Drawing on a larger study on Irish students' experiences in higher education, this article attempts to address this gap in research and reports on Irish part-time…

  4. Higher Education Institution Leaders' Identity Constructions in Times of Changing Structures and Legislation

    Science.gov (United States)

    Tigerstedt, Christa

    2016-01-01

    The focus in this paper is on the leadership of higher education institutions (HEI) in Finland and more specifically on the rector's leadership. The higher education sector is undergoing many changes and has been so for a long time. How, then, do the current changes become visible from a leadership perspective? The leadership discourse is here…

  5. The Supply of Part-Time Higher Education in the UK. Research Report

    Science.gov (United States)

    Callender, Claire; Birkbeck, Anne Jamieson; Mason, Geoff

    2010-01-01

    This report explores the supply of part-time higher education in the UK, with particular consideration to the study of part-time undergraduate provision in England. It is the final publication in the series of reports on individual student markets that were commissioned by Universities UK following the publication of the reports on the Future size…

  6. The Motivations and Outcomes of Studying for Part-Time Mature Students in Higher Education

    Science.gov (United States)

    Swain, Jon; Hammond, Cathie

    2011-01-01

    This paper examines the motivations and outcomes for mature students who study part-time in higher education (HE) in the UK. Although many students in HE are mature part-time learners, they have not been the specific focus of much research or policy interest. In-depth narrative interviews were carried out with 18 graduates who had studied…

  7. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed param...

  8. Detrended fluctuation analysis based on higher-order moments of financial time series

    Science.gov (United States)

    Teng, Yue; Shang, Pengjian

    2018-01-01

    In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as a stock market. We extend DFA and local scaling DFA to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA), so as to investigate the volatility scaling property of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the results of volatility behaviors in three American, three Chinese and three European stock markets, using the DFA and local scaling DFA methods based on higher moments. They demonstrate the dynamic behaviors of the time series in different aspects, which can quantify changes in complexity for stock market data and provide more meaningful information than a single exponent. The results reveal details of higher-moment volatility and higher-moment multiscale volatility that cannot be obtained using the traditional DFA method.
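The classical second-moment DFA that this entry generalizes proceeds in three steps: integrate the centered series into a profile, detrend it linearly within windows of each scale, and take the RMS residual per scale. Below is a minimal pure-Python sketch of ordinary DFA-1, not the authors' higher-moment SMDFA/KMDFA variants; the function name and demo data are illustrative.

```python
import random

def dfa(x, scales):
    """Ordinary (second-moment) detrended fluctuation analysis.
    Returns the RMS fluctuation F(s) for each scale s; the DFA exponent
    is the slope of log F(s) versus log s.
    """
    n = len(x)
    mean = sum(x) / n
    # Step 1: the "profile" -- cumulative sum of the centered series.
    profile, total = [], 0.0
    for v in x:
        total += v - mean
        profile.append(total)

    flucts = []
    for s in scales:
        resid_sq, count = 0.0, 0
        for start in range(0, n - s + 1, s):
            seg = profile[start:start + s]
            # Step 2: remove a least-squares linear trend in each window.
            t = range(s)
            tbar = (s - 1) / 2.0
            ybar = sum(seg) / s
            denom = sum((ti - tbar) ** 2 for ti in t)
            slope = sum((ti - tbar) * (yi - ybar)
                        for ti, yi in zip(t, seg)) / denom
            for ti, yi in zip(t, seg):
                resid_sq += (yi - (ybar + slope * (ti - tbar))) ** 2
                count += 1
        # Step 3: RMS of the detrended residuals at this scale.
        flucts.append((resid_sq / count) ** 0.5)
    return flucts

# For white noise, F(s) grows roughly like s**0.5 (DFA exponent ~ 0.5).
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
F = dfa(noise, [4, 8, 16, 32, 64])
```

The higher-moment variants in the paper replace the second-moment residual statistic in step 3 with skewness- or kurtosis-based summaries of the detrended residuals.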

  9. Shorter Perceived Outpatient MRI Wait Times Associated With Higher Patient Satisfaction.

    Science.gov (United States)

    Holbrook, Anna; Glenn, Harold; Mahmood, Rabia; Cai, Qingpo; Kang, Jian; Duszak, Richard

    2016-05-01

    The aim of this study was to assess differences between perceived and actual wait times among patients undergoing outpatient MRI examinations, and to correlate those times with patient satisfaction. Over 15 weeks, 190 patients presenting for outpatient MRI in a radiology department in which "patient experience" is one of the stated strategic priorities were asked to (1) estimate their wait times for various stages in the imaging process and (2) state their satisfaction with their imaging experience. Perceived times were compared with actual electronic time stamps, and both were correlated with standardized satisfaction scores using the Kendall τ correlation. The mean actual wait time between patient arrival and examination start was 53.4 ± 33.8 min, whereas patients perceived a mean wait time of 27.8 ± 23.1 min, a statistically significant underestimation of 25.6 min. Shorter perceived wait times at all points during patient encounters were correlated with higher satisfaction scores, and shorter perceived and actual wait times were both correlated with higher satisfaction scores. As satisfaction surveys play a larger role in an environment of metric transparency and value-based payments, a better understanding of such factors will be increasingly important. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
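The Kendall τ correlation used in this study is simple to state: count concordant versus discordant pairs of observations. Below is a minimal pure-Python τ-a (no tie correction, unlike the τ-b most statistics packages report), run on invented numbers; the data are illustrative only and not from the study.

```python
def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): the excess of concordant over
    discordant pairs, divided by the total number of pairs."""
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Invented illustration: longer perceived waits paired with lower
# satisfaction scores yield a strongly negative tau.
waits = [10, 20, 30, 40, 50]
scores = [5, 4, 4, 2, 1]
tau = kendall_tau(waits, scores)   # -> -0.9 (the one tied pair contributes 0)
```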

  10. The peculiarities' study of higher education applicants' employment in pharmaceutical specialties of full-time training

    Directory of Open Access Journals (Sweden)

    A. A. Kotvitska

    2017-08-01

    The employment of applicants for pharmaceutical higher education has both positive and negative impacts on the quality of the educational services provided by institutions, especially in terms of the knowledge and skills acquired by the student. Objective: to study the peculiarities of employment among full-time higher education applicants and the conditions under which they conclude labor agreements. Materials and methods. Juridical and comparative legal methods of analysis were used. Results. The study identified the following features of the employment of higher education applicants in health care institutions, pharmaceutical enterprises and organizations. Current legislation grants applicants enrolled in full-time study at higher education institutions the right to a free choice of field of study, profession, type of occupation and work. The relationship between an applicant and a higher education institution is not to be regarded as an employment relationship. Work under a labor agreement by a person who combines it with full-time education is not considered secondary or combined employment but the main place of work; accordingly, the employee's record book is maintained according to the general procedure. An applicant for higher education may choose his or her working hours (full- or part-time working day, full- or part-time working week), taking the institution's schedule into account and working only in free time. When working full-time under a collective agreement at an enterprise, institution or organization, a non-standardized working day can be set for some positions, taking operational peculiarities into account. Current legislation also allows persons without higher pharmaceutical education to be employed in health care institutions in clearly defined positions. Conclusions. The country's authorities have created and are providing favorable

  11. A Linear-Elasticity Solver for Higher-Order Space-Time Mesh Deformation

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2018-01-01

    A linear-elasticity approach is presented for the generation of meshes appropriate for a higher-order space-time discontinuous finite-element method. The equations of linear-elasticity are discretized using a higher-order, spatially-continuous, finite-element method. Given an initial finite-element mesh, and a specified boundary displacement, we solve for the mesh displacements to obtain a higher-order curvilinear mesh. Alternatively, for moving-domain problems we use the linear-elasticity approach to solve for a temporally discontinuous mesh velocity on each time-slab and recover a continuous mesh deformation by integrating the velocity. The applicability of this methodology is presented for several benchmark test cases.

  12. "Times Higher Education" 100 under 50 Ranking: Old Wine in a New Bottle?

    Science.gov (United States)

    Soh, Kaycheng

    2013-01-01

    "Times Higher Education" 100 under 50 ranking is a new twist to the university ranking. It focuses on universities that have a history of 50 years or less with the purpose of offsetting the advantage of prestige of the older ones. This article re-analysed the data publicly available and looked into relevant conceptual and statistical issues. The…

  13. Employers' Demand for and the Provision of Part-Time Higher Education for Employees.

    Science.gov (United States)

    Trotman-Dickenson, Danusia

    1987-01-01

    A study of public and private employers' demand for part-time higher education for their employees and the response of institutions is reported. The study focuses on Wales and on the regional economic and social trends affecting educational demand and supply. Improved communication between employers, employees, and institutions is recommended.…

  14. Higher order multi-term time-fractional partial differential equations involving Caputo-Fabrizio derivative

    Directory of Open Access Journals (Sweden)

    Erkinjon Karimov

    2017-10-01

    In this work we discuss a higher order multi-term partial differential equation (PDE) with the Caputo-Fabrizio fractional derivative in time. Using the method of separation of variables, we reduce the fractional order partial differential equation to integer order. We represent the explicit solution of the formulated problem in a particular case by Fourier series.
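For reference, the Caputo-Fabrizio fractional derivative of order 0 < α < 1 used in this entry is usually defined with a nonsingular exponential kernel (in contrast to the power-law kernel of the classical Caputo derivative):

```latex
\left({}^{\mathrm{CF}}D^{\alpha} f\right)(t)
  = \frac{M(\alpha)}{1-\alpha}
    \int_{0}^{t} f'(s)\,
    \exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right)\, ds,
  \qquad 0 < \alpha < 1,
```

where M(α) is a normalization function satisfying M(0) = M(1) = 1.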

  15. Higher order multi-term time-fractional partial differential equations involving Caputo-Fabrizio derivative

    OpenAIRE

    Erkinjon Karimov; Sardor Pirnafasov

    2017-01-01

    In this work we discuss a higher order multi-term partial differential equation (PDE) with the Caputo-Fabrizio fractional derivative in time. Using the method of separation of variables, we reduce the fractional order partial differential equation to integer order. We represent the explicit solution of the formulated problem in a particular case by Fourier series.

  16. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were
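The MP method examined here scores a tree by the minimum number of substitutions needed to explain the observed characters; for a single site on a rooted binary tree this minimum is computed by the Fitch algorithm. A minimal sketch follows, with an invented four-taxon example (the tree shape and character states are hypothetical, not from the paper's simulations):

```python
def fitch(tree, states):
    """Fitch small-parsimony pass for one site on a rooted binary tree.
    `tree` is a nested tuple of leaf names; `states` maps leaf -> base.
    Returns (candidate state set at the root, substitution count).
    """
    if isinstance(tree, str):                     # leaf node
        return {states[tree]}, 0
    left, right = tree
    lset, lcost = fitch(left, states)
    rset, rcost = fitch(right, states)
    common = lset & rset
    if common:                                    # children agree: no change
        return common, lcost + rcost
    return lset | rset, lcost + rcost + 1         # disagreement: one change

# Hypothetical 4-taxon tree ((A,B),(C,D)) with one variable site:
# one substitution suffices (T on one side, G on the other).
root_set, changes = fitch((("A", "B"), ("C", "D")),
                          {"A": "T", "B": "T", "C": "G", "D": "G"})
```

Summing this count over all sites and minimizing over tree topologies gives the MP tree; the likelihood method replaces the count with a probabilistic model of substitution along each branch.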

  17. Ego involvement increases doping likelihood.

    Science.gov (United States)

    Ring, Christopher; Kavussanu, Maria

    2018-08-01

    Achievement goal theory provides a framework to help understand how individuals behave in achievement contexts, such as sport. Evidence concerning the role of motivation in the decision to use banned performance enhancing substances (i.e., doping) is equivocal on this issue. The extant literature shows that dispositional goal orientation has been weakly and inconsistently associated with doping intention and use. It is possible that goal involvement, which describes the situational motivational state, is a stronger determinant of doping intention. Accordingly, the current study used an experimental design to examine the effects of goal involvement, manipulated using direct instructions and reflective writing, on doping likelihood in hypothetical situations in college athletes. The ego-involving goal increased doping likelihood compared to no goal and a task-involving goal. The present findings provide the first evidence that ego involvement can sway the decision to use doping to improve athletic performance.

  18. Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.

    Science.gov (United States)

    Kobert, K; Stamatakis, A; Flouri, T

    2017-03-01

    The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
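A simplified, column-level version of the idea (the paper's method goes further and detects repeats at the level of subtrees, not just whole columns): collapse identical alignment columns into unique site patterns with multiplicities, so each pattern's conditional likelihood is computed once and then weighted. The function and data names below are illustrative.

```python
from collections import Counter

def compress_alignment(sequences):
    """Collapse identical alignment columns into unique site patterns
    plus multiplicities, so each pattern's conditional likelihood is
    evaluated once and weighted in the log-likelihood sum."""
    counts = Counter(zip(*sequences))   # one tuple per alignment column
    patterns = list(counts)
    weights = [counts[p] for p in patterns]
    return patterns, weights

# Hypothetical 3-taxon alignment of length 4 with two repeated columns:
# only 2 unique patterns remain, each with weight 2.
aln = ["ACAC",
       "AGAG",
       "ATAT"]
patterns, weights = compress_alignment(aln)
```

The total log-likelihood is then `sum(w * site_loglik(p) for p, w in zip(patterns, weights))`, which is where the run-time saving comes from.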

  19. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea

    2013-03-16

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.

  20. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  1. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël; Davison, Anthony C.; Genton, Marc G.

    2015-01-01

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  2. Likelihood estimators for multivariate extremes

    KAUST Repository

    Huser, Raphaël

    2015-11-17

    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.

  3. Increased Total Anesthetic Time Leads to Higher Rates of Surgical Site Infections in Spinal Fusions.

    Science.gov (United States)

    Puffer, Ross C; Murphy, Meghan; Maloney, Patrick; Kor, Daryl; Nassr, Ahmad; Freedman, Brett; Fogelson, Jeremy; Bydon, Mohamad

    2017-06-01

    A retrospective review of a consecutive series of spinal fusions comparing patient and procedural characteristics of patients who developed surgical site infections (SSIs) after spinal fusion. It is known that increased surgical time (incision to closure) is associated with a higher rate of postoperative SSIs. We sought to determine whether increased total anesthetic time (intubation to extubation) is also a factor in the development of SSIs. In spine surgery for deformity and degenerative disease, SSI has been associated with operative time, with a nearly 10-fold increase in SSI rates in prolonged surgery. Surgical time is associated with infections in other surgical disciplines as well. No studies have reported whether total anesthetic time (intubation to extubation) is associated with SSIs. Surgical records were searched retrospectively to identify all spine fusion procedures performed between January 2010 and July 2012. All SSIs during that timeframe were recorded and compared with the list of cases performed between 2010 and 2012 in a case-control design. There were 20 (1.7%) SSIs in this fusion cohort. On univariate analyses of operative factors, there were significant associations between infection and both total anesthetic time (infection 7.6 ± 0.5 hrs vs. no infection 6.0 ± 0.1 hrs) and operative time (infection 5.5 ± 0.4 hrs vs. no infection 4.4 ± 0.06 hrs), whereas level of pathology and emergent surgery were not significant. On multivariate logistic analysis, BMI and total anesthetic time remained independent predictors of SSI, whereas ASA status and operative time did not. Increasing BMI and total anesthetic time were independent predictors of SSIs in this cohort of over 1000 consecutive spinal fusions. Level of evidence: 3.

  4. YAOPBM-II: extension to higher degrees and to shorter time series

    Energy Technology Data Exchange (ETDEWEB)

    Korzennik, S G [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States)], E-mail: skorzennik@cfa.harvard.edu

    2008-10-15

    In 2005, I presented a new fitting methodology (Yet AnOther Peak Bagging Method, YAOPBM), derived for very long time series (2088 days long) and applied it to low degree modes, ℓ ≤ 25. That very long time series was also sub-divided into shorter segments (728 days long) that were each fitted over the same range of degrees, to estimate changes with solar activity levels. I present here the extension of this method in several 'directions': a) to substantially higher degrees (ℓ ≤ 125); b) to shorter time series (364 and 182 days long); and c) to additional 728-day-long segments, covering now some 10 years of observations. I discuss issues with the fitting, namely the leakage matrix and the f- and p1 modes at very low frequencies, and I present some of the characteristics of the observed temporal changes.

  5. Euclidean scalar Green function in a higher dimensional global monopole space-time

    International Nuclear Information System (INIS)

    Bezerra de Mello, E.R.

    2002-01-01

    We construct the explicit Euclidean scalar Green function associated with a massless field in a higher dimensional global monopole space-time, i.e., a (1+d)-space-time with d ≥ 3 which presents a solid angle deficit. Our result is expressed in terms of an infinite sum of products of Legendre functions with Gegenbauer polynomials. Although this Green function cannot be expressed in a closed form, for the specific case where the solid angle deficit is very small it is possible to develop the sum and obtain the Green function in a more workable expression. Having this expression it is possible to calculate the vacuum expectation value of some relevant operators. As an application of this formalism, we calculate the renormalized vacuum expectation value of the square of the scalar field, ⟨φ²(x)⟩_Ren, and of the energy-momentum tensor, ⟨T_μν(x)⟩_Ren, for the global monopole space-time with spatial dimensions d = 4 and d = 5

  6. Maximum likelihood of phylogenetic networks.

    Science.gov (United States)

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

  7. Higher spins tunneling from a time dependent and spherically symmetric black hole

    International Nuclear Information System (INIS)

    Siahaan, Haryanto M.

    2016-01-01

    The discussions of Hawking radiation via the tunneling method have been performed extensively in the case of scalar particles. Moreover, there are also several works discussing the tunneling method for Hawking radiation using higher spins, e.g. neutrinos, photons, and gravitinos, in the background of static black holes. Interestingly, it is found that the Hawking temperature for static black holes computed using higher spin particles is no different from the one computed using scalars. In this paper, we study the Hawking radiation for a spherically symmetric and time dependent black hole using the tunneling of Dirac particles, photons, and gravitinos. We find that the obtained Hawking temperature is similar to the one derived in the tunneling method using scalars. (orig.)

  8. Higher spins tunneling from a time dependent and spherically symmetric black hole

    Energy Technology Data Exchange (ETDEWEB)

    Siahaan, Haryanto M. [Parahyangan Catholic University, Physics Department, Bandung (Indonesia)

    2016-03-15

    The discussions of Hawking radiation via the tunneling method have been performed extensively in the case of scalar particles. Moreover, there are also several works discussing the tunneling method for Hawking radiation using higher spins, e.g. neutrinos, photons, and gravitinos, in the background of static black holes. Interestingly, it is found that the Hawking temperature for static black holes computed using higher spin particles is no different from the one computed using scalars. In this paper, we study the Hawking radiation for a spherically symmetric and time dependent black hole using the tunneling of Dirac particles, photons, and gravitinos. We find that the obtained Hawking temperature is similar to the one derived in the tunneling method using scalars. (orig.)

  9. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    Science.gov (United States)

    Dhar, Amrit

    2017-01-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  10. Flexible Learning and Teaching: Looking Beyond the Binary of Full-time/Part-time Provision in South African Higher Education

    Directory of Open Access Journals (Sweden)

    Barbara M Jones

    2015-06-01

    This paper engages with the literature on flexible learning and teaching in order to explore whether it may be possible, within the South African context, for flexible learning and teaching to provide a third way beyond the current practice of full-time/part-time provision. This binary classification of students is a proxy for daytime/after-hours delivery. The argument is made that effective, flexible learning and teaching requires a fundamental shift in thinking about learning and teaching in higher education that moves us beyond such binaries. The paper proposes that, in order to ensure access and success for students, 'common knowledge' (Edwards, 2010) will need to be co-constructed which understands flexible learning and teaching in ways that meet the needs of a diversity of students, including working students. It will require 'resourceful leadership' (Edwards, 2014) within the university that recognises, enhances and gives purpose to the capability of colleagues at every level of the systems they lead. It will also require the building of 'common knowledge' between certain sectors of universities and particular workplaces.

  11. TIME MANAGEMENT SKILLS IN HIGHER INSTITUTIONS: A CASE STUDY OF ELECTRICAL, ELECTRONIC & SYSTEMS ENGINEERING UNDERGRADUATE STUDENTS

    Directory of Open Access Journals (Sweden)

    NORBAHIAH MISRAN

    2016-11-01

Full Text Available Time management is an important skill that every student in higher education institutions should acquire, since it is one of the key factors in assuring excellent academic achievement. Students with poor time-management skills are far more likely to be stressed, which in turn has a negative impact on their quality of life. This paper therefore examines the issue through a study of Electrical, Electronic & Systems Engineering students at Universiti Kebangsaan Malaysia, grouped by year of study, and establishes the relationship with the students’ academic performance. Data were collected using a questionnaire administered to 272 undergraduate students from year one to year four in the 2015/2016 session. These data were then analysed using ANOVA and Pearson correlations. Results revealed that the time management skills of the respondents were at a moderate level and were negatively correlated with year of study. The study also found that time management skills have a significant positive, but weak, correlation with students’ academic performance. These findings suggest the need for additional research to further refine the justification of these measures. The university is also expected to provide a good platform for students to develop their time management skills at the early stage of their admission to university.
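The analyses reported (Pearson correlation with academic performance, ANOVA across years of study) can be sketched with scipy on synthetic scores. All variable names, effect sizes and the sample construction below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway

rng = np.random.default_rng(11)
# hypothetical scores for 272 students: time-management skill (scale 1-5),
# CGPA with a weak positive dependence on skill, and year of study (1-4)
skills = rng.uniform(2.0, 4.5, size=272)
cgpa = 2.0 + 0.3 * skills + rng.normal(0.0, 0.5, size=272)
year = rng.integers(1, 5, size=272)

r, p_value = pearsonr(skills, cgpa)                     # skills vs performance
F, p_anova = f_oneway(*(skills[year == y] for y in range(1, 5)))  # across years
```

With this construction the correlation is positive but weak, mirroring the pattern the abstract describes.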

  12. Wavelet Transform Based Higher Order Statistical Analysis of Wind and Wave Time Histories

    Science.gov (United States)

    Habib Huseni, Gulamhusenwala; Balaji, Ramakrishnan

    2017-10-01

Wind blowing over the surface of the ocean imparts the energy that generates waves, and understanding wind-wave interactions is essential for the oceanographer. This study involves higher order spectral analyses, via the continuous wavelet transform, of time histories of wind speed and significant wave height extracted from the European Centre for Medium-Range Weather Forecasts database at an offshore location off the Mumbai coast. The time histories were divided by season (pre-monsoon, monsoon, post-monsoon and winter) and the analyses were carried out on the individual data sets to assess the effect of the seasons on the wind-wave interactions. The analysis revealed the frequency coupling of wind speeds and wave heights across the various seasons. The details of the data, analysing technique and results are presented in this paper.
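A continuous wavelet transform of the kind used here can be sketched with a complex Morlet mother wavelet and plain convolution. The signal, scale range and wavelet parameters below are illustrative assumptions, not the ECMWF data or the paper's settings.

```python
import numpy as np

def morlet_cwt(x, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform with a complex Morlet mother wavelet,
    computed by direct convolution. Returns array (len(scales), len(x))."""
    x = np.asarray(x, dtype=float)
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        # wavelet support: roughly +/- 4 envelope widths
        M = int(8 * s / dt) + 1
        t = (np.arange(M) - M // 2) * dt
        psi = np.pi ** -0.25 * np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi /= np.sqrt(s / dt)          # energy normalisation across scales
        out[i] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return out

# toy "wind speed" series: two superposed oscillations (periods 32 and 8)
n = 512
tt = np.arange(n)
sig = np.sin(2 * np.pi * tt / 32) + 0.5 * np.sin(2 * np.pi * tt / 8)
power = np.abs(morlet_cwt(sig, scales=np.arange(2, 40))) ** 2
```

The scale-averaged power of the stronger, longer-period component dominates, which is how dominant wave/wind periodicities show up in a wavelet power spectrum.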

  13. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
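The projected-gradient idea can be illustrated in its plain (non-accelerated) form on a single qubit: ascend the log-likelihood and project back onto the set of density matrices after each step. The POVM, counts and step size below are hypothetical; the paper's algorithm adds acceleration and careful step-size control.

```python
import numpy as np

def project_to_density(H):
    """Project a Hermitian matrix onto the density matrices (PSD, trace one)
    by projecting its eigenvalues onto the probability simplex."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u + (1 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[k] - 1) / (k + 1)
    p = np.maximum(w - tau, 0)
    return (V * p) @ V.conj().T

def mle_tomography(povm, counts, n_iter=2000, step=0.1):
    """Plain projected-gradient ascent on sum_k f_k log tr(rho E_k)."""
    d = povm[0].shape[0]
    rho = np.eye(d, dtype=complex) / d
    f = np.asarray(counts, dtype=float) / np.sum(counts)
    for _ in range(n_iter):
        probs = np.array([max(np.trace(rho @ E).real, 1e-12) for E in povm])
        grad = sum(fk / pk * E for fk, pk, E in zip(f, probs, povm))
        rho = project_to_density(rho + step * grad)
    return rho

# hypothetical single-qubit data: X and Z basis measurements (POVM sums to I)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
povm = [(I2 + X) / 4, (I2 - X) / 4, (I2 + Z) / 4, (I2 - Z) / 4]
counts = [250, 250, 450, 50]     # consistent with Bloch vector (0, 0, 0.8)
rho_hat = mle_tomography(povm, counts)
```

The projection step is what keeps every iterate a physical state, which is the "quantum nature of the problem" the abstract refers to.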

  14. Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells.

    Science.gov (United States)

    Voelker, Aaron R; Eliasmith, Chris

    2018-03-01

    Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.

  15. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability, that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]exp[-((1/lambda)+(1/theta))t] for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
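Under the abstract's parametrisation (lambda and theta are the means of the exponential time-to-failure and time-to-repair densities), the MLEs of the parameters are just the sample means, and the plug-in estimates of A(t) and A(infinity) follow directly. The failure/repair data below are simulated stand-ins.

```python
import numpy as np

def availability_mle(x, y, t):
    """Plug-in MLE of instantaneous and steady-state availability, with
    lam and th the MEANS of the exponential time-to-failure and
    time-to-repair densities (the abstract's parametrisation)."""
    lam, th = np.mean(x), np.mean(y)   # sample means are the MLEs of the means
    a_inf = lam / (lam + th)
    a_t = a_inf + (th / (lam + th)) * np.exp(-(1.0/lam + 1.0/th) * t)
    return a_t, a_inf

rng = np.random.default_rng(1)
x = rng.exponential(100.0, size=50)   # hours to failure (simulated)
y = rng.exponential(8.0, size=50)     # hours to repair (simulated)
a_t, a_inf = availability_mle(x, y, t=24.0)
```

At t = 0 the estimate equals 1 (the plant starts operational) and it decays toward the steady-state value A(infinity) = lam/(lam + th).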

  16. A novel fast gas chromatography method for higher time resolution measurements of speciated monoterpenes in air

    Science.gov (United States)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2014-05-01

Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in ambient air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C9-C15 BVOC composition of single plant emissions may be characterised within a 14.5 min analysis time. Moreover, in-situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an 11.7 min chromatographic separation time (increasing to 19.7 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). These analysis times potentially allow for a twofold to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in-situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC (OBVOC) linalool in ambient air during this field deployment within a suburban forest.

  17. Maximum likelihood versus likelihood-free quantum system identification in the atom maser

    International Nuclear Information System (INIS)

    Catana, Catalin; Kypraios, Theodore; Guţă, Mădălin

    2014-01-01

    We consider the problem of estimating a dynamical parameter of a Markovian quantum open system (the atom maser), by performing continuous time measurements in the system's output (outgoing atoms). Two estimation methods are investigated and compared. Firstly, the maximum likelihood estimator (MLE) takes into account the full measurement data and is asymptotically optimal in terms of its mean square error. Secondly, the ‘likelihood-free’ method of approximate Bayesian computation (ABC) produces an approximation of the posterior distribution for a given set of summary statistics, by sampling trajectories at different parameter values and comparing them with the measurement data via chosen statistics. Building on previous results which showed that atom counts are poor statistics for certain values of the Rabi angle, we apply MLE to the full measurement data and estimate its Fisher information. We then select several correlation statistics such as waiting times, distribution of successive identical detections, and use them as input of the ABC algorithm. The resulting posterior distribution follows closely the data likelihood, showing that the selected statistics capture ‘most’ statistical information about the Rabi angle. (paper)
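The ABC logic used here (simulate at proposed parameter values, keep those whose summary statistics land close to the data's) can be sketched in a toy setting far simpler than the atom maser. A Poisson rate with a mean-count summary statistic is an illustrative stand-in for the Rabi angle and the trajectory statistics.

```python
import numpy as np

rng = np.random.default_rng(42)

# "observed" data: Poisson counts with an unknown rate (true rate 4.0, hypothetical)
observed = rng.poisson(4.0, size=100)
s_obs = observed.mean()                      # chosen summary statistic

# ABC rejection: draw rate from the prior, simulate, keep if summary is close
prior_draws = rng.uniform(0.0, 10.0, size=20000)
kept = []
for lam in prior_draws:
    sim = rng.poisson(lam, size=100)
    if abs(sim.mean() - s_obs) < 0.1:        # tolerance on the summary
        kept.append(lam)
kept = np.array(kept)
approx_posterior_mean = kept.mean()
```

Just as in the abstract, the quality of the approximation hinges on whether the chosen summary captures most of the statistical information about the parameter.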

  18. Agency Beliefs Over Time and Across Cultures: Free Will Beliefs Predict Higher Job Satisfaction

    Science.gov (United States)

    Feldman, Gilad; Farh, Jiing-Lih; Wong, Kin Fai Ellick

    2017-01-01

In three studies, we examined the relationship between free will beliefs and job satisfaction over time and across cultures. Study 1 examined 252 Taiwanese real-estate agents over a 3-month period. Study 2 examined job satisfaction for 137 American workers on an online labor market over a 6-month period. Study 3 extended to a large sample of 14,062 employees from 16 countries and examined country-level moderators. We found a consistent positive relationship between the belief in free will and job satisfaction. The relationship was above and beyond other agency constructs (Study 2), mediated by perceived autonomy (Studies 2-3), and stronger in countries with a higher national endorsement of the belief in free will (Study 3). We conclude that free-will beliefs predict outcomes over time and across cultures beyond other agency constructs. We call for more cross-cultural and longitudinal studies examining free-will beliefs as predictors of real-life outcomes. PMID:29191084

  19. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
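A minimal sketch of the cluster likelihood described here, written in the closed form usually attributed to Giada and Marsili in terms of the correlation matrix. Treat the exact expression as an assumption (quoted from memory), and the synthetic series as stand-ins for the financial/gene-expression data.

```python
import numpy as np

def cluster_log_likelihood(C, labels):
    """Log-likelihood of a cluster structure under the Giada-Marsili-style
    ansatz; depends on the data only through the correlation matrix C.
    Singleton clusters contribute zero."""
    Lc = 0.0
    for s in set(labels):
        idx = [i for i, l in enumerate(labels) if l == s]
        n = len(idx)
        if n < 2:
            continue
        c = C[np.ix_(idx, idx)].sum()       # internal correlation mass
        if c <= n:                           # no internal structure to reward
            continue
        Lc += 0.5 * (np.log(n / c) + (n - 1) * np.log((n*n - n) / (n*n - c)))
    return Lc

# toy data: two genuinely correlated pairs plus one independent series
rng = np.random.default_rng(0)
f1, f2, e = rng.normal(size=(3, 500))
X = np.vstack([f1, f1 + 0.3 * rng.normal(size=500),
               f2, f2 + 0.3 * rng.normal(size=500), e])
C = np.corrcoef(X)
L_true = cluster_log_likelihood(C, [0, 0, 1, 1, 2])        # true structure
L_singletons = cluster_log_likelihood(C, [0, 1, 2, 3, 4])  # no clustering
```

Maximizing this objective over label assignments (e.g. by greedy merging or simulated annealing) is the "fast and reliable approximation" step the abstract mentions; note that the all-singletons configuration scores zero, so any grouping of genuinely correlated series improves on it.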

  20. Likelihood for transcriptions in a genetic regulatory system under asymmetric stable Lévy noise.

    Science.gov (United States)

    Wang, Hui; Cheng, Xiujun; Duan, Jinqiao; Kurths, Jürgen; Li, Xiaofan

    2018-01-01

    This work is devoted to investigating the evolution of concentration in a genetic regulation system, when the synthesis reaction rate is under additive and multiplicative asymmetric stable Lévy fluctuations. By focusing on the impact of skewness (i.e., non-symmetry) in the probability distributions of noise, we find that via examining the mean first exit time (MFET) and the first escape probability (FEP), the asymmetric fluctuations, interacting with nonlinearity in the system, lead to peculiar likelihood for transcription. This includes, in the additive noise case, realizing higher likelihood of transcription for larger positive skewness (i.e., asymmetry) index β, causing a stochastic bifurcation at the non-Gaussianity index value α = 1 (i.e., it is a separating point or line for the likelihood for transcription), and achieving a turning point at the threshold value β≈-0.5 (i.e., beyond which the likelihood for transcription suddenly reversed for α values). The stochastic bifurcation and turning point phenomena do not occur in the symmetric noise case (β = 0). While in the multiplicative noise case, non-Gaussianity index value α = 1 is a separating point or line for both the MFET and the FEP. We also investigate the noise enhanced stability phenomenon. Additionally, we are able to specify the regions in the whole parameter space for the asymmetric noise, in which we attain desired likelihood for transcription. We have conducted a series of numerical experiments in "regulating" the likelihood of gene transcription by tuning asymmetric stable Lévy noise indexes. This work offers insights for possible ways of achieving gene regulation in experimental research.
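The mean first exit time (MFET) central to this analysis can be estimated by direct simulation of a Lévy-driven SDE. The drift, domain and noise intensity below are toy assumptions, not the paper's genetic-regulation model; scipy's `levy_stable` supplies skewed alpha-stable increments, which scale as dt^(1/alpha).

```python
import numpy as np
from scipy.stats import levy_stable

def mean_first_exit_time(alpha, beta, eps=0.5, dt=1e-2, n_paths=200,
                         t_max=20.0, x0=0.5, domain=(0.0, 2.0)):
    """Monte Carlo MFET from `domain` for dX = X(1-X) dt + eps dL_t,
    with L an alpha-stable Levy process of skewness beta
    (a toy drift standing in for the genetic-regulation nonlinearity)."""
    n_steps = int(t_max / dt)
    # alpha-stable increments over a step of length dt scale as dt**(1/alpha)
    dL = levy_stable.rvs(alpha, beta, size=(n_paths, n_steps),
                         scale=dt ** (1.0 / alpha), random_state=7)
    x = np.full(n_paths, x0)
    exit_t = np.full(n_paths, t_max)        # censor paths that never exit
    alive = np.ones(n_paths, dtype=bool)
    for k in range(n_steps):
        x[alive] += x[alive] * (1 - x[alive]) * dt + eps * dL[alive, k]
        exited = alive & ((x < domain[0]) | (x > domain[1]))
        exit_t[exited] = (k + 1) * dt
        alive &= ~exited
        if not alive.any():
            break
    return exit_t.mean()

mfet = mean_first_exit_time(alpha=1.5, beta=0.5)
```

Sweeping `alpha` and `beta` in such a simulation is the numerical-experiment analogue of the paper's tuning of the non-Gaussianity and skewness indexes.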

  1. Reduced Time in Therapeutic Range and Higher Mortality in Atrial Fibrillation Patients Taking Acenocoumarol.

    Science.gov (United States)

    Rivera-Caravaca, José Miguel; Roldán, Vanessa; Esteve-Pastor, María Asunción; Valdés, Mariano; Vicente, Vicente; Marín, Francisco; Lip, Gregory Y H

    2018-01-01

The efficacy and tolerability of vitamin K antagonists (VKAs) depend on the quality of anticoagulant control, reflected by the mean time in therapeutic range (TTR) of international normalized ratio 2.0 to 3.0. In the present study, we aimed to investigate the association of TTR and of change in TTR (ΔTTR) with the risk of mortality and clinically significant events in a consecutive cohort of atrial fibrillation (AF) patients. We included 1361 AF patients stable on VKAs (international normalized ratio 2.0-3.0) during at least the previous 6 months. After 6 months of follow-up we recalculated TTR, calculated ΔTTR (ie, the difference between baseline and 6-month TTRs) and investigated the association of both with the risk of mortality and "clinically significant events" (defined as the composite of stroke or systemic embolism, major bleeding, acute coronary syndrome, acute heart failure, and all-cause deaths). The median ΔTTR at 6 months after entry was 20% (interquartile range 0-34%), 796 (58.5%) patients had a TTR reduction of at least 20%, while 330 (24.2%) had a TTR <65%. During follow-up, 34 (2.5% [4.16% per year]) patients died and 61 (4.5% [7.47% per year]) had a clinically significant event. Median ΔTTR was significantly higher in patients who died (35.5% vs 20%; P = 0.002) or sustained clinically significant events (28% vs 20%; P = 0.022). Based on Cox regression analyses, the overall risk of mortality at 6 months for each decrease point in TTR was 1.02 (95% CI, 1.01-1.04; P = 0.003), and the risk of clinically significant events was 1.01 (95% CI, 1.00-1.03; P = 0.028). Patients with TTR <65% at 6 months had higher risk of mortality (hazard ratio = 2.96; 95% CI, 1.51-5.81; P = 0.002) and clinically significant events (hazard ratio = 1.71; 95% CI, 1.01-2.88; P = 0.046). Our findings suggest that in AF patients anticoagulated with VKAs, a change in TTR over 6 months (ie, ΔTTR) is an independent risk factor for mortality and clinically significant events.
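TTR in studies of this kind is conventionally computed with the Rosendaal linear-interpolation method, which assigns every day between consecutive INR measurements an interpolated INR value. A sketch with made-up visit data follows; the abstract does not state its exact computation, so treat this as the standard method, not necessarily the paper's.

```python
def rosendaal_ttr(days, inr, low=2.0, high=3.0):
    """Percent time in therapeutic range by Rosendaal linear interpolation
    between consecutive INR measurements."""
    in_range = 0.0
    total = days[-1] - days[0]
    for (d0, v0), (d1, v1) in zip(zip(days, inr), zip(days[1:], inr[1:])):
        span = d1 - d0
        if v1 == v0:
            in_range += span if low <= v0 <= high else 0.0
            continue
        # fraction of the segment during which the interpolated INR is in range
        a, b = sorted(((low - v0) / (v1 - v0), (high - v0) / (v1 - v0)))
        frac = max(0.0, min(1.0, b) - max(0.0, a))
        in_range += frac * span
    return 100.0 * in_range / total

days = [0, 14, 28, 42]           # visit days (hypothetical)
inr = [1.5, 2.5, 3.5, 2.5]       # INR at each visit (hypothetical)
ttr = rosendaal_ttr(days, inr)
```

ΔTTR in the study is then simply the baseline TTR minus the TTR recomputed over the following 6 months.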

  2. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated using historical data for the domestic economy and for the foreign economy, represented by the countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models; the equal-weight scheme serves as a simple benchmark. The results show that optimally combined densities are comparable to the best individual models.
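The predictive-likelihood weighting scheme reduces to exponentiating each model's cumulative log predictive likelihood and normalising. The scores below are hypothetical; the log-sum-exp shift is just for numerical stability.

```python
import numpy as np

def predictive_likelihood_weights(log_pred_lik):
    """Combination weights proportional to exp(cumulative log predictive
    likelihood) for each model over the evaluation window."""
    s = np.asarray(log_pred_lik, dtype=float)
    w = np.exp(s - s.max())          # subtract max for numerical stability
    return w / w.sum()

# cumulative log predictive likelihoods of four models (hypothetical values)
scores = [-310.2, -308.7, -315.9, -309.1]
w = predictive_likelihood_weights(scores)
# combined density forecast at a point x would be: sum_i w[i] * p_i(x)
```

The same normalisation pattern works for the inverse-MSE and rank-based schemes; only the raw score changes.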

  3. A maximum likelihood framework for protein design

    Directory of Open Access Journals (Sweden)

    Philippe Hervé

    2006-06-01

    Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces

  4. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
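The core loop of such a parameter identifier (propose parameters, simulate the nonlinear system, evaluate the likelihood of the noisy measurements, maximize) can be sketched in a few lines. The cubic-decay system, noise level and optimizer below are illustrative stand-ins, not MXLKID's algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(a, x0=1.0, dt=0.05, n=80):
    """Forward-Euler response of the hypothetical system dx/dt = -a*x**3."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = x[k] - a * x[k] ** 3 * dt
    return x

rng = np.random.default_rng(3)
true_a, sigma = 2.0, 0.02
y = simulate(true_a) + rng.normal(0.0, sigma, 80)   # noisy measurements

def neg_log_likelihood(a):
    # Gaussian measurement noise => the NLL is, up to constants, a scaled SSE
    r = y - simulate(a)
    return 0.5 * np.sum(r ** 2) / sigma ** 2

res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 10.0), method="bounded")
a_hat = res.x
```

With Gaussian noise of known variance, maximizing the likelihood coincides with nonlinear least squares on the simulated response, which is why this structure is so common in identification codes.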

  5. The Changing Faces of Corruption in Georgian Higher Education: Access through Times and Tests

    Science.gov (United States)

    Orkodashvili, Mariam

    2012-01-01

    This article presents a comparative-historical analysis of access to higher education in Georgia. It describes the workings of corrupt channels during the Soviet and early post-Soviet periods and the role of standardized tests in fighting corruption in higher education admission processes after introduction of the Unified National Entrance…

  6. Flourishing for the Common Good: Positive Leadership in Christian Higher Education during Times of Change

    Science.gov (United States)

    Dahlvig, Jolyn E.

    2018-01-01

    This paper argues that higher education should exist for society's common good, a focus that has been lost in recent years (Dorn, 2011; Eagan et al., 2015; Ford 2016; Habley, Bloom & Robbins, 2012). To explore how Christian higher education can provide leadership in returning to a focus on the common good, this paper traces the movement of…

  7. Essays on empirical likelihood in economics

    NARCIS (Netherlands)

    Gao, Z.

    2012-01-01

    This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.

  8. Real-Time Systems: Reflections on higher education in the Czech Republic, Hungary, Poland and Slovenia

    NARCIS (Netherlands)

    File, Jonathan M.; Goedegebuure, Leo; Goedegebuure, L.C.J.

    2003-01-01

    Real-time systems (An ICT definition) In real-time multiprocessing there is the extra requirement that the system complete its response to any input within a certain critical time. This poses additional problems, particularly in situations where the system is heavily loaded and is subject to many

  9. Classical and quantum-mechanical axioms with the higher time derivative formalism

    International Nuclear Information System (INIS)

    Kamalov, Timur

    2013-01-01

A Newtonian mechanics model is essentially the model of a point body in an inertial reference frame. How does one describe extended bodies in non-inertial (vibrating) reference frames with random initial conditions? One of the most general descriptions (known as the higher derivatives formalism) consists in taking into account an infinite number of higher temporal derivatives of the coordinates in the Lagrange function. Such a formalism, describing physical objects in an infinite-dimensional space, does not contradict quantum mechanics with its infinite-dimensional Hilbert space.

  10. Transformative, transgressive social learning: rethinking higher education pedagogy in times of systemic global dysfunction

    NARCIS (Netherlands)

    Lotz-Sisitka, Heila; Wals, A.E.J.; Kronlid, David; McGarry, Dylan

    2015-01-01

    The nature of the sustainability challenges currently at hand is such that dominant pedagogies and forms of learning that characterize higher education need to be reconsidered to enable students and staff to deal with accelerating change, increasing complexity, contested knowledge claims and

  11. Blurring Time and Place in Higher Education with Bring Your Own Device Applications: A Literature Review

    Science.gov (United States)

    Sundgren, Marcus

    2017-01-01

    The use of mobile devices is increasing rapidly in society, and student device ownership is becoming more or less ubiquitous in many parts of the world. This might be an under-utilised resource that could benefit the educational practices of institutions of higher education. This review examines 91 journal articles from 28 countries published in…

  12. The United Nations, Peace, and Higher Education: Pedagogic Interventions in Neoliberal Times

    Science.gov (United States)

    Kester, Kevin

    2017-01-01

    Peace and conflict studies (PACS) education in recent decades has become a popular approach to social justice learning in higher education institutions (Harris, Fisk, and Rank 1998; Smith 2007; Carstarphen et al. 2010; Bajaj and Hantzopoulos 2016) and has been provided legitimacy through a number of different United Nations (UN) declarations…

  13. Breadth vs. Depth: The Timing of Specialization in Higher Education. NBER Working Paper No. 15943

    Science.gov (United States)

    Malamud, Ofer

    2010-01-01

    This paper examines the tradeoff between early and late specialization in the context of higher education. While some educational systems require students to specialize early by choosing a major field of study prior to entering university, others allow students to postpone this choice. I develop a model in which individuals, by taking courses in…

  14. Times, Measures and the Man: the Future of British Higher Education Treated Historically and Comparatively

    NARCIS (Netherlands)

    Neave, Guy

    2006-01-01

    This article is a tribute to the life work of Maurice Kogan. Very little of higher education's landscape in the United Kingdom has remained unchanged over the past four decades and this article sets out to analyze the way the perception of the role of universities in society has changed in the

  15. On a higher order multi-term time-fractional partial differential equation involving Caputo-Fabrizio derivative

    OpenAIRE

    Pirnapasov, Sardor; Karimov, Erkinjon

    2017-01-01

In the present work we discuss a higher order multi-term partial differential equation (PDE) with the Caputo-Fabrizio fractional derivative in time. We investigate a boundary value problem for a fractional heat equation involving higher order Caputo-Fabrizio derivatives in the time variable. Using the method of separation of variables and integration by parts, we reduce the fractional-order PDE to an integer-order one. We represent an explicit solution of the formulated problem in a particular case by a Fourier series.
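For reference, the Caputo-Fabrizio derivative referred to here replaces the singular power-law kernel of the classical Caputo derivative with an exponential kernel. In its standard form for order 0 < α < 1 (with M(α) a normalisation function satisfying M(0) = M(1) = 1):

```latex
D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
  \int_0^t f'(s)\,\exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right) ds
```

The non-singular exponential kernel is what makes the integration-by-parts reduction to an integer-order equation possible.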

  16. Nearly Efficient Likelihood Ratio Tests for Seasonal Unit Roots

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

In an important generalization of zero frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e. that their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require the choice of a GLS detrending parameter, which our likelihood ratio tests do not.

  17. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our

  18. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    Science.gov (United States)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1. When the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method has the convergence rate O(k^{3-α}), 0 < α < 1. When the solution of the equation has low regularity, however, the numerical method fails to attain the convergence rate O(k^{3-α}), 0 < α < 1. In this paper, we introduce a scheme for approximating the Caputo fractional derivative based on quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for any t > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
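The paper's quadratic-interpolation scheme attains O(k^{3-α}); as a simpler baseline, the widely used L1 approximation (linear interpolation, order 2-α) of the Caputo derivative can be sketched as follows. The check against the exact result D^α t = t^(1-α)/Γ(2-α) works because the L1 scheme is exact on linear functions; this is the baseline scheme, not the paper's.

```python
import numpy as np
from math import gamma

def caputo_l1(u, k, alpha):
    """L1 approximation of the Caputo derivative D^alpha u at the grid
    points t_n = n*k, for 0 < alpha < 1 (order 2 - alpha in general)."""
    n = len(u)
    j = np.arange(n - 1)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)    # L1 weights b_j
    c = k ** (-alpha) / gamma(2 - alpha)
    d = np.zeros(n)
    for m in range(1, n):
        du = u[1:m + 1] - u[:m]          # forward differences u_{i+1} - u_i
        # sum_{j=0}^{m-1} b_j * (u_{m-j} - u_{m-j-1})
        d[m] = c * np.dot(b[:m], du[::-1])
    return d

alpha, k = 0.5, 1e-3
t = np.arange(0.0, 1.0 + k / 2, k)
num = caputo_l1(t, k, alpha)                      # u(t) = t
exact = t ** (1 - alpha) / gamma(2 - alpha)       # known closed form
err = np.abs(num[1:] - exact[1:]).max()
```

For nonsmooth solutions both this scheme and higher-order ones degrade near t = 0, which is precisely the difficulty the paper's Laplace-transform analysis addresses.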

  19. MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR

    NARCIS (Netherlands)

    SCHOUTEN, JC; TAKENS, F; VANDENBLEEK, CM

    In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the

  20. LIKELIHOOD ESTIMATION OF PARAMETERS USING SIMULTANEOUSLY MONITORED PROCESSES

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2004-01-01

The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure to obtain knowledge about the parameters of other not monitored but important response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.

  1. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

In this paper, register-based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived.

  2. Earlier school start times are associated with higher rates of behavioral problems in elementary schools.

    Science.gov (United States)

    Keller, Peggy S; Gilbert, Lauren R; Haak, Eric A; Bi, Shuang; Smith, Olivia A

    2017-04-01

    Early school start times may curtail children's sleep and inadvertently promote sleep restriction. The current study examines the potential implications of early school start times for behavioral problems in public elementary schools (student ages 5-12 years) in Kentucky. School start times were obtained from school Web sites or by calling school offices; behavioral and disciplinary problems, along with demographic information about schools, were obtained from the Kentucky Department of Education. Estimated associations controlled for teacher/student ratio, racial composition, school rank, enrollment, and Appalachian location. Associations between early school start time and greater behavioral problems (harassment, in-school removals, suspensions, and expulsions) were observed, although some of these associations were found only for schools serving the non-Appalachian region. Findings support the growing body of research showing that early school start times may contribute to student problems, and extend this research through a large-scale examination of elementary schools, behavioral outcomes, and potential moderators of risk. Copyright © 2017 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.

  3. Late-time tails of wave propagation in higher dimensional spacetimes

    International Nuclear Information System (INIS)

    Cardoso, Vitor; Yoshida, Shijun; Dias, Oscar J.C.; Lemos, Jose P.S.

    2003-01-01

    We study the late-time tails appearing in the propagation of massless fields (scalar, electromagnetic, and gravitational) in the vicinities of a D-dimensional Schwarzschild black hole. We find that at late times the fields always exhibit a power-law falloff, but the power law is highly sensitive to the dimensionality of the spacetime. Accordingly, for odd D>3 we find that the field behaves as t^{-(2l+D-2)} at late times, where l is the angular index determining the angular dependence of the field. This behavior is entirely due to D being odd; it does not depend on the presence of a black hole in the spacetime. Indeed this tail is already present in the flat space Green's function. On the other hand, for even D>4 the field decays as t^{-(2l+3D-8)}, and this time there is no contribution from the flat background. This power law is entirely due to the presence of the black hole. The D=4 case is special and exhibits, as is well known, t^{-(2l+3)} behavior. In the extra dimensional scenario for our Universe, our results are strictly correct if the extra dimensions are infinite, but also give a good description of the late-time behavior of any field if the large extra dimensions are large enough.

  4. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

    The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to construct a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
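
As a hedged sketch of why a likelihood-ratio rule predicts the mirror effect (this is a textbook equal-variance Gaussian signal-detection example, not the authors' LR-DDM; the means below are illustrative): with equal variances, log LR(x) = 0 exactly at the midpoint of the target and lure means, so strengthening targets moves the criterion up, raising hits and lowering false alarms simultaneously.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma=1.0):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def rates(mu_target, mu_lure=0.0):
    # Equal-variance Gaussians: log LR(x) crosses 0 at the midpoint of the
    # means, so the optimal criterion tracks target strength automatically.
    c = (mu_target + mu_lure) / 2.0
    hit = 1.0 - normal_cdf(c, mu_target)
    false_alarm = 1.0 - normal_cdf(c, mu_lure)
    return hit, false_alarm

hit_weak, fa_weak = rates(1.0)      # weakly encoded targets
hit_strong, fa_strong = rates(2.0)  # strongly encoded targets
# mirror effect: hits go up AND false alarms go down as strength increases
```

With these numbers the weak condition gives roughly (hit, FA) = (0.69, 0.31) and the strong condition (0.84, 0.16), the characteristic mirror pattern.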

  5. Approximate solution of space and time fractional higher order phase field equation

    Science.gov (United States)

    Shamseldeen, S.

    2018-03-01

    This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.

  6. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio for a total of h parameters, two of which are constrained and correlated.

  7. Riccion from higher-dimensional space-time with D-dimensional ...

    Indian Academy of Sciences (India)

    suggest that space-time above 3.05×10^16 GeV should be fractal. ... Here V_D is the volume of S^D, g^(4+D) is the determinant of the metric tensor g_MN (M ... means that above 3.05×10^16 GeV, S^D is not a smooth surface whereas M^4 is smooth.

  8. Widening Participation, Social Justice and Injustice: Part-Time Students in Higher Education in England

    Science.gov (United States)

    Callender, Claire

    2011-01-01

    This article critically assesses the nature and scope of current financial support for part-time undergraduates in England, highlighting its importance for widening participation. It considers the limitations of these financial arrangements, why they are in need of reform, and some of the consequences of their inadequacies. The paper argues that…

  9. Seven Steps to Heaven: Time and Tide in 21st Century Contemporary Music Higher Education

    Science.gov (United States)

    Mitchell, Annie K.

    2018-01-01

    Throughout the time of my teaching career, the tide has exposed changes in the nature of music, students and music education. This paper discusses teaching and learning in contemporary music at seven critical stages of 21st century music education: i) diverse types of undergraduate learners; ii) teaching traditional classical repertoire and skills…

  10. (Re-)designing higher education curricula in times of systemic dysfunction

    NARCIS (Netherlands)

    Tassone, Valentina C.; O’Mahony, Catherine; McKenna, Emma; Eppink, Hansje J.; Wals, Arjen E.J.

    2017-01-01

    There is an urgent need to address the grand sustainability challenges of our time, and to explore new and more responsible ways of operating, researching, and innovating that enable society to respond to these challenges. The emergent Responsible Research and Innovation (RRI) policy agenda can act

  11. Spanish Zimbardo Time Perspective Inventory Construction and Validity among Higher Education Students

    Science.gov (United States)

    Usart, Mireia; Romero, Margarida

    2014-01-01

    Introduction: The study of "Time Orientation" (TO) has focused on how to measure this construct and its effects on human behavior. Defined as a fundamental psychological variable, TO is multidimensional and sensitive to cultural differences and age. Despite its relation to learning, it deserves further study in the different Higher…

  12. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.

  13. Higher Prevalence of Left-Handedness in Twins? Not After Controlling Birth Time Confounders.

    Science.gov (United States)

    Heikkilä, Kauko; Vuoksimaa, Eero; Saari-Kemppainen, Aulikki; Kaprio, Jaakko; Rose, Richard J; Haukka, Jari; Pitkäniemi, Janne; Iivanainen, Matti

    2015-10-01

    Pregnancy- and birth-related factors may have an effect on handedness. Compared with singletons, twins have a lower birth weight, shorter gestational age, and are at higher risk for birth complications. We tested whether the prevalence of left-handedness is higher among twins than singletons, and if so, whether that difference is fully explained by pregnancy and birth-related differences between twins and singletons. We analyzed Finnish population-based datasets; included were 8,786 twins and 5,892 singletons with information on birth weight (n = 12,381), Apgar scores (n = 11,129), and gestational age (n = 11,811). Two twin cohorts were involved: FinnTwin12 included twins born during 1983-1987, and FinnTwin16 included twins born during 1974-1979. We had two comparison groups of singletons: 4,101 individuals born during 1986-1988 and enrolled in the Helsinki Ultrasound Trial, and 1,791 individuals who were partners of FinnTwin16 twins. We used logistic regression models with writing hand as the outcome for comparison and evaluating effects of covariates. Left-handedness was more common in twins (9.67%) than in singletons (8.27%; p = .004). However, Apgar scores were associated with handedness, and after controlling for covariates, we found no difference in the prevalence of left-handedness between twins and singletons. Increased left-handedness among twins, often reported by others, was evident in our data, but only among our older twin cohorts, and that association disappeared after removing effects of perinatal covariates.

  14. Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2017-02-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  15. Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  16. Reducing the likelihood of long tennis matches.

    Science.gov (United States)

    Barnett, Tristan; Brown, Alan; Pollard, Graham

    2006-01-01

    Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed, causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative after-effects from such a match, plays the winner of an average or shorter-length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key points: the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; a final tiebreaker set, as currently used in the US Open, reduces the length of matches; a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
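
The paper derives its results with generating functions; as a rough independent check, a Monte Carlo sketch (with a single assumed serve-hold probability, which is our simplification) shows how replacing the advantage set with a tiebreak set caps the tail of set lengths:

```python
import random
random.seed(1)

def set_length(p_hold=0.8, tiebreak=False, cap=200):
    """Number of games in one simulated set; each server holds with prob p_hold."""
    a = b = 0
    a_serving = True
    while a + b < cap:
        server_wins = random.random() < p_hold
        if server_wins == a_serving:
            a += 1
        else:
            b += 1
        a_serving = not a_serving
        if max(a, b) >= 6 and abs(a - b) >= 2:
            return a + b          # set won normally
        if tiebreak and a == 6 and b == 6:
            return a + b + 1      # tiebreak counted as one deciding game
    return a + b                  # advantage set truncated at the cap

n = 20000
p_long_adv = sum(set_length(tiebreak=False) > 20 for _ in range(n)) / n
p_long_tb  = sum(set_length(tiebreak=True)  > 20 for _ in range(n)) / n
# a tiebreak set has at most 13 games, so p_long_tb is exactly zero,
# while the advantage set retains a non-trivial tail of very long sets
```

The point-level structure of the 50-40 game would need a finer-grained simulation; this sketch only captures the set-format effect the paper quantifies exactly.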

  17. Higher Volume at Time of Breast Conserving Surgery Reduces Re-Excision in DCIS

    Directory of Open Access Journals (Sweden)

    J. H. Wolf

    2011-01-01

    Purpose. The purpose of this study was to compare the surgical and pathological variables which impact the rate of re-excision following breast conserving surgery (BCS) with or without concurrent additional margin excision (AM). Methods. The pathology database was queried for all patients with DCIS from January 2004 to September 2008. Pathologic assessment included volume of excision, subtype, size, distance from margin, grade, necrosis, multifocality, calcifications, and ER/PR status. Results. 405 cases were identified: 201 underwent BCS, 151 BCS-AM, and 53 mastectomy. Among the BCS patients, 190 underwent re-excision for close or involved margins; 129 of these were treated with BCS and 61 with BCS-AM (P<.0001). The incidence of residual DCIS in the re-excision specimens was 32% (n=65) for BCS and 22% (n=33) for BCS-AM (P<.05). For both the BCS and the BCS-AM cohorts, the volume of tissue excised is inversely correlated with the rate of re-excision (P=.0284). Multifocality (P=.0002) and ER status (P=.0382) were also significant predictors of the rate of re-excision, and variation in surgical technique was insignificant. Conclusions. The rate of positive margins, re-excision, and residual disease was significantly higher in patients with lower volume of excision. The performance of concurrent additional margin excision increases the efficacy of BCS for DCIS.

  18. Organizational Justice and Men's Likelihood to Sexually Harass: The Moderating Role of Sexism and Personality

    Science.gov (United States)

    Krings, Franciska; Facchin, Stephanie

    2009-01-01

    This study demonstrated relations between men's perceptions of organizational justice and increased sexual harassment proclivities. Respondents reported higher likelihood to sexually harass under conditions of low interactional justice, suggesting that sexual harassment likelihood may increase as a response to perceived injustice. Moreover, the…

  19. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.
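
As an illustrative sketch of the same model class (a one-way random effects model with known within-method variances; the data, search bounds, and unimodality assumption are ours, not the paper's Groebner-basis treatment), the ML estimate of the between-method variance can be found by maximizing the profile log-likelihood numerically:

```python
import math

def neg_profile_loglik(tau2, x, s2):
    """Negative Gaussian log-likelihood, profiled over the common mean mu.

    Model: x_i ~ N(mu, s2_i + tau2), with within-method variances s2_i known.
    """
    w = [1.0 / (si + tau2) for si in s2]
    mu = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)  # profiled-out mean
    return 0.5 * sum(math.log(si + tau2) + (xi - mu) ** 2 / (si + tau2)
                     for xi, si in zip(x, s2))

# toy inter-method data: method means and known within-method variances
x = [10.2, 9.6, 10.9, 10.4]
s2 = [0.04, 0.09, 0.01, 0.16]

# ternary search over tau^2 (assumes the profile likelihood is unimodal here)
lo, hi = 0.0, 5.0
for _ in range(80):
    m1 = lo + (hi - lo) / 3.0
    m2 = hi - (hi - lo) / 3.0
    if neg_profile_loglik(m1, x, s2) < neg_profile_loglik(m2, x, s2):
        hi = m2
    else:
        lo = m1
tau2_ml = 0.5 * (lo + hi)
```

The REML variant discussed in the record would adjust the criterion for the degrees of freedom lost in estimating mu; the maximization machinery is the same.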

  20. Monitoring crop leaf area index time variation from higher resolution remotely sensed data

    International Nuclear Information System (INIS)

    Jiao, Sihong

    2014-01-01

    The leaf area index (LAI) is significant for research on global climate change and the ecological environment. The China HJ-1 satellite has a revisit cycle of four days, providing CCD data (HJ-1 CCD) with a resolution of 30 m. However, HJ-1 CCD is incapable of obtaining observations at multiple angles, which is problematic because single-angle observations provide insufficient data for determining the LAI. This article proposes a new method for determining LAI from HJ-1 CCD data. The proposed method uses background knowledge of dynamic land surface processes extracted from MODerate resolution Imaging Spectroradiometer (MODIS) LAI 1-km resolution data. To handle the uncertainties that arise from using two data sources with different spatial resolutions, the proposed method is implemented in a dynamic Bayesian network scheme by integrating an LAI dynamic process model and a canopy reflectance model with the remotely sensed data. Validation results showed that the coefficient of determination between estimated and measured LAI was 0.791, and the RMSE was 0.61. This method can enhance the accuracy of the retrieval results while retaining the time-series variation characteristics of the vegetation LAI. The results suggest that this algorithm can be widely applied to determining high-resolution leaf area indices using data from the China HJ-1 satellite even when information from single-angle observations is insufficient for quantitative application.

  1. Determination of point of maximum likelihood in failure domain using genetic algorithms

    International Nuclear Information System (INIS)

    Obadage, A.S.; Harnpornchai, N.

    2006-01-01

    The point of maximum likelihood in a failure domain yields the highest value of the probability density function in the failure domain. The maximum-likelihood point thus represents the worst combination of random variables contributing to the failure event. In this work Genetic Algorithms (GAs) with an adaptive penalty scheme have been proposed as a tool for the determination of the maximum likelihood point. The use of only numerical values in the GA operations makes the algorithms applicable to cases of non-linear and implicit single and multiple limit state function(s). The algorithmic simplicity readily extends its application to higher dimensional problems. When combined with Monte Carlo simulation, the proposed methodology will reduce the computational complexity and at the same time enhance the feasibility of rare-event analysis under limited computational resources. Since no approximation is made in the procedure, the solution obtained is considered accurate. Consequently, GAs can be used as a tool for increasing the computational efficiency in element and system reliability analyses.
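
A minimal sketch of the idea (our own toy GA with a linear limit state in standard normal space, not the authors' algorithm or adaptive-penalty schedule): maximize the log-density subject to the failure constraint g(x) ≤ 0, with a penalty that grows over the generations. For the linear g below, the maximum-likelihood point is known analytically to be (1, 2), which lets the sketch be checked.

```python
import math
import random
random.seed(0)

def logpdf(x):
    """Log-density of independent standard normals (up to a constant)."""
    return -0.5 * sum(v * v for v in x)

def g(x):
    """Hypothetical limit state: failure domain is g(x) <= 0."""
    return 5.0 - x[0] - 2.0 * x[1]

def fitness(x, penalty):
    viol = max(0.0, g(x))            # how far x sits outside the failure domain
    return logpdf(x) - penalty * viol

def ga(dim=2, pop=60, gens=120, penalty=10.0):
    P = [[random.gauss(0.0, 2.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=lambda x: -fitness(x, penalty))[:pop // 2]
        P = elite[:]
        while len(P) < pop:
            a, b = random.sample(elite, 2)
            # arithmetic crossover plus Gaussian mutation
            P.append([(ai + bi) / 2.0 + random.gauss(0.0, 0.1)
                      for ai, bi in zip(a, b)])
        penalty *= 1.05              # adaptive penalty: tighten over time
    # final pick with a huge penalty so only (near-)feasible points survive
    return max(P, key=lambda x: fitness(x, 1e6))

x_star = ga()  # should approach the analytic maximum-likelihood point (1, 2)
```

For this g, minimizing |x|² on the failure boundary x1 + 2x2 = 5 gives (1, 2) exactly, so the GA result can be compared against it; in the implicit, non-linear cases the record targets, no such closed form exists and the GA's black-box evaluations are the point.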

  2. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisová, Katarina

    To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the density is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  3. Comparison of likelihood testing procedures for parallel systems with covariances

    International Nuclear Information System (INIS)

    Ayman Baklizi; Isa Daud; Noor Akma Ibrahim

    1998-01-01

    In this paper we investigate and compare the behavior of the likelihood ratio, the Rao and the Wald statistics for testing hypotheses on the parameters of the simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994); however, their relative performances in finite samples are generally unknown. A Monte Carlo experiment is conducted to simulate the sizes and the powers of these statistics for complete samples and in the presence of time censoring. Comparisons of the statistics are made according to the attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic appears to have the best performance in terms of the attainment of the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be more appropriate for the model under study, especially for small sample sizes.

  4. Cases in which ancestral maximum likelihood will be confusingly misleading.

    Science.gov (United States)

    Handelman, Tomer; Chor, Benny

    2017-05-07

    Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent has been open for a long time. Mossel et al. (2009) showed that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple four-taxon (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero, and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.

  5. Maintaining symmetry of simulated likelihood functions

    DEFF Research Database (Denmark)

    Andersen, Laura Mørch

    This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby improve precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today...

  6. Likelihood inference for unions of interacting discs

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, K.

    2010-01-01

    This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified with respect to a given reference Poisson model. We discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.

  7. Composite likelihood estimation of demographic parameters

    Directory of Open Access Journals (Sweden)

    Garrigan Daniel

    2009-11-01

    Background: Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequencing data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results: The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable
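
As a toy sketch of the composite-likelihood idea (our own example, unrelated to the paper's genomic data): with correlated observation pairs sharing a common mean, composite likelihood multiplies the two marginal N(mu, 1) likelihoods as if the margins were independent. Maximizing that product has a closed form, the grand mean, which remains a consistent (though not fully efficient) estimator despite ignoring the within-pair correlation.

```python
import math
import random
random.seed(7)

# simulate correlated pairs (z1, z2) sharing mean mu_true, each margin N(mu, 1)
mu_true, rho, n = 3.0, 0.8, 4000
pairs = []
for _ in range(n):
    shared = random.gauss(0.0, 1.0)  # common factor inducing the correlation
    z1 = mu_true + math.sqrt(rho) * shared + math.sqrt(1 - rho) * random.gauss(0.0, 1.0)
    z2 = mu_true + math.sqrt(rho) * shared + math.sqrt(1 - rho) * random.gauss(0.0, 1.0)
    pairs.append((z1, z2))

# composite (independence) likelihood: sum of marginal Gaussian log-likelihoods;
# its maximizer is simply the grand mean over all margins
mu_cl = sum(z1 + z2 for z1, z2 in pairs) / (2 * n)
```

The cost of ignoring the correlation shows up in the variance of the estimator (and in naive standard errors), not in its consistency, which is why sandwich-type corrections usually accompany composite-likelihood inference.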

  8. Physical constraints on the likelihood of life on exoplanets

    Science.gov (United States)

    Lingam, Manasvi; Loeb, Abraham

    2018-04-01

    One of the most fundamental questions in exoplanetology is to determine whether a given planet is habitable. We estimate the relative likelihood of a planet's propensity towards habitability by considering key physical characteristics such as the role of temperature on ecological and evolutionary processes, and atmospheric losses via hydrodynamic escape and stellar wind erosion. From our analysis, we demonstrate that Earth-sized exoplanets in the habitable zone around M-dwarfs seemingly display much lower prospects of being habitable relative to Earth, owing to the higher incident ultraviolet fluxes and closer distances to the host star. We illustrate our results by specifically computing the likelihood (of supporting life) for the recently discovered exoplanets, Proxima b and TRAPPIST-1e, which we find to be several orders of magnitude smaller than that of Earth.

  9. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  10. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Representing Single-Subject Multivariate Time-Series

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    1998-01-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM),

  11. Efficient Bit-to-Symbol Likelihood Mappings

    Science.gov (United States)

    Moision, Bruce E.; Nakashima, Michael A.

    2010-01-01

    This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8- percent reduction in overall area relative to the prior design.

  12. Likelihood-ratio-based biometric verification

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    2002-01-01

    This paper presents results on optimal similarity measures for biometric verification based on fixed-length feature vectors. First, we show that the verification of a single user is equivalent to the detection problem, which implies that for single-user verification the likelihood ratio is optimal.
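As a toy illustration of the likelihood-ratio criterion for fixed-length feature vectors, the decision rule can be sketched as follows (the one-dimensional Gaussian user and background models and the threshold are hypothetical stand-ins, not the paper's trained densities):

```python
import math

def gauss_pdf(x, mean, var):
    """Density of a 1-D Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def verify(feature, threshold=1.0):
    """Accept the claimed identity iff the likelihood ratio
    p(x | user) / p(x | background) exceeds the threshold."""
    lr = gauss_pdf(feature, 2.0, 0.5) / gauss_pdf(feature, 0.0, 4.0)
    return lr > threshold
```

Moving the threshold trades off false accepts against false rejects, which is why the likelihood ratio, rather than any particular distance measure, is the optimal similarity score.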

  14. Communicating likelihoods and probabilities in forecasts of volcanic eruptions

    Science.gov (United States)

    Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas

    2014-02-01

    The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear as to how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being of less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in

  15. Multiple Improvements of Multiple Imputation Likelihood Ratio Tests

    OpenAIRE

    Chan, Kin Wai; Meng, Xiao-Li

    2017-01-01

    Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...

  16. Phase transitions between lower and higher level management learning in times of crisis: an experimental study based on synergetics.

    Science.gov (United States)

    Liening, Andreas; Strunk, Guido; Mittelstadt, Ewald

    2013-10-01

    Much has been written about the differences between single- and double-loop learning, or more general between lower level and higher level learning. Especially in times of a fundamental crisis, a transition between lower and higher level learning would be an appropriate reaction to a challenge coming entirely out of the dark. However, so far there is no quantitative method to monitor such a transition. Therefore we introduce theory and methods of synergetics and present results from an experimental study based on the simulation of a crisis within a business simulation game. Hypothesized critical fluctuations - as a marker for so-called phase transitions - have been assessed with permutation entropy. Results show evidence for a phase transition during the crisis, which can be interpreted as a transition between lower and higher level learning.
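The permutation entropy used above as a marker of critical fluctuations can be computed with a short routine (the embedding order and the normalization to [0, 1] are common choices, not necessarily the authors'):

```python
import math
from collections import Counter

def permutation_entropy(series, order=3):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of length `order`, divided by its maximum log(order!)."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern = argsort of the window values
        patterns[tuple(sorted(range(order), key=lambda k: window[k]))] += 1
    total = sum(patterns.values())
    h = -sum((n / total) * math.log(n / total) for n in patterns.values())
    return h / math.log(math.factorial(order))
```

A monotone series yields entropy 0 (one ordinal pattern), while erratic, fluctuation-rich stretches push the value toward 1, which is what makes the measure usable as a phase-transition marker.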

  17. Analyzing multivariate survival data using composite likelihood and flexible parametric modeling of the hazard functions

    DEFF Research Database (Denmark)

    Nielsen, Jan; Parner, Erik

    2010-01-01

    In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied on two types of family studies using the gamma...

  18. A composite likelihood approach for spatially correlated survival data

    Science.gov (United States)

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450
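A minimal sketch of the pairwise construction, assuming the FGM copula density c(u, v) = 1 + θ(1 − 2u)(1 − 2v) and a hypothetical exponential-decay link between θ and pairwise distance (the authors model θ more generally as a function of geographic and demographic distances):

```python
import math

def fgm_density(u, v, theta):
    """FGM copula density: c(u, v) = 1 + theta*(1 - 2u)*(1 - 2v), |theta| <= 1."""
    return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def composite_loglik(uniforms, distances, theta0):
    """Pairwise composite log-likelihood over all pairs; the dependence
    parameter decays with pairwise distance (an illustrative link)."""
    total = 0.0
    n = len(uniforms)
    for i in range(n):
        for j in range(i + 1, n):
            theta = theta0 * math.exp(-distances[(i, j)])
            total += math.log(fgm_density(uniforms[i], uniforms[j], theta))
    return total
```

Here `uniforms` are the probability-integral-transformed marginal survival values; maximizing this sum over `theta0` gives a pairwise composite likelihood estimate of the dependence strength.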

  20. Factors Associated with Young Adults’ Pregnancy Likelihood

    Science.gov (United States)

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but who were not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), who attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), who were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and who were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  1. Review of Elaboration Likelihood Model of persuasion

    OpenAIRE

    藤原, 武弘; 神山, 貴弥

    1989-01-01

This article mainly introduces the Elaboration Likelihood Model (ELM), proposed by Petty & Cacioppo, a general theory of attitude change. The ELM postulates two routes to persuasion: the central and the peripheral route. Attitude change via the central route is viewed as resulting from a diligent consideration of the issue-relevant information presented. On the other hand, attitude change via the peripheral route is viewed as resulting from peripheral cues in the persuasion context. Secondly we compare these tw...

  2. Unbinned likelihood analysis of EGRET observations

    International Nuclear Information System (INIS)

    Digel, Seth W.

    2000-01-01

We present a newly-developed likelihood analysis method for EGRET data that defines the likelihood function without binning the photon data or averaging the instrumental response functions. The standard likelihood analysis applied to EGRET data requires the photons to be binned spatially and in energy, and the point-spread functions to be averaged over energy and inclination angle. The full-width at half-maximum of the point-spread function increases by about 40% from on-axis to 30° inclination, and depending on the binning in energy can vary by more than that in a single energy bin. The new unbinned method avoids the loss of information that binning and averaging cause and can properly analyze regions where EGRET viewing periods overlap and photons with different inclination angles would otherwise be combined in the same bin. In the poster, we describe the unbinned analysis method and compare its sensitivity with binned analysis for detecting point sources in EGRET data.
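The contrast can be illustrated in one dimension: an unbinned Poisson-process log-likelihood evaluates the model rate at each photon position and subtracts the expected total counts, with no loss of positional information to bins. The source-plus-background rate model below is a toy stand-in, not the EGRET response:

```python
import math

def rate(x, src_pos=0.0, src_amp=5.0, width=0.5, bkg=1.0):
    """Toy photon rate on a 1-D 'sky': Gaussian source plus flat background."""
    return bkg + src_amp * math.exp(-(x - src_pos) ** 2 / (2 * width ** 2))

def unbinned_loglik(photons, lo=-5.0, hi=5.0, n_grid=2000):
    """log L = sum_i log(rate(x_i)) - integral of rate over [lo, hi]."""
    log_l = sum(math.log(rate(x)) for x in photons)
    h = (hi - lo) / n_grid
    # trapezoidal rule for the expected number of photons
    integral = h * (sum(rate(lo + i * h) for i in range(n_grid + 1))
                    - 0.5 * (rate(lo) + rate(hi)))
    return log_l - integral
```

A photon falling on the source peak raises the log-likelihood more than one in the background region, which is the information a coarse bin would average away.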

  3. A High-Order, Linear Time-Invariant Model for Application to Higher Harmonic Control and Flight Control System Interaction

    Science.gov (United States)

    Cheng, Rendy P.; Tischler, Mark B.; Celi, Roberto

    2006-01-01

    This research describes a new methodology for the extraction of a high-order, linear time invariant model, which allows the periodicity of the helicopter response to be accurately captured. This model provides the needed level of dynamic fidelity to permit an analysis and optimization of the AFCS and HHC algorithms. The key results of this study indicate that the closed-loop HHC system has little influence on the AFCS or on the vehicle handling qualities, which indicates that the AFCS does not need modification to work with the HHC system. However, the results show that the vibration response to maneuvers must be considered during the HHC design process, and this leads to much higher required HHC loop crossover frequencies. This research also demonstrates that the transient vibration responses during maneuvers can be reduced by optimizing the closed-loop higher harmonic control algorithm using conventional control system analyses.

  4. Likelihood functions for the analysis of single-molecule binned photon sequences

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V., E-mail: irinag@niddk.nih.gov [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD 20892 (United States)

    2012-03-02

Graphical abstract: Folding of a protein with attached fluorescent dyes, the underlying conformational trajectory of interest, and the observed binned photon trajectory. Highlights: ► A sequence of photon counts can be analyzed using a likelihood function. ► The exact likelihood function for a two-state kinetic model is provided. ► Several approximations are considered for an arbitrary kinetic model. ► Improved likelihood functions are obtained to treat sequences of FRET efficiencies. - Abstract: We consider the analysis of a class of experiments in which the number of photons in consecutive time intervals is recorded. Sequences of photon counts or, alternatively, of FRET efficiencies can be studied using likelihood-based methods. For a kinetic model of the conformational dynamics and state-dependent Poisson photon statistics, the formalism to calculate the exact likelihood that this model describes such sequences of photons or FRET efficiencies is developed. Explicit analytic expressions for the likelihood function for a two-state kinetic model are provided. The important special case when conformational dynamics are so slow that at most a single transition occurs in a time bin is considered. By making a series of approximations, we eventually recover the likelihood function used in hidden Markov models. In this way, not only is insight gained into the range of validity of this procedure, but also an improved likelihood function can be obtained.
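For intuition, the likelihood for a two-state model with state-dependent Poisson photon statistics can be sketched as a forward recursion over the bins (the rates and switching probability below are illustrative, and a fixed per-bin transition matrix stands in for the continuous-time generator used in the paper's exact treatment):

```python
import math

def poisson(k, lam):
    """Poisson probability of observing k photons at mean rate lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def likelihood(counts, rates=(2.0, 8.0), p_switch=0.05):
    """Forward recursion over a two-state Markov chain with
    state-dependent Poisson photon counts per bin."""
    T = [[1 - p_switch, p_switch],
         [p_switch, 1 - p_switch]]      # per-bin transition matrix
    alpha = [0.5, 0.5]                  # uniform initial state distribution
    for c in counts:
        # emission: weight each state by its Poisson count probability
        alpha = [alpha[s] * poisson(c, rates[s]) for s in (0, 1)]
        # propagate the weighted distribution to the next bin
        alpha = [alpha[0] * T[0][t] + alpha[1] * T[1][t] for t in (0, 1)]
    return alpha[0] + alpha[1]
```

Summing over hidden state paths in this way is exactly the hidden-Markov-model likelihood that the abstract describes recovering as a limiting approximation.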

  5. Moment Conditions Selection Based on Adaptive Penalized Empirical Likelihood

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

Full Text Available Empirical likelihood is a very popular method and has been widely used in the fields of artificial intelligence (AI) and data mining, as tablets, mobile applications, and social media dominate the technology landscape. This paper proposes an empirical likelihood shrinkage method to efficiently estimate unknown parameters and select correct moment conditions simultaneously when the model is defined by moment restrictions, some of which are possibly misspecified. We show that our method enjoys oracle-like properties; that is, it consistently selects the correct moment conditions and at the same time its estimator is as efficient as the empirical likelihood estimator obtained using all correct moment conditions. Moreover, unlike the GMM, our proposed method allows us to construct confidence regions for the parameters included in the model without estimating the covariances of the estimators. For empirical implementation, we provide data-driven procedures for selecting the tuning parameter of the penalty function. The simulation results show that the method works remarkably well in terms of correct moment selection and the finite-sample properties of the estimators. Also, a real-life example is carried out to illustrate the new methodology.

  6. Time-resolved photoemission micro-spectrometer using higher-order harmonics of Ti:sapphire laser

    International Nuclear Information System (INIS)

    Azuma, J.; Kamada, M.; Kondo, Y.

    2004-01-01

Full text: A new photoemission spectrometer is under construction for photoemission microscopy and time-resolved pump-probe experiments. The higher-order harmonics of a Ti:sapphire laser are used as the VUV light source in this system. Because the fundamental laser is focused tightly into a rare-gas jet to generate the higher-order harmonics, the spot size of the laser, and hence of the VUV light source, is smaller than a few tens of micrometers. This small spot size is advantageous for microscopy. To compensate for the low flux of the laser harmonics, a multilayer-coated Schwarzschild optic was designed; the multilayers also act as a monochromatic filter. Ray-tracing calculations show the spatial resolution of this Schwarzschild system to be less than 1 micrometer. The main chamber of the system is equipped with a time-of-flight energy analyzer to improve the efficiency of electron detection. The main chamber and the gas chamber are separated by a differential pumping chamber and a thin Al foil. The system is designed for the study of clean surfaces. It will be capable of performing sub-micron photoemission microscopy and femtosecond pump-probe photoemission studies of various photo-excited dynamics on clean surfaces.

  7. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. 
The sampling of systematic errors could also
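The two routes to the likelihood can be contrasted on a toy case of two experimental points sharing a single systematic error (all numbers illustrative): conditioning on the sampled systematic error makes the points independent, and averaging over draws recovers the multivariate Gaussian obtained by matrix inversion.

```python
import math
import random

def likelihood_matrix(resid, sig_r, sig_s):
    """Conventional route: bivariate Gaussian with covariance
    C = sig_r^2 * I + sig_s^2 * J, inverted analytically (2x2 case)."""
    c11 = sig_r ** 2 + sig_s ** 2
    c12 = sig_s ** 2
    det = c11 * c11 - c12 * c12
    q = (c11 * (resid[0] ** 2 + resid[1] ** 2)
         - 2 * c12 * resid[0] * resid[1]) / det
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det))

def likelihood_sampled(resid, sig_r, sig_s, n=200000, seed=1):
    """Sampling route: draw the shared systematic error and average the
    conditionally independent Gaussian likelihoods of the two points."""
    rng = random.Random(seed)
    norm = 1.0 / (2 * math.pi * sig_r ** 2)
    acc = 0.0
    for _ in range(n):
        s = rng.gauss(0.0, sig_s)
        acc += norm * math.exp(-((resid[0] - s) ** 2 + (resid[1] - s) ** 2)
                               / (2 * sig_r ** 2))
    return acc / n
```

The slow convergence discussed in the abstract shows up here directly: the sampled estimate approaches the matrix-inversion value only as the number of systematic-error draws grows large.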

  8. Dimension-Independent Likelihood-Informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef

    2015-01-01

Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior [distributions], we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.

  9. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
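A crude approximation to such an estimator is harmonic summation across channels: under white noise with equal channel weights, choosing the candidate fundamental that maximizes the summed periodogram energy at its harmonics approximates the maximum likelihood estimate (the paper's estimator additionally handles per-channel amplitudes, phases, and noise characteristics properly):

```python
import cmath
import math

def periodogram(signal, freq, fs):
    """Periodogram value of `signal` at frequency `freq` (Hz), sample rate fs."""
    n = len(signal)
    z = sum(signal[t] * cmath.exp(-2j * math.pi * freq * t / fs)
            for t in range(n))
    return abs(z) ** 2 / n

def estimate_pitch(channels, fs, candidates, n_harm=3):
    """Pick the candidate f0 that maximizes the harmonic energy
    summed over all channels."""
    def cost(f0):
        return sum(periodogram(ch, h * f0, fs)
                   for ch in channels for h in range(1, n_harm + 1))
    return max(candidates, key=cost)
```

Because each channel contributes its own energy term, a channel with a poor signal-to-noise ratio simply adds little to the cost rather than corrupting the estimate.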

  11. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  12. Introducing conjoint analysis method into delayed lotteries studies: its validity and time stability are higher than in adjusting.

    Science.gov (United States)

    Białek, Michał; Markiewicz, Łukasz; Sawicki, Przemysław

    2015-01-01

Delayed lotteries are much more common in everyday life than are pure lotteries. Usually, we need to wait to find out the outcome of a risky decision (e.g., investing in a stock market, engaging in a relationship). However, most research has studied time discounting and probability discounting in isolation, using methodologies designed specifically to track changes in one parameter. The most commonly used method is adjusting, but its reported validity and time stability in research on discounting are suboptimal. The goal of this study was to introduce a novel method for analyzing delayed lotteries, conjoint analysis, which is hypothetically more suitable for analyzing individual preferences in this area. A set of two studies compared conjoint analysis with adjusting. The results suggest that individual parameters of discounting strength estimated with conjoint have higher predictive value (Study 1 and 2) and are more stable over time (Study 2) compared to adjusting. Despite the exploratory character of the reported studies, we discuss these findings and suggest that future research on delayed lotteries should be cross-validated using both methods.

  14. Risk factors and likelihood of Campylobacter colonization in broiler flocks

    Directory of Open Access Journals (Sweden)

    SL Kuana

    2007-09-01

Full Text Available Campylobacter was investigated in cecal droppings, feces, and cloacal swabs of 22 flocks of 3- to 5-week-old broilers. Risk factors and the likelihood of the presence of this agent in these flocks were determined. Management practices, such as cleaning and disinfection, feeding, drinkers, and litter treatments, were assessed. Results were evaluated using the Odds Ratio (OR) test, and their significance was tested by Fisher's test (p<0.05). A Campylobacter prevalence of 81.8% was found in the broiler flocks (18/22), and within positive flocks, it varied between 85 and 100%. Campylobacter incidence among sample types was homogeneous, being 81.8% in cecal droppings, 80.9% in feces, and 80.4% in cloacal swabs (230). Flocks fed by automatic feeding systems presented a higher incidence of Campylobacter compared to those fed by tube feeders. Litter was reused on 63.6% of the farms, and, despite the lack of statistical significance, there was a higher likelihood of Campylobacter incidence when litter was reused. Foot baths were not used in 45.5% of the flocks, whereas the use of a foot bath combined with deficient lime management increased the number of positive flocks, although with no statistical significance. The evaluated parameters were not significantly associated with Campylobacter colonization in the assessed broiler flocks.

  15. Singularity Structure Analysis of the Higher-Dimensional Time-Gated Manakov System: Periodic Excitations and Elastic Scattering

    International Nuclear Information System (INIS)

    Kuetche, Victor Kamgang; Bouetou, Thomas Bouetou; Kofane, Timoleon Crepin

    2010-12-01

We investigate the singularity structure analysis of the higher-dimensional time-gated Manakov system, referring to the (2+1)-dimensional coupled nonlinear Schrödinger (CNLS) equations, and we show that these equations are Painlevé-integrable. By means of the methodology of Weiss et al., we show the arbitrariness of the expansion coefficients and the consistency of the truncation corresponding to a special Bäcklund transformation (BT) of these CNLS equations. In the wake of such a transformation, following Hirota's formalism, we derive a one-soliton solution. Besides, by using the Zakharov-Shabat (ZS) scheme, which provides a general Lax representation of an evolution system, we show that the (2+1)-dimensional CNLS system under interest is completely integrable. Furthermore, using the arbitrariness of the above coefficients, we unearth and investigate a typical spectrum of periodic coherent structures while depicting elastic interactions amongst such patterns. (author)

  16. Computer-mediated communication and time pressure induce higher cardiovascular responses in the preparatory and execution phases of cooperative tasks.

    Science.gov (United States)

    Costa Ferrer, Raquel; Serrano Rosa, Miguel Ángel; Zornoza Abad, Ana; Salvador Fernández-Montejo, Alicia

    2010-11-01

The cardiovascular (CV) response to social challenge and stress is associated with the etiology of cardiovascular diseases. New ways of communication, time pressure and different types of information are common in our society. In this study, the cardiovascular response to two different tasks (open vs. closed information) was examined employing different communication channels (computer-mediated vs. face-to-face) and with different pace control (self vs. external). Our results indicate that there was a higher CV response in the computer-mediated condition, on the closed information task and in the externally paced condition. The role of these factors should be considered when studying the consequences of social stress and their underlying mechanisms.

  17. Regional 18F-Fluorodeoxyglucose Hypometabolism is Associated with Higher Apathy Scores Over Time in Early Alzheimer Disease.

    Science.gov (United States)

    Gatchel, Jennifer R; Donovan, Nancy J; Locascio, Joseph J; Becker, J Alex; Rentz, Dorene M; Sperling, Reisa A; Johnson, Keith A; Marshall, Gad A

    2017-07-01

Apathy is among the earliest and most pervasive neuropsychiatric symptoms in prodromal and mild Alzheimer disease (AD) dementia that correlates with functional impairment and disease progression. We investigated the association of apathy with regional 18F-fluorodeoxyglucose (FDG) metabolism in cognitively normal, mild cognitive impairment, and AD dementia subjects from the Alzheimer's Disease Neuroimaging Initiative database. Design: cross-sectional and longitudinal studies. Setting: 57 North American research sites. Participants: 402 community-dwelling elders. Apathy was assessed using the Neuropsychiatric Inventory Questionnaire. Baseline FDG metabolism in five regions implicated in the neurobiology of apathy and AD was investigated in relationship to apathy at baseline (cross-sectional general linear model) and longitudinally (mixed random/fixed effect model). Covariates included age, sex, diagnosis, apolipoprotein E genotype, premorbid intelligence, cognition, and antidepressant use. Cross-sectional analysis revealed that posterior cingulate hypometabolism, diagnosis, male sex, and antidepressant use were associated with higher apathy scores. Longitudinal analysis revealed that the interaction of supramarginal hypometabolism and time, posterior cingulate hypometabolism, and antidepressant use were associated with higher apathy scores across time; only supramarginal hypometabolism was positively related to the rate of increase of apathy. Results support an association of apathy with hypometabolism in parietal regions commonly affected in early stages of AD, rather than medial frontal regions implicated in the neurobiology of apathy in later stages. Further work is needed to substantiate whether this localization is specific to apathy rather than to disease stage, and to investigate the potential role of AD proteinopathies in the pathogenesis of apathy. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purpose, a randomly censored real data set is discussed.
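    The closed-form maximum likelihood estimator under random right censoring can be sketched in a few lines (a minimal simulation, assuming geometric censoring times and the standard right-censoring likelihood; this is an illustration, not the article's exact Bayesian setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate randomly censored geometric lifetimes (assumed setup for
    # illustration): X ~ Geometric(p) on {1, 2, ...}, independent censoring
    # time C ~ Geometric(pc); we observe t = min(X, C), delta = 1{X <= C}.
    p_true, p_cens, n = 0.3, 0.05, 5000
    x = rng.geometric(p_true, size=n)
    c = rng.geometric(p_cens, size=n)
    t = np.minimum(x, c)
    delta = (x <= c).astype(int)

    # Uncensored points contribute log P(X = t) = log p + (t-1) log(1-p);
    # censored points contribute log P(X > t) = t log(1-p).  Setting the
    # score to zero gives a closed-form maximum likelihood estimator:
    d = delta.sum()                                  # number of failures
    s = ((t - 1) * delta + t * (1 - delta)).sum()    # total survival exposure
    p_hat = d / (d + s)
    print(p_hat)   # close to p_true = 0.3 for large n
    ```

    The observed Fisher information for this likelihood is d / (p̂²(1 − p̂)), which yields an approximate Wald confidence interval of the kind derived in the article.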

  19. Quantum Statistical Entropy of Non-extreme and Nearly Extreme Black Holes in Higher-Dimensional Space-Time

    Institute of Scientific and Technical Information of China (English)

    XU Dian-Yan

    2003-01-01

    The free energy and entropy of Reissner-Nordstrom black holes in higher-dimensional space-time are calculated by the quantum statistic method with a brick wall model. The space-time of the black holes is divided into three regions: region 1, (r > r0); region 2, (r0 > r > ri); and region 3, (ri > r > 0), where r0 is the radius of the outer event horizon, and ri is the radius of the inner event horizon. Detailed calculation shows that the entropy contributed by region 2 is zero, the entropy contributed by region 1 is positive and proportional to the outer event horizon area, and the entropy contributed by region 3 is negative and proportional to the inner event horizon area. The total entropy contributed by all three regions is positive and proportional to the area difference between the outer and inner event horizons. As ri approaches r0 in the nearly extreme case, the total quantum statistical entropy approaches zero.

  20. Energy-momentum conserving higher-order time integration of nonlinear dynamics of finite elastic fiber-reinforced continua

    Science.gov (United States)

    Erler, Norbert; Groß, Michael

    2015-05-01

    For many years the relevance of fibre-reinforced polymers has been steadily increasing in engineering, especially in the aircraft and automotive industries. Due to their high strength in the fibre direction combined with the possibility of lightweight construction, these composites are replacing more and more traditional materials such as metals. Fibre-reinforced polymers are often manufactured from glass or carbon fibres as attachment parts or from steel or nylon cord as force transmission parts. Attachment parts are mostly subjected to small strains, but force transmission parts usually suffer large deformations in at least one direction. Here, a geometrically nonlinear formulation is necessary. Typical examples are helicopter rotor blades, where the fibres have the function of stabilizing the structure in order to counteract large centrifugal forces. For long-run analyses of rotor blade deformations, we have to apply numerically stable time integrators for anisotropic materials. This paper presents higher-order accurate and numerically stable time stepping schemes for nonlinear elastic fibre-reinforced continua with anisotropic stress behaviour.

  1. Applying exclusion likelihoods from LHC searches to extended Higgs sectors

    International Nuclear Information System (INIS)

    Bechtle, Philip; Heinemeyer, Sven; Staal, Oscar; Stefaniak, Tim; Weiglein, Georg

    2015-01-01

    LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full 8 TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org. (orig.)

  2. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision

    Directory of Open Access Journals (Sweden)

    L. Balaji

    2015-01-01

    Full Text Available H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on various electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. The scaling dimensions include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method.

  3. H.264 SVC Complexity Reduction Based on Likelihood Mode Decision.

    Science.gov (United States)

    Balaji, L; Thyagharajan, K K

    2015-01-01

    H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on various electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. The scaling dimensions include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% detriment in PSNR and a 0.17% increment in bit rate compared with the full search method.

  4. A novel Fast Gas Chromatography based technique for higher time resolution measurements of speciated monoterpenes in air

    Science.gov (United States)

    Jones, C. E.; Kato, S.; Nakashima, Y.; Kajii, Y.

    2013-12-01

    Biogenic emissions supply the largest fraction of non-methane volatile organic compounds (VOC) from the biosphere to the atmospheric boundary layer, and typically comprise a complex mixture of reactive terpenes. Due to this chemical complexity, achieving comprehensive measurements of biogenic VOC (BVOC) in air within a satisfactory time resolution is analytically challenging. To address this, we have developed a novel, fully automated Fast Gas Chromatography (Fast-GC) based technique to provide higher time resolution monitoring of monoterpenes (and selected other C9-C15 terpenes) during plant emission studies and in ambient air. To our knowledge, this is the first study to apply a Fast-GC based separation technique to achieve quantification of terpenes in air. Three chromatography methods have been developed for atmospheric terpene analysis under different sampling scenarios. Each method facilitates chromatographic separation of selected BVOC within a significantly reduced analysis time compared to conventional GC methods, whilst maintaining the ability to quantify individual monoterpene structural isomers. Using this approach, the C10-C15 BVOC composition of single plant emissions may be characterised within a ~ 14 min analysis time. Moreover, in situ quantification of 12 monoterpenes in unpolluted ambient air may be achieved within an ~ 11 min chromatographic separation time (increasing to ~ 19 min when simultaneous quantification of multiple oxygenated C9-C10 terpenoids is required, and/or when concentrations of anthropogenic VOC are significant). This corresponds to a two- to fivefold increase in measurement frequency compared to conventional GC methods. Here we outline the technical details and analytical capability of this chromatographic approach, and present the first in situ Fast-GC observations of 6 monoterpenes and the oxygenated BVOC linalool in ambient air. 
During this field deployment within a suburban forest ~ 30 km west of central Tokyo, Japan, the

  5. Does better access to FPs decrease the likelihood of emergency department use?

    Science.gov (United States)

    Mian, Oxana; Pong, Raymond

    2012-01-01

    Abstract Objective To determine whether better access to FP services decreases the likelihood of emergency department (ED) use among the Ontario population. Design Population-based telephone survey. Setting Ontario. Participants A total of 8502 Ontario residents aged 16 years and older. Main outcome measures Emergency department use in the 12 months before the survey. Results Among the general population, having a regular FP was associated with having better access to FPs for immediate care (P FPs for immediate care at least once a year; 63.1% of them had seen FPs without difficulties and were significantly less likely to use EDs than those who did not see FPs or had difficulties accessing physicians when needed (OR = 0.62, P FPs (P FPs for immediate care among the general population. Further research is needed to understand what accounts for a higher likelihood of ED use among those with regular FPs, new immigrants, residents of northern and rural areas of Ontario, and people with low socioeconomic status when actual access and sociodemographic characteristics have been taken into consideration. More important, this study demonstrates a need of distinguishing between potential and actual access to care, as having a regular FP and having timely and effective access to FP care might mean different things and have different effects on ED use. PMID:23152473

  6. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  7. Subtracting and Fitting Histograms using Profile Likelihood

    CERN Document Server

    D'Almeida, F M L

    2008-01-01

    It is known that many interesting signals expected at LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated by Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% Confidence Level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show a good performance and avoid the problem of negative values when subtracting histograms.
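    For a single bin, the profile likelihood construction can be sketched as follows (a hedged illustration of the standard Poisson-counts-with-Monte-Carlo-background model, not the paper's code; the example numbers are invented):

    ```python
    import numpy as np

    def profile_loglik(s, n, m, tau):
        """Log-likelihood of signal s with the background profiled out.

        Assumed single-bin model (standard in this setting):
          n ~ Poisson(s + b)      observed counts in the bin
          m ~ Poisson(tau * b)    Monte Carlo background counts,
                                  tau = MC/data luminosity ratio.
        For fixed s, the maximizing b solves a quadratic equation.
        """
        B = (1 + tau) * s - n - m
        b = (-B + np.sqrt(B * B + 4 * (1 + tau) * m * s)) / (2 * (1 + tau))
        b = max(b, 1e-12)  # guard against log(0) at the boundary
        return n * np.log(s + b) - (s + b) + m * np.log(tau * b) - tau * b

    # Example bin: 25 observed events and 100 MC background events at
    # tau = 10, i.e. an estimated background of 10 events.
    n, m, tau = 25, 100, 10.0
    grid = np.linspace(1e-6, 40, 4001)
    ll = np.array([profile_loglik(s, n, m, tau) for s in grid])
    s_hat = grid[ll.argmax()]                   # ~ n - m/tau = 15
    inside = grid[2 * (ll.max() - ll) <= 1.0]   # approximate 68.3% CL interval
    print(s_hat, inside.min(), inside.max())
    ```

    The interval uses the usual chi-square approximation for the profile likelihood ratio, 2Δℓ ≤ 1; by construction the estimator cannot go negative, which is the pathology that naive histogram subtraction suffers from.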

  8. Assessments of higher-order ionospheric effects on GPS coordinate time series: A case study of CMONOC with longer time series

    Science.gov (United States)

    Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang

    2014-05-01

    Higher-order ionospheric (HOI) corrections are proposed to become a standard part of precise GPS data analysis. In this study, we investigate the impacts of HOI corrections on coordinate time series by re-processing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in our three processing runs: (a) run NO, without HOI corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; (c) run ID, the same as IG but with a dipole magnetic model applied. Both spectral analysis and noise analysis are adopted to investigate these effects. Results show that for CMONOC stations, HOI corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing a more remarkable variation. Low-latitude sites are more affected, and the impacts vary across coordinate components. The results of an analysis of stacked periodograms show a good match between the seasonal amplitudes and the HOI corrections, and the observed variations in the coordinate time series are related to HOI effects. HOI delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes for all components decrease for over one-half of the selected CMONOC sites. Additionally, the semi-annual amplitudes for the sites are much more strongly affected by the corrections. However, when the dipole model is used, the results are not as good as with the IGRF model: HOI delays modeled with the dipole field increase the noise amplitudes and can generate false periodic signals, introducing larger residuals and noise rather than effective improvements.

  9. Seven novel probe systems for real-time PCR provide absolute single-base discrimination, higher signaling, and generic components.

    Science.gov (United States)

    Murray, James L; Hu, Peixu; Shafer, David A

    2014-11-01

    We have developed novel probe systems for real-time PCR that provide higher specificity, greater sensitivity, and lower cost relative to dual-labeled probes. The seven DNA Detection Switch (DDS)-probe systems reported here employ two interacting polynucleotide components: a fluorescently labeled probe and a quencher antiprobe. High-fidelity detection is achieved with three DDS designs: two internal probes (internal DDS and Flip probes) and a primer probe (ZIPR probe), wherein each probe is combined with a carefully engineered, slightly mismatched, error-checking antiprobe. The antiprobe blocks off-target detection over a wide range of temperatures and facilitates multiplexing. Other designs (Universal probe, Half-Universal probe, and MacMan probe) use generic components that enable low-cost detection. Finally, single-molecule G-Force probes employ guanine-mediated fluorescent quenching by forming a hairpin between adjacent C-rich and G-rich sequences. Examples provided show how these probe technologies discriminate drug-resistant Mycobacterium tuberculosis mutants, Escherichia coli O157:H7, oncogenic EGFR deletion mutations, hepatitis B virus, influenza A/B strains, and single-nucleotide polymorphisms in the human VKORC1 gene. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  10. Parental family variables and likelihood of divorce.

    Science.gov (United States)

    Skalkidou, A

    2000-01-01

    It has long been established that divorced men and women have substantially higher standardized general mortality than same gender persons. Because the incidence of divorce is increasing in many countries, determinants of divorce rates assume great importance as indirect risk factors for several diseases and conditions that adversely affect health. We have undertaken a study in Athens, Greece, to evaluate whether sibship size, birth order, and the gender composition of spousal sibships are related to the probability of divorce. 358 high school students, aged between 15 and 17 years, satisfactorily completed anonymous questionnaires, indicating whether their natural parents have been separated or divorced, their parents' educational achievement, birth order and sibship size by gender. The study was analyzed as a twin case-control investigation, treating those divorced or separated as cases and those who were not divorced or separated as controls. A man who grew up as an only child was almost three times as likely to divorce compared to a man with siblings, and this association was highly significant (p approximately 0.004). There was no such evidence with respect to women. After controlling for sibship size, earlier born men--but not women--appeared to be at higher risk for divorce compared to those later born. There was no evidence that the gender structure of the sibship substantially affects the risk for divorce. Even though divorce is not an organic disease, it indirectly affects health as well as the social well-being. The findings of this study need to be replicated, but, if confirmed, they could contribute to our understanding of the roots of some instances of marital dysfunction.

  11. Credentialism, Adults, and Part-Time Higher Education in the United Kingdom: An Account of Rising Take Up and Some Implications for Policy.

    Science.gov (United States)

    Fuller, Alison

    2001-01-01

    Explains the growing importance of higher-level qualifications for adults in the UK, highlighting statistical trends in commitment to learning and qualifying-the result of taking part-time courses in higher education. Most part-time undergraduates fund their own tuition. Mature students' backgrounds and perspectives partly account for their rising…

  12. Likelihood analysis of the minimal AMSB model

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)

    2017-04-15

    We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, χ⁰₁, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces m(χ⁰₁) 0) but the scalar mass m₀ is poorly constrained. In the wino-LSP case, m₃/₂ is constrained to about 900 TeV and m(χ⁰₁) to 2.9 ± 0.1 TeV, whereas in the Higgsino-LSP case m₃/₂ has just a lower limit ≳ 650 TeV (≳ 480 TeV) and m(χ⁰₁) is constrained to 1.12 (1.13) ± 0.02 TeV in the μ > 0 (μ < 0) scenario. In neither case can the anomalous magnetic moment of the muon, (g−2)_μ, be improved significantly relative to its Standard Model (SM) value, nor do flavour measurements constrain the model significantly, and there are poor prospects for discovering supersymmetric particles at the LHC, though there are some prospects for direct DM detection. On the other hand, if the χ⁰₁ contributes only a fraction of the cold DM density, future LHC E_T-based searches for gluinos, squarks and heavier chargino and neutralino states as well as disappearing track searches in the wino-like LSP region will be relevant, and interference effects enable BR(B_{s,d} → μ⁺μ⁻) to agree with the data better than in the SM in the case of wino-like DM with μ > 0. (orig.)

  13. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang

    2015-10-08

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
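    The function-space property can be illustrated with the preconditioned Crank-Nicolson (pCN) sampler, the simplest member of this family of discretization-invariant samplers (a minimal sketch on an invented toy problem; DILI additionally adapts the proposal using likelihood-informed Hessian information, which is omitted here):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy Bayesian inverse problem (invented): Gaussian prior u ~ N(0, I) on
    # R^d, and a likelihood observing only the first coordinate,
    # y = u[0] + noise with noise variance sigma2.
    d, y, sigma2, beta = 100, 2.0, 1.0, 0.3

    def phi(u):                 # negative log-likelihood
        return (y - u[0]) ** 2 / (2 * sigma2)

    # pCN proposal v = sqrt(1 - beta^2) u + beta xi, xi ~ N(0, I), preserves
    # the prior exactly, so the acceptance ratio involves only the likelihood
    # -- the key to mixing rates that do not degrade as the discretization
    # dimension d grows.
    u = np.zeros(d)
    phi_u = phi(u)
    trace = []
    for _ in range(30_000):
        v = np.sqrt(1.0 - beta ** 2) * u + beta * rng.standard_normal(d)
        phi_v = phi(v)
        if np.log(rng.random()) < phi_u - phi_v:  # accept w.p. min(1, e^{phi_u - phi_v})
            u, phi_u = v, phi_v
        trace.append(u[0])

    # Gaussian conjugacy gives the posterior mean of u[0] as y/(1 + sigma2) = 1.0.
    print(np.mean(trace[5000:]))
    ```

    A standard random-walk Metropolis sampler on the same target would need a step size shrinking with d; the pCN acceptance rate is independent of d, which is the behaviour the DILI samplers inherit and improve upon.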

  14. Likelihood Analysis of Supersymmetric SU(5) GUTs

    CERN Document Server

    Bagnaschi, E.

    2017-01-01

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...

  15. Dimension-independent likelihood-informed MCMC

    KAUST Repository

    Cui, Tiangang; Law, Kody; Marzouk, Youssef M.

    2015-01-01

    Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.

  16. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-11-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra on large, dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
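    As a point of reference, the log-likelihood that such libraries accelerate can be written down with a dense Cholesky factorization (an illustrative NumPy sketch of the exponential, i.e. Matérn ν = 1/2, case with invented sizes; HLIBCov itself is C++ and replaces the dense $O(n^3)$ factor with an H-matrix one):

    ```python
    import numpy as np

    def matern12_loglik(y, pts, variance, length):
        """Gaussian log-likelihood for an exponential covariance, i.e. the
        Matern family with smoothness nu = 1/2: k(r) = variance*exp(-r/length).
        Dense O(n^3) Cholesky version -- the computation that hierarchical
        matrix approximations reduce to near-linear cost."""
        r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        K = variance * np.exp(-r / length) + 1e-10 * np.eye(len(y))  # jitter
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L, y)              # L^{-1} y
        logdet = 2.0 * np.log(np.diag(L)).sum()    # log det K from the factor
        return -0.5 * (alpha @ alpha + logdet + len(y) * np.log(2 * np.pi))

    # Synthetic data at random 2-D locations, generated with length 0.3:
    rng = np.random.default_rng(2)
    pts = rng.random((300, 2))
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    y = np.linalg.cholesky(np.exp(-r / 0.3) + 1e-10 * np.eye(300)) @ rng.standard_normal(300)

    # The log-likelihood prefers the true length scale over a badly wrong one:
    print(matern12_loglik(y, pts, 1.0, 0.3) > matern12_loglik(y, pts, 1.0, 0.01))
    ```

    Maximizing this function over (variance, length, smoothness), as HLIBCov does, is then a standard numerical optimization; the point of the H-matrix format is that each evaluation drops from cubic to near-linear cost.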

  17. Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E.

    2017-01-01

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra on large, dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  18. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.
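    The underlying idea of a simulation-based marginal likelihood can be sketched on a toy problem (a generic prior-sampling Monte Carlo estimate of the evidence for a one-parameter Gaussian model with invented numbers; this is not the paper's cluster-lensing statistic):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def norm_pdf(x, mu, var):
        return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    # Toy comparison (invented numbers): a single datum x ~ N(theta, 1);
    # model M1 puts the prior theta ~ N(0, 1), model M2 puts theta ~ N(3, 1).
    # The marginal likelihood Z = E_prior[likelihood] is approximated by
    # averaging the likelihood over draws from each prior.
    x = 0.5
    z1 = norm_pdf(x, rng.normal(0.0, 1.0, 200_000), 1.0).mean()
    z2 = norm_pdf(x, rng.normal(3.0, 1.0, 200_000), 1.0).mean()
    print(z1 / z2)   # Bayes factor for M1 over M2
    # Analytic check: Z_i = N(x | mu_i, 2), so the exact ratio is
    # exp(((x - 3)**2 - x**2) / 4) = exp(1.5) ~ 4.48 for x = 0.5.
    ```

    In the paper the likelihood of the summary statistic is not available in closed form, so the corresponding average is taken over simulated clusters under each cosmology rather than over analytic prior draws, but the Bayes-factor logic is the same.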

  19. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loève and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum increase in error of 3.8% for any single multipole at ℓ ≤ 32, and a maximum shift of 0.006σ in the mean values of a joint amplitude-tilt model distribution. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and also yields a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
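    The signal-to-noise eigenvector compression favoured above can be sketched in a few lines: solve the generalized eigenproblem S v = λ N v and keep the high-S/N modes. A minimal Python illustration, assuming toy low-rank signal and diagonal noise covariances rather than WMAP data:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 200                                   # "pixels"

# Toy stand-ins for the CMB signal covariance S and noise covariance N
A = rng.standard_normal((n, 20))
S = A @ A.T                               # low-rank signal covariance
N = np.diag(rng.uniform(0.5, 2.0, n))     # diagonal noise covariance

# Signal-to-noise (Karhunen-Loeve) basis: generalized eigenproblem S v = lam N v
lam, V = eigh(S, N)                       # eigenvalues in ascending order
B = V[:, lam > 0.1]                       # keep only modes with appreciable S/N

d = rng.multivariate_normal(np.zeros(n), S + N)
d_c = B.T @ d                             # compressed data vector
print(n, "pixels ->", B.shape[1], "modes")
```

    The likelihood is then evaluated on the compressed vector `d_c` with the correspondingly projected covariances, which is where the factor-of-several cost reduction comes from.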

  20. Higher-order Brain Areas Associated with Real-time Functional MRI Neurofeedback Training of the Somato-motor Cortex.

    Science.gov (United States)

    Auer, Tibor; Dewiputri, Wan Ilma; Frahm, Jens; Schweizer, Renate

    2018-05-15

    Neurofeedback (NFB) allows subjects to learn self-regulation of neuronal brain activation based on information about the ongoing activation. The implementation of real-time functional magnetic resonance imaging (rt-fMRI) for NFB training now facilitates the investigation of the underlying processes. Our study involved 16 control and 16 training right-handed subjects, the latter performing an extensive rt-fMRI NFB training using motor imagery. A previous analysis focused on the targeted primary somato-motor cortex (SMC). The present study extends the analysis to the supplementary motor area (SMA), the next higher brain area within the hierarchy of the motor system. We also examined transfer-related functional connectivity using a whole-volume psycho-physiological interaction (PPI) analysis to reveal brain areas associated with learning. The ROI analysis of the pre- and post-training fMRI data for motor imagery without NFB (transfer) showed a significant training-specific increase in the SMA. The contralateral SMA also exhibited a larger increase than the ipsilateral SMA in the training and transfer runs, and right-hand training elicited a larger increase in the transfer runs than left-hand training. The PPI analysis revealed a training-specific increase in transfer-related functional connectivity between the left SMA and frontal areas, as well as the anterior midcingulate cortex (aMCC), for right- and left-hand training. Moreover, transfer success was related to a training-specific increase in functional connectivity between the left SMA and the target area SMC. Our study demonstrates that NFB training increases functional connectivity with non-targeted brain areas, associated with the training strategy (i.e., SMA) as well as with learning the NFB skill (i.e., aMCC and frontal areas). This detailed description of both the system to be trained and the areas involved in learning can provide valuable information.

  1. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)

    2017-02-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)

  2. Likelihood analysis of supersymmetric SU(5) GUTs

    Energy Technology Data Exchange (ETDEWEB)

    Bagnaschi, E. [DESY, Hamburg (Germany); Costa, J.C. [Imperial College, London (United Kingdom). Blackett Lab.; Sakurai, K. [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomonology; Warsaw Univ. (Poland). Inst. of Theoretical Physics; Collaboration: MasterCode Collaboration; and others

    2016-10-15

    We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass m{sub 1/2}, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), m{sub 5} and m{sub 10}, and for the 5 and anti 5 Higgs representations m{sub H{sub u}} and m{sub H{sub d}}, a universal trilinear soft SUSY-breaking parameter A{sub 0}, and the ratio of Higgs vevs tan β. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + E{sub T} events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel u{sub R}/c{sub R} - χ{sup 0}{sub 1} coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of ν{sub τ} coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC.

  3. LASER: A Maximum Likelihood Toolkit for Detecting Temporal Shifts in Diversification Rates From Molecular Phylogenies

    Directory of Open Access Journals (Sweden)

    Daniel L. Rabosky

    2006-01-01

    Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models where diversification rates have changed over time with alternative models where rates have remained constant. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
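    The constant-rate versus rate-variable comparison at the heart of such tests is a likelihood ratio test. A stripped-down Python analogue, modelling waiting times between speciation events as exponential and profiling over a single rate-shift point (not LASER's birth-death likelihoods; the rates are invented):

```python
import numpy as np
from scipy.stats import chi2

def loglik_const(w):
    """Maximized log-likelihood of i.i.d. exponential waiting times, one rate."""
    lam = 1.0 / w.mean()
    return len(w) * np.log(lam) - lam * w.sum()

def loglik_shift(w, k):
    """Maximized log-likelihood with a rate shift after the k-th interval."""
    return loglik_const(w[:k]) + loglik_const(w[k:])

rng = np.random.default_rng(3)
# Simulated waiting times whose rate increases midway (5 -> 20 events per unit time)
w = np.concatenate([rng.exponential(1 / 5, 50), rng.exponential(1 / 20, 50)])

# Profile over the shift point, as rate-variable models profile over shift times
lr = 2.0 * (max(loglik_shift(w, k) for k in range(10, 90)) - loglik_const(w))
p = chi2.sf(lr, df=2)   # 2 extra parameters: the second rate and the shift point
```

    A small p-value favours the rate-variable model; LASER applies the same logic with the proper birth-death likelihoods on branching times.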

  4. Time does not heal all wounds: older adults who experienced childhood adversities have higher odds of mood, anxiety, and personality disorders.

    Science.gov (United States)

    Raposo, Sarah M; Mackenzie, Corey S; Henriksen, Christine A; Afifi, Tracie O

    2014-11-01

    We aimed to examine the prevalence of several types of childhood adversity across adult cohorts; whether age moderates the effect of childhood adversity on mental health; the relationship between childhood adversity and psychopathology among older adults; the dose-response relationship between the number of types of childhood adversities and mental disorders in later life; and whether lifetime mental health treatment reduces the odds of psychopathology among older survivors of childhood adversity. In a population-based, cross-sectional study of a nationally representative U.S. sample, we studied 34,653 community-dwelling Americans 20 years and older, including 7,080 adults 65 years and older, from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Trained lay interviewers assessed past-year mood and anxiety disorders and lifetime personality disorders. Participants self-reported childhood adversity based on questions from the Adverse Childhood Experiences Study. Childhood adversity was prevalent across five age cohorts. In our adjusted models, age did not moderate the effect of childhood adversity on mental disorders. Older adults who experienced childhood adversity had higher odds of having mood (odds ratio: 1.73; 95% confidence interval: 1.32-2.28), anxiety (odds ratio: 1.48; 95% confidence interval: 1.20-1.83), and personality disorders (odds ratio: 2.11; 95% confidence interval: 1.75-2.54) after adjusting for covariates. An increasing number of types of childhood adversities was associated with higher odds of personality disorders and somewhat higher odds of anxiety disorders. Treatment-seeking was associated with a reduced likelihood of anxiety and, especially, mood disorders in older adult survivors of childhood adversity. These results emphasize the importance of preventing childhood adversity, and of intervening once it occurs, to avoid negative mental health effects that can last into old age.

  5. The behavior of the likelihood ratio test for testing missingness

    OpenAIRE

    Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert

    2003-01-01

    To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can contrast the different missingness mechanisms with each other, e.g. via likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missingness mechanisms.
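    Probing the finite-sample null distribution of a likelihood ratio test by simulation is straightforward; a minimal Python illustration with a simple nested pair of Gaussian models (not the selection models of the paper) checks the small-sample rejection rate against the asymptotic χ² reference:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(9)

def lrt_stat(x):
    """2*(logL1 - logL0) for H0: mean = 0 vs H1: mean free, unit variance known;
    the algebra collapses to n * xbar**2."""
    return len(x) * x.mean() ** 2

# Null distribution of the statistic over many small samples (n = 10)
stats = np.array([lrt_stat(rng.standard_normal(10)) for _ in range(5000)])
frac = (stats > chi2.ppf(0.95, df=1)).mean()   # rejection rate at nominal level 0.05
```

    For non-standard settings such as contrasting missingness mechanisms, the simulated rejection rate can deviate markedly from the nominal level, which is exactly the behavior the paper studies.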

  6. Transfer Entropy as a Log-Likelihood Ratio

    Science.gov (United States)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
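    For finite Markov chains the stated equivalence is easy to demonstrate: the plug-in transfer entropy is (1/N times) a log-likelihood ratio for conditional independence, and 2N·TE is asymptotically χ² under the null. A small Python sketch with synthetic binary sequences and history length 1:

```python
import numpy as np
from collections import Counter
from scipy.stats import chi2

def transfer_entropy(x, y):
    """Plug-in transfer entropy Y -> X (history length 1, natural log)."""
    n = len(x) - 1
    c_full = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_next, x_prev, y_prev)
    c_xy = Counter(zip(x[:-1], y[:-1]))
    c_xx = Counter(zip(x[1:], x[:-1]))
    c_x = Counter(x[:-1].tolist())
    te = 0.0
    for (xn, xp, yp), c in c_full.items():
        p_joint = c / c_xy[(xp, yp)]               # p(x_next | x_prev, y_prev)
        p_marg = c_xx[(xn, xp)] / c_x[xp]          # p(x_next | x_prev)
        te += (c / n) * np.log(p_joint / p_marg)
    return te

rng = np.random.default_rng(4)
# Independent binary sequences: the null of zero transfer entropy holds
x = rng.integers(0, 2, 5000)
y = rng.integers(0, 2, 5000)
te = transfer_entropy(x, y)
stat = 2 * (len(x) - 1) * te     # the log-likelihood-ratio test statistic
p = chi2.sf(stat, df=2)          # df = 2 for binary processes with history 1
```

    Because the two sequences are independent, `stat` stays small and the χ² p-value is uninformative, as the null predicts; driving x with y would inflate both the transfer entropy and the statistic together.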

  7. Narrow band interference cancelation in OFDM: A structured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure-based technique exploits the fact that the NBI signal is sparse, compared to the ZP-OFDM signal, in the frequency domain. This structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data-aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  8. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, maximum likelihood estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function; in the Bayesian framework, this amounts to incorporating an inverted gamma prior into the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
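    The effect of such a penalty is concrete: with an inverted-gamma(a, b) prior on each component variance, the M-step variance update gains 2b in the numerator and 2a + 2 in the denominator, so no variance can collapse to zero. A Python sketch of a penalized EM for a univariate normal mixture (the hyperparameters and data are illustrative, not those of the paper):

```python
import numpy as np

def penalized_em(x, K=2, a=2.0, b=0.1, n_iter=200):
    """EM for a univariate normal mixture, penalized by an inverted-gamma(a, b)
    prior on each variance so that no sigma^2 estimate can collapse to zero."""
    n = len(x)
    w = np.full(K, 1.0 / K)
    mu = np.quantile(x, (np.arange(K) + 0.5) / K)   # spread-out initial means
    var = np.full(K, x.var())
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k]
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                - 0.5 * (x[:, None] - mu) ** 2 / var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: the penalty adds 2b to the scatter and 2a + 2 to the
        # effective count, keeping every variance estimate strictly positive
        Nk = r.sum(axis=0)
        w = Nk / n
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = ((r * (x[:, None] - mu) ** 2).sum(axis=0) + 2 * b) / (Nk + 2 * a + 2)
    return w, mu, var

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 0.5, 500)])
w, mu, var = penalized_em(x)
```

    Even if a component shrinks onto a single point, the 2b term keeps its variance bounded away from zero, which is the singularity the unpenalized likelihood suffers from.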

  9. A Time of Quiet Activism: Research, Practice, and Policy in American Women's Higher Education, 1945-1965

    Science.gov (United States)

    Eisenmann, Linda

    2005-01-01

    This article reflects on three narratives that affected American women's participation in higher education during the first twenty years after World War II. In hindsight, the educators of the 1950s and early 1960s may seem gratuitously meek and self-effacing. In comparison to later efforts, their activism can appear unnecessarily limited and too…

  10. Just-in-Time Research: A Call to Arms for Research into Mobile Technologies in Higher Education

    Science.gov (United States)

    Byrne-Davis, Lucie; Dexter, Hilary; Hart, Jo; Cappelli, Tim; Byrne, Ged; Sampson, Ian; Mooney, Jane; Lumsden, Colin

    2015-01-01

    Mobile technologies are becoming commonplace in society and in education. In higher education, it is crucial to understand the impact of constant access to information on the development of the knowledge and competence of the learner. This study reports on a series of four surveys completed by UK-based medical students (n = 443) who received…

  11. Funding System of Full-Time Higher Education and Technical Efficiency: Case of the University of Ljubljana

    Science.gov (United States)

    Tajnikar, Maks; Debevec, Jasmina

    2008-01-01

    The present paper tackles the issue of the higher education funding system in Slovenia. Its main attribute is that institutions are classified into study groups according to their fields of education, and funds granted by the state are based on their weights or study group factors (SGF). Analysis conducted using data envelopment analysis tested…

  12. A Lesson of Lost Political Capital in Public Higher Education: Leadership Challenges in a Time of Needed Organizational Change

    Science.gov (United States)

    Hilton, Mark; Jacobson, Rod

    2012-01-01

    All higher education institutions are struggling with a rapidly changing market and financial landscape. Here is a management-centered analysis of what happened when a college president, recognizing the need to make a radical adaptation to those changes, tried moving a campus community to a new organizational model, without collegial consensus,…

  13. Hard Times in Higher Education: The Closure of Subject Centres and the Implications for Education for Sustainable Development (ESD)

    Directory of Open Access Journals (Sweden)

    Brian Chalkley

    2011-04-01

    Within many British universities and, indeed, across higher education internationally, how best to provide education for sustainable development (ESD) has become an increasingly important issue. There is now a widespread view that higher education sectors have a key part to play in preparing societies for the transition to a low carbon economy and the shift towards more sustainable ways of living and working. In the UK, a leading role in this field has been played by the Higher Education Academy, and especially its network of 24 Subject Centres, each of which promotes curriculum enhancement in a particular discipline area. The mission of the Higher Education Academy has been to help raise the overall quality of the student learning experience across all disciplines and all higher education institutions (HEIs). As part of promoting and supporting many kinds of curriculum innovation and staff development, the HE Academy has championed the cause of ESD. Now, however, as a result of government spending cuts, the Academy is facing severe budget reductions and all its Subject Centres are soon to close. At this pivotal moment, the purpose of this paper is, therefore, to review the HE Academy's past contribution to ESD and to explore the likely future implications of the demise of its Subject Centres. The paper ends by outlining some ideas as to how the ESD agenda might be advanced in the post-Subject Centre era, in the light of the Academy's intention to support subject communities under its new structure. The paper has been developed through participation in key committees, engagement with Academy and Subject Centre staff, and a literature review.

  14. Examining the Potential Impact of Full Tuition Fees on Mature Part-Time Students in English Higher Education

    Science.gov (United States)

    Shaw, Angela

    2014-01-01

    This paper examines current part-time mature learners' views on the potential impact upon future students as full fees are introduced from 2012. It investigates the problems which part-time mature learners may face with the advent of student loans and subsequent debt, given that they are usually combining complex lives with their studies, with…

  15. Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model

    Science.gov (United States)

    Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.

    2011-01-01

    Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…

  16. Maximum likelihood positioning algorithm for high-resolution PET scanners

    International Nuclear Information System (INIS)

    Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar

    2016-01-01

    Purpose: In high-resolution positron emission tomography (PET), light-sharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the readout channel. In order to identify the hit crystal from the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions, given by probability density functions (PDFs), with the measured light distribution. Instead of modeling the PDFs analytically, the PDFs of the proposed ML algorithm are generated from measured data, assuming a single-gamma-interaction model. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and COG algorithms for incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison with the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
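    The core of such an ML positioning step, comparing a measured light distribution against per-crystal expected distributions and picking the most likely crystal, can be sketched briefly in Python. Here Gaussian channel profiles and Poisson counting statistics stand in for the PDFs the authors estimate from measured data; all geometry numbers are invented:

```python
import numpy as np

def ml_position(light, pdfs):
    """Return the crystal index whose expected light distribution makes the
    measured one most likely, assuming independent Poisson channel counts."""
    # Poisson log-likelihood up to a term constant in the crystal index
    loglik = light @ np.log(pdfs.T) - pdfs.sum(axis=1)
    return int(np.argmax(loglik))

rng = np.random.default_rng(6)
n_crystals, n_channels = 16, 8
# Toy PDFs: each crystal illuminates the channels with a distinct smooth
# profile (stand-ins for distributions estimated from measured data)
centers = np.linspace(0.0, n_channels - 1, n_crystals)
ch = np.arange(n_channels)
pdfs = 100.0 * np.exp(-0.5 * ((ch[None, :] - centers[:, None]) / 1.2) ** 2) + 1.0

true = 5
light = rng.poisson(pdfs[true])       # one measured light distribution
est = ml_position(light, pdfs)
```

    With more crystals than readout channels, nearest-neighbour confusions dominate the residual error, which is where a likelihood-value filter of the kind the authors study becomes useful.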

  17. The Hidden Benefits of Part-Time Higher Education Study to Working Practices: Is There a Case for Making Them More Visible?

    Science.gov (United States)

    Callender, Claire; Little, Brenda

    2015-01-01

    Within the UK, part-time study is now seen as important in meeting wider government objectives for higher education (HE) and for sustainable economic growth through skills development. Yet, measures to capture the impact of HE may not be wholly appropriate to part-time study. In particular, the continuing focus on tangible, economic measures may…

  18. Planck intermediate results: XVI. Profile likelihoods for cosmological parameters

    DEFF Research Database (Denmark)

    Bartlett, J.G.; Cardoso, J.-F.; Delabrouille, J.

    2014-01-01

    We explore the 2013 Planck likelihood function with a high-precision multi-dimensional minimizer (Minuit). This allows a refinement of the ΛCDM best-fit solution with respect to previously released results, and the construction of frequentist confidence intervals using profile likelihoods. The agr...

  19. Planck 2013 results. XV. CMB power spectra and likelihood

    DEFF Research Database (Denmark)

    Tauber, Jan; Bartlett, J.G.; Bucher, M.

    2014-01-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best...

  20. The modified signed likelihood statistic and saddlepoint approximations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1992-01-01

    SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.

  1. Flexible Pedagogies: Part-Time Learners and Learning in Higher Education. Flexible Pedagogies: Preparing for the Future Series

    Science.gov (United States)

    McLinden, Michael

    2013-01-01

    This publication focuses on national and international policy initiatives to develop a better understanding of part-time learners and the types of flexibility that may enhance their study especially pedagogically. As part of our five-strand research project "Flexible Pedagogies: preparing for the future" it: (1) highlights the challenges…

  2. Recruitment and Retention of Full-Time Engineering Faculty, Fall 1980. Higher Education Panel Report Number 52.

    Science.gov (United States)

    Atelsek, Frank J.; Gomberg, Irene L.

    The extent of faculty vacancies in colleges of engineering, the effects of such vacancies upon research and instructional programs, and the nature of the competition between academia and industry in hiring engineering faculty were surveyed. The focus is on permanent full-time faculty positions in the following major engineering fields:…

  3. The Economic Domino Effect: A Phenomenological Study Exploring Community College Faculty's Lived Experiences during Financial Hard Times in Higher Education

    Science.gov (United States)

    Taylor, Tridai A.

    2014-01-01

    This qualitative study explored the lived experiences of eight full-time community college faculty members who taught during the economic crisis of 2008. The study was guided by the central research question, "How do community college faculty members describe their lived experiences regarding the recent economic crisis of 2008 and its impact…

  4. Likelihood analysis of parity violation in the compound nucleus

    International Nuclear Information System (INIS)

    Bowman, D.; Sharapov, E.

    1993-01-01

    We discuss the determination of the root mean-squared matrix element M of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element, and of the errors assigned to it, is stressed. Using experimental data and Monte Carlo techniques, we treat both the situation where the spins of the p-wave resonances are known and the situation where they are not. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function
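    A deliberately simplified toy version of such a likelihood analysis: if each measured asymmetry is modelled as zero-mean Gaussian with variance M² plus its measurement variance, the rms matrix element M and a confidence interval follow from scanning the log-likelihood. This Python sketch is a stand-in for the compound-nucleus formalism, with all numbers invented:

```python
import numpy as np

def loglik(M, asym, err):
    """Log-likelihood for the rms matrix element M when each longitudinal
    asymmetry is zero-mean Gaussian with variance M**2 + its squared error."""
    var = M ** 2 + err ** 2
    return -0.5 * np.sum(asym ** 2 / var + np.log(2.0 * np.pi * var))

rng = np.random.default_rng(8)
M_true = 1.5                              # arbitrary units
err = rng.uniform(0.2, 0.5, 40)           # per-resonance measurement errors
asym = rng.normal(0.0, np.sqrt(M_true ** 2 + err ** 2))   # synthetic asymmetries

Ms = np.linspace(0.01, 5.0, 500)
ll = np.array([loglik(M, asym, err) for M in Ms])
M_hat = Ms[np.argmax(ll)]
# 68% interval: where the log-likelihood is within 1/2 of its maximum
inside = Ms[ll > ll.max() - 0.5]
lo, hi = inside[0], inside[-1]
```

    The half-unit drop rule gives the asymmetric confidence interval directly from the likelihood scan, which is the quantity whose reliability the abstract emphasizes.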

  5. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can arise if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (this research uses a developed version of Biome-BGC referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
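    The calibration loop described above, compare simulation with measurements through a likelihood function, then update the parameters, can be miniaturised. A random-walk Metropolis sketch in Python with a toy exponential-flux "model" and a Gaussian likelihood (not PaSim or Biome-BGC; flat priors and all values are assumed for illustration):

```python
import numpy as np

def model(theta, t):
    """Toy stand-in for a biogeochemical model: an exponentially decaying flux."""
    amp, rate = theta
    return amp * np.exp(-rate * t)

def log_likelihood(theta, t, obs, sigma=0.1):
    """Gaussian goodness-of-fit between simulated and measured data."""
    resid = obs - model(theta, t)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(t, obs, theta0, steps=20000, prop=0.05):
    """Random-walk Metropolis sampling of the posterior (flat priors assumed)."""
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    ll = log_likelihood(theta, t, obs)
    chain = np.empty((steps, len(theta)))
    for i in range(steps):
        cand = theta + prop * rng.standard_normal(len(theta))
        ll_cand = log_likelihood(cand, t, obs)
        if np.log(rng.random()) < ll_cand - ll:   # Metropolis acceptance rule
            theta, ll = cand, ll_cand
        chain[i] = theta
    return chain

rng = np.random.default_rng(7)
t = np.linspace(0.0, 5.0, 50)
obs = model((2.0, 0.8), t) + 0.1 * rng.standard_normal(50)   # synthetic measurements
chain = metropolis(t, obs, theta0=(1.0, 1.0))
amp_hat, rate_hat = chain[5000:].mean(axis=0)                # discard burn-in
```

    Swapping in a different likelihood formulation only changes `log_likelihood`, which is exactly the comparison the study performs with the full ecosystem models.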

  6. On the de Sitter and Nariai solutions in general relativity and their extension in higher dimensional space-time

    International Nuclear Information System (INIS)

    Nariai, Hidekazu; Ishihara, Hideki.

    1983-01-01

    Various geometrical properties of Nariai's less-familiar solution of the vacuum Einstein equations $R_{\mu\nu} = \Lambda g_{\mu\nu}$ are first summarized in comparison with de Sitter's well-known solution. Next, an extension of both solutions is performed in a six-dimensional space, on the supposition that such an extension will in the future become useful for elucidating more closely the creation of particles in an inflationary stage of the big-bang universe. As preparation, the behavior of a massive scalar field in the extended space-time is studied at the classical level. (author)

  7. Teachers' views of using e-learning for non-traditional students in higher education across three disciplines [nursing, chemistry and management] at a time of massification and increased diversity in higher education.

    Science.gov (United States)

    Allan, Helen T; O'Driscoll, Mike; Simpson, Vikki; Shawe, Jill

    2013-09-01

    The expansion of the higher education sector in the United Kingdom over the last two decades, to meet the political aspirations of successive governments and popular demand for participation in the sector (the Widening Participation Agenda), has overlapped with the introduction of e-learning. This paper describes teachers' views of using e-learning for non-traditional students in higher education across three disciplines [nursing, chemistry and management] at a time of massification and increased diversity in higher education. Design: a three-phase, mixed-methods study; this paper reports findings from phase two. Setting: one university in England. Participants: higher education teachers on the nursing, chemistry and management programmes. Methods: focus groups with these teachers. Findings show that teachers across the programmes have limited knowledge of whether students are non-traditional, or of what category of non-traditional status they might be in. Such knowledge as they have does not seem to influence the tailoring of teaching and learning for non-traditional students. Teachers in chemistry and nursing want more support from the university to improve their use of e-learning, as do teachers in management, though to a lesser extent. Our conclusions confirm other studies outside nursing which suggest that non-traditional students' learning needs have not been considered meaningfully in the development of universities' e-learning strategies. We suggest that this may be because teachers have been required to develop e-learning at the same time as they cope with the massification of, and widening participation in, higher education. The findings are of particular importance to nurse educators given the high number of non-traditional students on nursing programmes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Just-in-time research: a call to arms for research into mobile technologies in higher education

    Directory of Open Access Journals (Sweden)

    Lucie Byrne-Davis

    2015-04-01

    Mobile technologies are becoming commonplace in society and in education. In higher education, it is crucial to understand the impact of constant access to information on the development of the knowledge and competence of the learner. This study reports on a series of four surveys completed by UK-based medical students (n=443) who received tablet computers (iPads) from their medical school during their 4th year of study. Students were surveyed prior to receiving the iPads and again regarding their usage and experiences at 2, 6 and 12 months post receipt of tablets. Findings indicate that students differed in their use of iPads but that the majority felt that tablets had impacted on their learning and the majority were using them frequently (at least once a day) during learning. Almost half of the students reported that clinical supervisors had raised the possibility of tablets changing patient care. These results, although only descriptive, raise important questions about the impact of mobile technologies on learning.

  9. Ecuadorian youth, social and geographic mobility and higher education in Spain and Ecuador. Unequal educational trajectories in times of crises

    Directory of Open Access Journals (Sweden)

    Cristina Vega Solís

    2017-11-01

    In this paper we analyze the inequalities that emerge from and are reproduced in Ecuadorians' higher education trajectories in the context of economic crises. We focus on the strategies that these youth and their families employ for social mobility, as well as the role of public policy in these processes. We examine the trajectories of three groups: sons and daughters of the 2000 migration wave from Ecuador to Spain who study at universities in Spain, those who have returned to Ecuador for their studies, and Ecuadorians who move to Spain in order to carry out postgraduate studies, some of them funded by scholarships from the Ecuadorian government. The research project employed a qualitative methodology based on interviews, focus groups and a survey of Ecuadorians in Spain who took the entrance exam for admittance into Ecuador's public university system. Our findings highlight the varied forms of capital that these diverse students employ, as well as the social and economic constraints that they encounter. In a period of economic crisis in Spain, the first group of students must often downgrade their expectations in order to continue their studies. Their experience contrasts starkly with that of Ecuadorians undertaking postgraduate studies in Spain, whose heterogeneous trajectories are upwardly and geographically mobile. The case of university students returning to Ecuador shows us that education is inserted into a broader strategy that depends on transnational networks shaped over more than a decade of Ecuador-Spain migration.

  10. An Efficient UD-Based Algorithm for the Computation of Maximum Likelihood Sensitivity of Continuous-Discrete Systems

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Juhl, Rune; Madsen, Henrik

    2016-01-01

    This paper addresses maximum likelihood parameter estimation of continuous-time nonlinear systems with discrete-time measurements. We derive an efficient algorithm for the computation of the log-likelihood function and its gradient, which can be used in gradient-based optimization algorithms. This algorithm uses UD decomposition of symmetric matrices and the array algorithm for covariance update and gradient computation. We test our algorithm on the Lotka-Volterra equations. Compared to maximum likelihood estimation based on finite-difference gradient computation, we get a significant speedup...
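    The UD-based algorithm above is specialised machinery. As a minimal sketch of the same idea, maximum likelihood for a continuous-time model observed at discrete times, driven by an analytic gradient, the toy below fits the decay rate of x(t) = exp(-theta * t) from sampled values by Fisher scoring. The model, data and all names are our own illustrative choices, not taken from the paper.

```python
import math

def nll_grad_info(theta, ts, ys):
    # Gaussian negative log-likelihood (sigma = 1, constants dropped):
    # returns its analytic gradient in theta and the Fisher information,
    # which scales the update step below.
    grad = info = 0.0
    for t, y in zip(ts, ys):
        pred = math.exp(-theta * t)   # model prediction at time t
        r = y - pred                  # residual
        grad += r * t * pred          # d/dtheta of 0.5 * r**2
        info += (t * pred) ** 2       # Fisher information contribution
    return grad, info

def fit_decay(ts, ys, theta0=0.1, iters=100):
    theta = theta0
    for _ in range(iters):
        grad, info = nll_grad_info(theta, ts, ys)
        theta -= grad / info          # Fisher-scoring (Gauss-Newton) step
    return theta
```

With noise-free data the zero-residual problem lets Fisher scoring converge to the true rate; the paper's point is that structured (UD/array) computation of exactly these gradient quantities is much faster than finite differences.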

  11. Academic Work—Faster, Higher, Further? On the (Missing) Proportion of Work to Spare Time in the (Cultural) Sciences

    Directory of Open Access Journals (Sweden)

    Gert Dressel

    2008-01-01

    We make the practices of academic knowledge production a subject of critical discussion by focusing on the world of academic work and on academics themselves. Based on interviews with academics in the field of the cultural sciences, we conclude that, with regard to their daily routines, their annual schedules, and their life-courses, so-called private life (family life, leisure time, etc.) becomes dominated by the social and cultural logics of the working sphere. Although it might appear exaggerated, we refer to the humanities as a "total institution" which entails social, physical, and mental costs for its "inmates", as well as for those who never managed to become "inmates" (in spite of their efforts) and for those who no longer belong to the institution. URN: urn:nbn:de:0114-fqs0801385

  12. Spinor Green function in higher-dimensional cosmic string space-time in the presence of magnetic flux

    International Nuclear Information System (INIS)

    Spinelly, J.; Mello, E.R. Bezerra de

    2008-01-01

    In this paper we investigate the vacuum polarization effects associated with quantum fermionic charged fields in a generalized (d+1)-dimensional cosmic string space-time, considering the presence of a magnetic flux along the string. In order to develop this analysis we calculate a general expression for the respective Green function, valid for several different values of d, which is expressed in terms of a bispinor associated with the square of the Dirac operator. Adopting this result, we explicitly calculate the renormalized vacuum expectation values of the energy-momentum tensor, ⟨T_A^B⟩_Ren., associated with massless fields. Moreover, for specific values of the parameters which codify the cosmic string and the fractional part of the ratio of the magnetic flux to the quantum one, we were able to present the bispinor and the respective Green function for massive fields in closed form.

  13. Susceptibility, likelihood to be diagnosed, worry and fear for contracting Lyme disease.

    Science.gov (United States)

    Fogel, Joshua; Chawla, Gurasees S

    Risk perception and psychological concerns are relevant for understanding how people view Lyme disease. This study investigates the four separate outcomes of susceptibility, likelihood to be diagnosed, worry, and fear for contracting Lyme disease. University students (n=713) were surveyed about demographics, perceived health, Lyme disease knowledge, Lyme disease preventive behaviors, Lyme disease history, and Lyme disease miscellaneous variables. We found that women were associated with increased susceptibility and fear. Asian/Asian-American race/ethnicity was associated with increased worry and fear. Perceived good health was associated with increased likelihood to be diagnosed, worry, and fear. Correct knowledge was associated with increased susceptibility and likelihood to be diagnosed. Those who typically spend a lot of time outdoors were associated with increased susceptibility, likelihood to be diagnosed, worry, and fear. In conclusion, healthcare providers and public health campaigns should address susceptibility, likelihood to be diagnosed, worry, and fear about Lyme disease, and should particularly target women and Asians/Asian-Americans to address any possible misconceptions and/or offer effective coping strategies. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  14. The maize INDETERMINATE1 flowering time regulator defines a highly conserved zinc finger protein family in higher plants

    Directory of Open Access Journals (Sweden)

    Colasanti Joseph

    2006-06-01

    Background: The maize INDETERMINATE1 gene, ID1, is a key regulator of the transition to flowering and the founding member of a transcription factor gene family that encodes a protein with a distinct arrangement of zinc finger motifs. The zinc fingers and surrounding sequence make up the signature ID domain (IDD), which appears to be found in all higher plant genomes. The presence of zinc finger domains and previous biochemical studies showing that ID1 binds to DNA suggest that members of this gene family are involved in transcriptional regulation. Results: Comparison of IDD genes identified in the Arabidopsis and rice genomes, and of all IDD genes discovered in maize EST and genomic databases, suggests that ID1 is a unique member of this gene family. High levels of sequence similarity amongst all IDD genes from maize, rice and Arabidopsis suggest that they are derived from a common ancestor. Several unique features of ID1 suggest that it is a divergent member of the maize IDD family. Although no clear ID1 ortholog was identified in the Arabidopsis genome, highly similar genes that encode proteins with identity extending beyond the ID domain were isolated from rice and sorghum. Phylogenetic comparisons show that these putative orthologs, along with maize ID1, form a group separate from other IDD genes. In contrast to ID1 mRNA, which is detected exclusively in immature leaves, several maize IDD genes showed a broad range of expression in various tissues. Further, Western analysis with an antibody that cross-reacts with ID1 protein and potential orthologs from rice and sorghum shows that all three proteins are detected in immature leaves only. Conclusion: Comparative genomic analysis shows that the IDD zinc finger family is highly conserved among both monocots and dicots. The leaf-specific ID1 expression pattern distinguishes it from other maize IDD genes examined. A similar leaf-specific localization pattern was observed for the putative ID1 protein.

  15. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to the simulation of matchgate quantum circuits. This reduction, however, requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of the contraction. We benchmark the MPS-based decoder against the standard minimum-weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
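    The coset-summing idea behind MLD can be illustrated by brute force on the 3-bit repetition code under bit-flip noise: sum the probability of every error pattern consistent with the syndrome in each logical coset and pick the heavier coset. This toy is our own illustration of the principle, not the paper's matchgate or MPS constructions, which make the same sum tractable for the surface code.

```python
from itertools import product

CHECKS = [(0, 1), (1, 2)]   # the two parity checks of the repetition code

def syndrome(err):
    # syndrome bit = parity of the two bits each check touches
    return tuple(err[i] ^ err[j] for i, j in CHECKS)

def coset_probabilities(synd, p):
    # The logical operator flips all three bits, so the weight parity of
    # an error pattern identifies its logical coset (0 = identity, 1 = flip).
    probs = {0: 0.0, 1: 0.0}
    for err in product((0, 1), repeat=3):
        if syndrome(err) == synd:
            pr = 1.0
            for e in err:
                pr *= p if e else 1.0 - p   # independent bit-flip noise
            probs[sum(err) % 2] += pr
    return probs

def decode_ml(synd, p):
    # maximum likelihood decoding: return the most probable logical coset
    probs = coset_probabilities(synd, p)
    return max(probs, key=probs.get)
```

On a code this small, MLD agrees with minimum-weight decoding; the two diverge for larger codes with degenerate error classes, which is exactly the regime the paper's benchmarks probe.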

  16. Late time acceleration of the 3-space in a higher dimensional steady state universe in dilaton gravity

    International Nuclear Information System (INIS)

    Akarsu, Özgür; Dereli, Tekin

    2013-01-01

    We present cosmological solutions for a (1+3+n)-dimensional steady state universe in dilaton gravity with an arbitrary dilaton coupling constant w and exponential dilaton self-interaction potentials in the string frame. We focus particularly on the class in which the 3-space expands with a time-varying deceleration parameter. We discuss the number of the internal dimensions and the value of the dilaton coupling constant to determine the cases that are consistent with the observed universe and the primordial nucleosynthesis. The 3-space starts with a decelerated expansion rate and evolves into an accelerated expansion phase subject to the values of w and n, but ends with a Big Rip in all cases. We discuss the cosmological evolution in further detail for the cases w = 1 and w = ½ that permit exact solutions. We also comment on how the universe would be conceived by an observer in four dimensions who is unaware of the internal dimensions and thinks that conventional general relativity is valid at cosmological scales.

  18. Parallelization of maximum likelihood fits with OpenMP and CUDA

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; Pantaleo, F

    2011-01-01

    Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches for the minimum by using the gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time-consuming for complex functions with several free parameters, many independent variables and large data samples. Therefore, it becomes particularly important to speed up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...
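    The parallelization strategy described here relies on the negative log-likelihood being a sum over independent observations: each worker reduces a chunk of the data and the partial sums are combined. The sketch below shows that data-parallel decomposition for a toy Gaussian model; all names and the model are our own choices, and note that CPython threads share the GIL, so real speedups need OpenMP, processes, or a compiled kernel as in the paper.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def chunk_nll(args):
    # partial negative log-likelihood of one data chunk under N(mu, sigma^2)
    mu, sigma, chunk = args
    c = 0.5 * math.log(2.0 * math.pi * sigma * sigma)
    return sum(c + 0.5 * ((x - mu) / sigma) ** 2 for x in chunk)

def parallel_nll(mu, sigma, data, workers=4):
    # split the sum over observations into per-worker chunks, reduce each
    # chunk independently, then combine the partial sums
    n = max(1, len(data) // workers)
    chunks = [data[i:i + n] for i in range(0, len(data), n)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_nll, [(mu, sigma, c) for c in chunks]))
```

Because each chunk is independent, the decomposition is exact up to floating-point summation order, which is why fit results from the serial and parallel paths agree.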

  19. Uncertainty about the true source. A note on the likelihood ratio at the activity level.

    Science.gov (United States)

    Taroni, Franco; Biedermann, Alex; Bozza, Silvia; Comte, Jennifer; Garbolino, Paolo

    2012-07-10

    This paper focuses on likelihood ratio based evaluations of fibre evidence in cases in which there is uncertainty about whether or not the reference item available for analysis - that is, an item typically taken from the suspect or seized at his home - is the item actually worn at the time of the offence. A likelihood ratio approach is proposed that, for situations in which certain categorical assumptions can be made about additionally introduced parameters, converges to formulae described in the existing literature. The properties of the proposed likelihood ratio approach are analysed through sensitivity analyses and discussed with respect to possible argumentative implications that arise in practice. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. Planck 2013 results. XV. CMB power spectra and likelihood

    Science.gov (United States)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Laureijs, R. J.; Lawrence, C. R.; Le Jeune, M.; Leach, S.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. 
M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Orieux, F.; Osborne, S.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; White, M.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best estimate of the CMB angular power spectrum from Planck over three decades in multipole moment, ℓ, covering 2 ≤ ℓ ≤ 2500. The main source of uncertainty at ℓ ≲ 1500 is cosmic variance. Uncertainties in small-scale foreground modelling and instrumental noise dominate the error budget at higher ℓs. For ℓ impact of residual foreground and instrumental uncertainties on the final cosmological parameters. We find good internal agreement among the high-ℓ cross-spectra with residuals below a few μK² at ℓ ≲ 1000, in agreement with estimated calibration uncertainties. We compare our results with foreground-cleaned CMB maps derived from all Planck frequencies, as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≲ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, n_s = 1. Increasing the multipole range beyond ℓ ≃ 1500 does not increase our accuracy for the ΛCDM parameters, but instead allows us to study extensions beyond the standard model. We find no indication of significant departures from the ΛCDM framework. Finally, we report a tension between the Planck best-fit ΛCDM model and the low-ℓ spectrum in the form of a power deficit of 5-10% at ℓ ≲ 40, with a statistical significance of 2.5-3σ. Without a theoretically motivated model for

  3. Greenery in the university environment: Students' preferences and perceived restoration likelihood.

    Science.gov (United States)

    van den Bogerd, Nicole; Dijkstra, S Coosje; Seidell, Jacob C; Maas, Jolanda

    2018-01-01

    A large body of evidence shows that interaction with greenery can be beneficial for human stress reduction, emotional states, and improved cognitive function. It can, therefore, be expected that university students might benefit from greenery in the university environment. Before investing in real-life interventions in a university environment, it is necessary to first explore students' perceptions of greenery in the university environment. This study examined (1) preference for university indoor and outdoor spaces with and without greenery (2) perceived restoration likelihood of university outdoor spaces with and without greenery and (3) if preference and perceived restoration likelihood ratings were modified by demographic characteristics or connectedness to nature in Dutch university students (N = 722). Digital photographic stimuli represented four university spaces (lecture hall, classroom, study area, university outdoor space). For each of the three indoor spaces there were four or five stimuli conditions: (1) the standard design (2) the standard design with a colorful poster (3) the standard design with a nature poster (4) the standard design with a green wall (5) the standard design with a green wall plus interior plants. The university outdoor space included: (1) the standard design (2) the standard design with seating (3) the standard design with colorful artifacts (4) the standard design with green elements (5) the standard design with extensive greenery. Multi-level analyses showed that students gave higher preference ratings to the indoor spaces with a nature poster, a green wall, or a green wall plus interior plants than to the standard designs and the designs with the colorful posters. Students also rated preference and perceived restoration likelihood of the outdoor spaces that included greenery higher than those without. Preference and perceived restoration likelihood were not modified by demographic characteristics, but students with strong

  4. Imagination perspective affects ratings of the likelihood of occurrence of autobiographical memories.

    Science.gov (United States)

    Marsh, Benjamin U; Pezdek, Kathy; Lam, Shirley T

    2014-07-01

    Two experiments tested and confirmed the hypothesis that when the phenomenological characteristics of imagined events are more similar to those of related autobiographical memories, the imagined event is more likely to be considered to have occurred. At Time 1 and again 2 weeks later, individuals rated the likelihood of occurrence of 20 life events. In Experiment 1, 1 week after Time 1, individuals imagined 3 childhood events from a first-person or third-person perspective; there was also a no-imagination control. An increase in likelihood ratings from Time 1 to Time 2 resulted when imagination was from the third-person but not the first-person perspective. In Experiment 2, childhood and recent events were imagined from a third- or first-person perspective. A significant interaction resulted: for childhood events, likelihood change scores were greater for the third-person than the first-person perspective; for recent adult events, likelihood change scores were greater for the first-person than the third-person perspective, although this latter trend was not significant. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-01-01

    and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size 2M × 2M can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating the unknown parameters

  6. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    Science.gov (United States)

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…

  7. Posterior distributions for likelihood ratios in forensic science.

    Science.gov (United States)

    van den Hout, Ardo; Alberink, Ivo

    2016-09-01

    Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
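    The paper's recommendation, retain parameter uncertainty and summarise the likelihood ratio's posterior by its median and a credible interval rather than its mean, can be sketched in the simplest binomial setting: each match probability gets a Beta posterior, and LR samples are ratios of posterior draws. The counts, priors and names below are invented for illustration; the paper's body-height application is more involved.

```python
import random

def lr_posterior(k_p, n_p, k_d, n_d, draws=20_000, seed=1):
    # LR = theta_p / theta_d, with independent Beta(1, 1) priors updated
    # by binomial counts (k successes out of n) for each proposition
    rng = random.Random(seed)
    samples = []
    for _ in range(draws):
        theta_p = rng.betavariate(k_p + 1, n_p - k_p + 1)
        theta_d = rng.betavariate(k_d + 1, n_d - k_d + 1)
        samples.append(theta_p / theta_d)
    samples.sort()
    # summarise by the median and a 95% credible interval, not the mean
    med = samples[draws // 2]
    lo, hi = samples[int(0.025 * draws)], samples[int(0.975 * draws)]
    return med, (lo, hi)
```

Reporting the interval alongside the median makes the parameter uncertainty in the LR explicit instead of integrating it away.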

  8. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap data are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and the modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
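    The core of the method is the Laplace approximation to an intractable integral: expand the integrand's log around its mode and integrate the resulting Gaussian. A one-dimensional toy version, exact when the integrand is Gaussian, which makes a convenient check, is sketched below with our own naming; the paper applies the same idea to the integral over the latent spatial random effects.

```python
import math

def laplace(h, u0=0.0, eps=1e-5, iters=50):
    # Approximate I = integral of exp(h(u)) du by
    #   exp(h(u*)) * sqrt(2*pi / -h''(u*)),  u* = mode of h.
    u = u0
    for _ in range(iters):
        # Newton search for the mode with central finite differences
        g = (h(u + eps) - h(u - eps)) / (2 * eps)
        H = (h(u + eps) - 2 * h(u) + h(u - eps)) / (eps * eps)
        u -= g / H
    H = (h(u + eps) - 2 * h(u) + h(u - eps)) / (eps * eps)
    return math.exp(h(u)) * math.sqrt(2 * math.pi / -H)
```

For h(u) = -0.5*((u - m)/s)**2 the approximation reproduces the exact integral s*sqrt(2*pi); for a GLMM the same construction yields the approximate marginal likelihood that is then maximized over the model parameters.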

  9. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.; Qian, L.; Carroll, R. J.

    2010-01-01

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks

  10. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity in a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. In this paper, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
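The maximum likelihood fit of a two-component normal mixture is commonly computed with the EM algorithm; the following is a minimal sketch on synthetic data (the EM choice and all values are illustrative, not the paper's economic series):

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_normal(xs, iters=100):
    """Maximum likelihood fit of a two-component normal mixture via EM."""
    # Crude initialisation from the sorted data halves.
    xs_sorted = sorted(xs)
    half = len(xs_sorted) // 2
    lo_half, hi_half = xs_sorted[:half], xs_sorted[half:]
    mu = [sum(lo_half) / len(lo_half), sum(hi_half) / len(hi_half)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 0 for each point.
        r0 = []
        for x in xs:
            p0 = w[0] * normal_pdf(x, mu[0], sd[0])
            p1 = w[1] * normal_pdf(x, mu[1], sd[1])
            r0.append(p0 / (p0 + p1))
        # M-step: update mixing weights, means and standard deviations.
        n0 = sum(r0)
        n1 = len(xs) - n0
        w = [n0 / len(xs), n1 / len(xs)]
        mu = [sum(r * x for r, x in zip(r0, xs)) / n0,
              sum((1 - r) * x for r, x in zip(r0, xs)) / n1]
        sd = [max(1e-6, math.sqrt(sum(r * (x - mu[0]) ** 2
                                      for r, x in zip(r0, xs)) / n0)),
              max(1e-6, math.sqrt(sum((1 - r) * (x - mu[1]) ** 2
                                      for r, x in zip(r0, xs)) / n1))]
    return w, mu, sd

# Synthetic two-cluster data standing in for the paper's price series.
random.seed(1)
data = ([random.gauss(-2, 0.5) for _ in range(300)]
        + [random.gauss(3, 1.0) for _ in range(300)])
w, mu, sd = em_two_normal(data)
```

Each EM iteration cannot decrease the likelihood, which is why the procedure is a standard route to the MLE for mixtures.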

  11. Attitude towards, and likelihood of, complaining in the banking ...

    African Journals Online (AJOL)

    aims to determine customers' attitudes towards complaining as well as their likelihood of voicing a .... is particularly powerful and impacts greatly on customer satisfaction and retention. ...... 'Cross-national analysis of hotel customers' attitudes ...

  12. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.; Ma, Y.; Sang, H.

    2011-01-01

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝd at p≤d+1 sites, d≥1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p=2 to p=3 sites in ℝ2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  13. Incorporating Nuisance Parameters in Likelihoods for Multisource Spectra

    CERN Document Server

    Conway, J.S.

    2011-01-01

    We describe here the general mathematical approach to constructing likelihoods for fitting observed spectra in one or more dimensions with multiple sources, including the effects of systematic uncertainties represented as nuisance parameters, when the likelihood is to be maximized with respect to these parameters. We consider three types of nuisance parameters: simple multiplicative factors, source spectra "morphing" parameters, and parameters representing statistical uncertainties in the predicted source spectra.

  14. On the likelihood function of Gaussian max-stable processes

    KAUST Repository

    Genton, M. G.

    2011-05-24

    We derive a closed form expression for the likelihood function of a Gaussian max-stable process indexed by ℝd at p≤d+1 sites, d≥1. We demonstrate the gain in efficiency in the maximum composite likelihood estimators of the covariance matrix from p=2 to p=3 sites in ℝ2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.

  15. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan

    2014-05-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
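The taper-range weighting described above can be sketched generically: pairs farther apart than the taper range receive weight zero and drop out of the composite likelihood. The `pairwise_loglik` argument below is a hypothetical placeholder for a model-specific bivariate log-likelihood, and the indicator weights are the simplest possible choice:

```python
import math

def tapered_composite_loglik(sites, pairwise_loglik, taper_range):
    """Weighted pairwise composite log-likelihood.

    sites: list of (x, y) coordinates.
    pairwise_loglik: placeholder for the bivariate log-likelihood
        contribution of a pair of sites (model-specific in practice).
    taper_range: pairs farther apart than this get weight 0.
    """
    total = 0.0
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            dist = math.hypot(sites[i][0] - sites[j][0],
                              sites[i][1] - sites[j][1])
            if dist <= taper_range:  # indicator taper weight
                total += pairwise_loglik(sites[i], sites[j])
    return total

# Toy check: count the contributing pairs on a unit square with range 1
# (the two diagonal pairs, at distance sqrt(2), are excluded).
sites = [(0, 0), (1, 0), (0, 1), (1, 1)]
n_pairs = tapered_composite_loglik(sites, lambda a, b: 1.0, taper_range=1.0)
```

Excluding distant pairs this way is what cuts the computational cost, since the number of retained pairs grows roughly linearly rather than quadratically with the number of sites.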

  16. Dissociating response conflict and error likelihood in anterior cingulate cortex.

    Science.gov (United States)

    Yeung, Nick; Nieuwenhuis, Sander

    2009-11-18

    Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.

  17. Tapered composite likelihood for spatial max-stable models

    KAUST Repository

    Sang, Huiyan; Genton, Marc G.

    2014-01-01

    Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.

  18. Continuing professional education: Motivations and experiences of health and social care professionals' part-time study in higher education. A qualitative literature review.

    Science.gov (United States)

    Burrow, Simon; Mairs, Hilary; Pusey, Helen; Bradshaw, Timothy; Keady, John

    2016-11-01

To understand the motivations and experiences of health and social care professionals undertaking part-time, accredited, continuing professional education in higher education. A review following systematic principles. Systematic searches for literature published between January 2000 and December 2015 using the databases: SCOPUS, Web of Science, Medline, PsychINFO, Social Policy and Practice and CINAHL. Studies were included if they were published in the English language and were qualitative in design, focussing on the motivations and experiences of staff engaged in part-time, accredited, higher education study. Three reviewers appraised the quality of the selected studies. Thirteen qualitative studies were identified for the review. Motivating factors for staff to engage in part-time, accredited, continuing professional development study included: personal and professional drivers, the influence of workplace/management, and funding and availability. Key themes in relation to how staff experienced study included: the demands of adjusting to the academic requirements of higher education study; the experience of juggling competing demands of study, work and family; and the presence or absence of support for part-time study in the personal and professional arenas. Health and social care professionals experience a number of challenges when engaging in part-time, continuing professional education in higher education institutions. A significant challenge is the juggling of competing demands of study, work and family, and this may have a negative impact on learning. Research is needed to inform how higher education can address the specific learning needs of this population and develop pedagogic approaches that are both responsive to need and supportive of effective learning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A television in the bedroom is associated with higher weekday screen time among youth with attention deficit hyperactivity disorder (ADD/ADHD)

    OpenAIRE

    Lo, Charmaine B.; Waring, Molly E.; Pagoto, Sherry L.; Lemon, Stephenie C.

    2015-01-01

    Objective: A TV in the bedroom has been associated with screen time in youth. Youth with attention deficit hyperactivity disorder (ADD/ADHD) have higher rates of screen time, but associations with bedroom TVs are unknown in this population. We examined the association of having a bedroom TV with screen time among youth with ADD/ADHD. Methods: Data were from the 2007 National Survey of Children's Health. Youth 6–17 years whose parent/guardian reported a physician's diagnosis of ADD/ADHD (n ...

  20. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

Frome, E.L.; DuFrain, R.J.

    1983-10-01

In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.
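A minimal sketch of Poisson maximum likelihood for a linear-quadratic yield curve μ(d) = a·d + b·d², fitted here by a crude grid search rather than the paper's general-purpose Poisson regression procedure (the data and grid below are illustrative):

```python
import math

def poisson_loglik(counts, doses, a, b):
    """Poisson log-likelihood kernel for yield mu(d) = a*d + b*d**2
    (constant factorial terms are dropped)."""
    ll = 0.0
    for y, d in zip(counts, doses):
        mu = a * d + b * d * d
        if mu <= 0:
            return float("-inf")
        ll += y * math.log(mu) - mu
    return ll

def fit_grid(counts, doses, a_grid, b_grid):
    """Crude maximum likelihood estimate by exhaustive grid search."""
    return max(((a, b) for a in a_grid for b in b_grid),
               key=lambda ab: poisson_loglik(counts, doses, *ab))

# Toy data: expected counts from a = 2, b = 0.5 at doses 1..4 are used in
# place of random draws (non-integer y is fine for the log-likelihood
# kernel), so the MLE should sit exactly at the truth on this grid.
doses = [1, 2, 3, 4]
counts = [2 * d + 0.5 * d * d for d in doses]
grid = [x / 10 for x in range(1, 41)]
a_hat, b_hat = fit_grid(counts, doses, grid, grid)
```

In practice one would maximize with a proper optimizer and use the Poisson score and information for standard errors, but the grid search makes the likelihood surface concrete.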

  1. Ringing Artefact Reduction By An Efficient Likelihood Improvement Method

    Science.gov (United States)

    Fuderer, Miha

    1989-10-01

In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.

  2. Quantifying uncertainty, variability and likelihood for ordinary differential equation models

    LENUS (Irish Health Repository)

    Weisse, Andrea Y

    2010-10-28

    Abstract Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both, uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
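The "single extra dimension" idea can be illustrated for a one-dimensional ODE x' = f(x), where the log-density along a characteristic satisfies d(log ρ)/dt = −f'(x(t)). The Euler sketch below is a toy under that assumption, checked against the linear case where the answer is known analytically:

```python
import math

def evolve_density(f, fprime, x0, logrho0, t_end, n_steps=10_000):
    """Integrate the ODE x' = f(x) extended by one extra dimension for the
    log-density, d(log rho)/dt = -f'(x(t)) (method of characteristics).
    Plain Euler scheme; a sketch, not a production integrator."""
    dt = t_end / n_steps
    x, logrho = x0, logrho0
    for _ in range(n_steps):
        # Both updates use the state at the start of the step.
        x, logrho = x + dt * f(x), logrho - dt * fprime(x)
    return x, logrho

# Linear test case x' = -a*x: analytically x(t) = x0*exp(-a*t) and
# log rho(t) = log rho0 + a*t (density grows as trajectories contract).
a = 0.8
x_t, logrho_t = evolve_density(lambda x: -a * x, lambda x: -a,
                               x0=1.0, logrho0=0.0, t_end=2.0)
```

Because the density is carried along each trajectory, low-probability regions can be probed directly by starting characteristics there, which is the advantage over Monte Carlo noted in the abstract.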

  3. Maximum likelihood estimation for cytogenetic dose-response curves

    International Nuclear Information System (INIS)

    Frome, E.L.; DuFrain, R.J.

    1986-01-01

    In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure

  4. Physical activity may decrease the likelihood of children developing constipation.

    Science.gov (United States)

    Seidenfaden, Sandra; Ormarsson, Orri Thor; Lund, Sigrun H; Bjornsson, Einar S

    2018-01-01

    Childhood constipation is common. We evaluated children diagnosed with constipation, who were referred to an Icelandic paediatric emergency department, and determined the effect of lifestyle factors on its aetiology. The parents of children who were diagnosed with constipation and participated in a phase IIB clinical trial on laxative suppositories answered an online questionnaire about their children's lifestyle and constipation in March-April 2013. The parents of nonconstipated children that visited the paediatric department of Landspitali University Hospital or an Icelandic outpatient clinic answered the same questionnaire. We analysed responses regarding 190 children aged one year to 18 years: 60 with constipation and 130 without. We found that 40% of the constipated children had recurrent symptoms, 27% had to seek medical attention more than once and 33% received medication per rectum. The 47 of 130 control group subjects aged 10-18 were much more likely to exercise more than three times a week (72%) and for more than a hour (62%) than the 26 of 60 constipated children of the same age (42% and 35%, respectively). Constipation risk factors varied with age and many children diagnosed with constipation had recurrent symptoms. Physical activity may affect the likelihood of developing constipation in older children. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  5. Maximum likelihood estimation for cytogenetic dose-response curves

    Energy Technology Data Exchange (ETDEWEB)

Frome, E.L.; DuFrain, R.J.

    1983-10-01

In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ(γd + g(t, τ)d²), where t is the time and d is the dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure.

  6. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

Volatility measures the amplitude of price fluctuations. Despite it being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation but the method still yields very naturally these two important stylized facts. We apply the method to different market indices, with good performance in all cases. (paper)

  7. Does better access to FPs decrease the likelihood of emergency department use? Results from the Primary Care Access Survey.

    Science.gov (United States)

    Mian, Oxana; Pong, Raymond

    2012-11-01

To determine whether better access to FP services decreases the likelihood of emergency department (ED) use among the Ontario population. Population-based telephone survey. Ontario. A total of 8502 Ontario residents aged 16 years and older. Emergency department use in the 12 months before the survey. Among the general population, having a regular FP was associated with having better access to FPs for immediate care. Some respondents needed to see FPs for immediate care at least once a year; 63.1% of them had seen FPs without difficulties and were significantly less likely to use EDs than those who did not see FPs or had difficulties accessing physicians when needed (OR = 0.62). Further research is needed to understand what accounts for a higher likelihood of ED use among those with regular FPs, new immigrants, residents of northern and rural areas of Ontario, and people with low socioeconomic status when actual access and sociodemographic characteristics have been taken into consideration. More importantly, this study demonstrates the need to distinguish between potential and actual access to care, as having a regular FP and having timely and effective access to FP care might mean different things and have different effects on ED use.

  8. Redfield Ratios in Inland Waters: Higher Biological Control of C:N:P Ratios in Tropical Semi-arid High Water Residence Time Lakes

    Directory of Open Access Journals (Sweden)

    Ng H. They

    2017-08-01

    Full Text Available The canonical Redfield C:N:P ratio for algal biomass is often not achieved in inland waters due to higher C and N content and more variability when compared to the oceans. This has been attributed to much lower residence times and higher contributions of the watershed to the total organic matter pool of continental ecosystems. In this study we examined the effect of water residence times in low latitude lakes (in a gradient from humid to a semi-arid region on seston elemental ratios in different size fractions. We used lake water specific conductivity as a proxy for residence time in a region of Eastern Brazil where there is a strong precipitation gradient. The C:P ratios decreased in the seston and bacterial size-fractions and increased in the dissolved fraction with increasing water retention time, suggesting uptake of N and P from the dissolved pool. Bacterial abundance, production and respiration increased in response to increased residence time and intracellular nutrient availability in agreement with the growth rate hypothesis. Our results reinforce the role of microorganisms in shaping the chemical environment in aquatic systems particularly at long water residence times and highlights the importance of this factor in influencing ecological stoichiometry in all aquatic ecosystems.

  9. Novel penalised likelihood reconstruction of PET in the assessment of histologically verified small pulmonary nodules

    International Nuclear Information System (INIS)

    Teoh, Eugene J.; Gleeson, Fergus V.; McGowan, Daniel R.; Bradley, Kevin M.; Belcher, Elizabeth; Black, Edward

    2016-01-01

Investigate the effect of a novel Bayesian penalised likelihood (BPL) reconstruction algorithm on analysis of pulmonary nodules examined with 18F-FDG PET/CT, and to determine its effect on small, sub-10-mm nodules. 18F-FDG PET/CTs performed for nodule evaluation in 104 patients (121 nodules) were retrospectively reconstructed using the new algorithm, and compared to time-of-flight ordered subset expectation maximisation (OSEM) reconstruction. Nodule and background parameters were analysed semi-quantitatively and visually. BPL compared to OSEM resulted in statistically significant increases in nodule SUVmax (mean 5.3 to 8.1, p < 0.00001), signal-to-background (mean 3.6 to 5.3, p < 0.00001) and signal-to-noise (mean 24 to 41, p < 0.00001). Mean percentage increase in SUVmax (%ΔSUVmax) was significantly higher in nodules ≤10 mm (n = 31, mean 73 %) compared to >10 mm (n = 90, mean 42 %) (p = 0.025). Increase in signal-to-noise was higher in nodules ≤10 mm (224 %, mean 12 to 27) compared to >10 mm (165 %, mean 28 to 46). When applying optimum SUVmax thresholds for detecting malignancy, the sensitivity and accuracy increased using BPL, with the greatest improvements in nodules ≤10 mm. BPL results in a significant increase in signal-to-background and signal-to-noise compared to OSEM. When semi-quantitative analyses to diagnose malignancy are applied, higher SUVmax thresholds may be warranted owing to the SUVmax increase compared to OSEM. (orig.)
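How an SUVmax threshold translates into sensitivity and accuracy can be sketched as follows; the values are toy numbers for illustration, not the study's nodule data:

```python
def threshold_metrics(suv_values, is_malignant, threshold):
    """Classify nodules as malignant when SUVmax >= threshold and
    report (sensitivity, specificity, accuracy)."""
    pairs = list(zip(suv_values, is_malignant))
    tp = sum(1 for s, m in pairs if s >= threshold and m)
    fn = sum(1 for s, m in pairs if s < threshold and m)
    tn = sum(1 for s, m in pairs if s < threshold and not m)
    fp = sum(1 for s, m in pairs if s >= threshold and not m)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(pairs)
    return sens, spec, acc

# Toy values only: a reconstruction that inflates SUVmax shifts all values
# upward, so the threshold must rise to keep the operating point comparable.
suv = [1.2, 2.5, 3.1, 4.8, 6.0, 7.5]
malignant = [False, False, True, True, True, True]
sens, spec, acc = threshold_metrics(suv, malignant, threshold=3.0)
```

Sweeping the threshold over the observed SUVmax values and recomputing these metrics is the usual way the "optimum" cut-off is selected.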

  10. Novel penalised likelihood reconstruction of PET in the assessment of histologically verified small pulmonary nodules

    Energy Technology Data Exchange (ETDEWEB)

    Teoh, Eugene J.; Gleeson, Fergus V. [Oxford University Hospitals NHS Trust, Department of Radiology, Churchill Hospital, Oxford (United Kingdom); University of Oxford, Department of Oncology, Oxford (United Kingdom); McGowan, Daniel R. [University of Oxford, Department of Oncology, Oxford (United Kingdom); Oxford University Hospitals NHS Trust, Radiation Physics and Protection, Churchill Hospital, Oxford (United Kingdom); Bradley, Kevin M. [Oxford University Hospitals NHS Trust, Department of Radiology, Churchill Hospital, Oxford (United Kingdom); Belcher, Elizabeth; Black, Edward [Oxford University Hospitals NHS Trust, Department of Thoracic Surgery, John Radcliffe Hospital, Oxford (United Kingdom)

    2016-02-15

Investigate the effect of a novel Bayesian penalised likelihood (BPL) reconstruction algorithm on analysis of pulmonary nodules examined with 18F-FDG PET/CT, and to determine its effect on small, sub-10-mm nodules. 18F-FDG PET/CTs performed for nodule evaluation in 104 patients (121 nodules) were retrospectively reconstructed using the new algorithm, and compared to time-of-flight ordered subset expectation maximisation (OSEM) reconstruction. Nodule and background parameters were analysed semi-quantitatively and visually. BPL compared to OSEM resulted in statistically significant increases in nodule SUVmax (mean 5.3 to 8.1, p < 0.00001), signal-to-background (mean 3.6 to 5.3, p < 0.00001) and signal-to-noise (mean 24 to 41, p < 0.00001). Mean percentage increase in SUVmax (%ΔSUVmax) was significantly higher in nodules ≤10 mm (n = 31, mean 73 %) compared to >10 mm (n = 90, mean 42 %) (p = 0.025). Increase in signal-to-noise was higher in nodules ≤10 mm (224 %, mean 12 to 27) compared to >10 mm (165 %, mean 28 to 46). When applying optimum SUVmax thresholds for detecting malignancy, the sensitivity and accuracy increased using BPL, with the greatest improvements in nodules ≤10 mm. BPL results in a significant increase in signal-to-background and signal-to-noise compared to OSEM. When semi-quantitative analyses to diagnose malignancy are applied, higher SUVmax thresholds may be warranted owing to the SUVmax increase compared to OSEM. (orig.)

  11. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M⁻¹). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree.
Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  12. Pedagogical Change at Times of Change in the Higher Education System: An Exploration of Early Career Mentoring, Co-publication and Teaching & Learning Insights

    Directory of Open Access Journals (Sweden)

    Bill Boyd

    2015-03-01

Full Text Available Universities are at a time of change. Their social, political and economic conditions are under challenge, while technological change challenges curriculum design and implementation, requiring reconsiderations of teaching and learning practices. In this context, and as part of the conference session on Higher education in 2014: threshold, watershed or business as usual?, I reviewed an approach I have been trialling to support early- and mid-career academics in navigating this changing environment. This paper presents an illustrated essay on a human-scale approach to early- and mid-career mentoring through the establishment of small team-based research and writing projects. The essay provides examples of activities that, on the one hand, assist academics to develop the tools they need to navigate the new and evolving environment of higher education, while on the other hand directly address key pedagogical issues and provide new insight into teaching and learning in higher education.

  13. Smoking increases the likelihood of Helicobacter pylori treatment failure.

    Science.gov (United States)

    Itskoviz, David; Boltin, Doron; Leibovitzh, Haim; Tsadok Perets, Tsachi; Comaneshter, Doron; Cohen, Arnon; Niv, Yaron; Levi, Zohar

    2017-07-01

Data regarding the impact of smoking on the success of Helicobacter pylori (H. pylori) eradication are conflicting, partially due to the fact that sociodemographic status is associated with both smoking and H. pylori treatment success. We aimed to assess the effect of smoking on H. pylori eradication rates after controlling for sociodemographic confounders. Included were subjects aged 15 years or older, with a first-time positive ¹³C-urea breath test (¹³C-UBT) between 2007 and 2014, who underwent a second ¹³C-UBT after receiving clarithromycin-based triple therapy. Data regarding age, gender, socioeconomic status (SES), smoking (current smokers or "never smoked"), and drug use were extracted from the Clalit health maintenance organization database. Out of 120,914 subjects with a positive first-time ¹³C-UBT, 50,836 (42.0%) underwent a second ¹³C-UBT test. After excluding former smokers, 48,130 remained who were eligible for analysis. The mean age was 44.3±18.2 years, 69.2% were females, 87.8% were Jewish and 12.2% Arabs, and 25.5% were current smokers. The overall eradication failure rate was 33.3%: 34.8% in current smokers and 32.8% in subjects who never smoked. In a multivariate analysis, eradication failure was positively associated with current smoking (odds ratio {OR} 1.15, 95% CI 1.10-1.20). Smoking was found to significantly increase the likelihood of unsuccessful first-line treatment for H. pylori infection. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  14. Constraint likelihood analysis for a network of gravitational wave detectors

    International Nuclear Information System (INIS)

    Klimenko, S.; Rakhmanov, M.; Mitselmakher, G.; Mohanty, S.

    2005-01-01

    We propose a coherent method for detection and reconstruction of gravitational wave signals with a network of interferometric detectors. The method is derived by using the likelihood ratio functional for unknown signal waveforms. In the likelihood analysis, the global maximum of the likelihood ratio over the space of waveforms is used as the detection statistic. We identify a problem with this approach. In the case of an aligned pair of detectors, the detection statistic depends on the cross correlation between the detectors as expected, but this dependence disappears even for infinitesimally small misalignments. We solve the problem by applying constraints on the likelihood functional and obtain a new class of statistics. The resulting method can be applied to data from a network consisting of any number of detectors with arbitrary detector orientations. The method allows reconstruction of the source coordinates and the waveforms of the two polarization components of a gravitational wave. We study the performance of the method with numerical simulations and find the reconstruction of the source coordinates to be more accurate than in the standard likelihood method.

  15. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc [Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003 (United States)

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  16. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
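The PL CI construction is easy to demonstrate in a one-parameter toy problem (a Poisson mean rather than an IRT model; the data value is illustrative). One keeps every parameter value whose log-likelihood lies within chi2_{0.95,1}/2 ≈ 1.92 of the maximum:

```python
import math

# Profile-likelihood 95% CI for a Poisson mean (toy example, not IRT):
# keep every mu whose log-likelihood is within chi2_{0.95,1}/2 = 1.92
# of its maximum.
k = 7  # observed count

def loglik(mu):
    # Poisson log-likelihood, constant term -log(k!) dropped
    return k * math.log(mu) - mu

lmax = loglik(k)  # the MLE of a Poisson mean is the observed count
inside = [mu for mu in (i / 1000 for i in range(1, 20001))
          if lmax - loglik(mu) <= 1.92]
lo, hi = min(inside), max(inside)
print(lo, hi)
```

The resulting interval (roughly 3.0 to 13.5 for k = 7) is asymmetric about the MLE, unlike the symmetric Wald interval k ± 1.96·sqrt(k) ≈ (1.8, 12.2), which is one reason PL CIs can outperform Wald-type CIs.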

  17. Generalized empirical likelihood methods for analyzing longitudinal data

    KAUST Repository

    Wang, S.

    2010-02-16

    Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.

  18. Unbinned likelihood maximisation framework for neutrino clustering in Python

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)

    2016-07-01

    Although an astrophysical neutrino flux has been detected with IceCube, the sources of astrophysical neutrinos remain hidden up to now. A detection of a neutrino point source would be a smoking gun for hadronic processes and the acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient sources, point-like versus extended sources, et cetera. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. By implementing source scenarios in a modular way, likelihood searches of various kinds can be built in a user-friendly way, without sacrificing speed or memory management.
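The core object of such a framework, the unbinned likelihood, can be sketched in plain Python. The toy below (a one-dimensional Gaussian "source" over a uniform background, with invented parameters; this is not the API of the framework described here) maximises the likelihood over the signal fraction:

```python
import math
import random

# Toy unbinned likelihood fit: estimate the fraction of events drawn from a
# Gaussian "source" on top of a uniform background on [-10, 10], the same
# likelihood shape used in point-source searches. Illustrative sketch only.
random.seed(1)
N, true_frac = 1000, 0.2
data = [random.gauss(0.0, 1.0) if random.random() < true_frac
        else random.uniform(-10.0, 10.0) for _ in range(N)]

def neg_loglik(f):
    total = 0.0
    for x in data:
        sig = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)  # source pdf
        bkg = 1.0 / 20.0                                       # background pdf
        total -= math.log(f * sig + (1 - f) * bkg)
    return total

# Simple grid maximisation of the unbinned likelihood over the signal fraction.
best = min((f / 200 for f in range(1, 200)), key=neg_loglik)
print(best)  # close to the injected fraction of 0.2
```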

  19. Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis

    DEFF Research Database (Denmark)

    Jansson, Michael; Nielsen, Morten Ørregaard

    Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.

  20. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

    The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed.
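For the lifetime case, the Δln L = 1/2 recipe can be reproduced directly. The sketch below uses a single observed decay time t = 1 (an assumed value); the note's point is that the coverage of the interval found this way can be poor:

```python
import math

# "Delta ln L = 1/2" interval for a single lifetime measurement t = 1:
# ln L(tau) = -ln(tau) - t/tau, maximised at tau = t. The 68% interval is
# read off where the log-likelihood has fallen by 1/2 from its maximum.
t = 1.0

def loglik(tau):
    return -math.log(tau) - t / tau

lmax = loglik(t)  # maximum at tau = t
inside = [tau for tau in (i / 1000 for i in range(100, 10001))
          if lmax - loglik(tau) <= 0.5]
lo, hi = min(inside), max(inside)
print(lo, hi)
```

The interval comes out roughly (0.43, 3.31), strongly asymmetric about the maximum at tau = 1; comparing it with the exact Neyman construction is what exposes its poor coverage.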

  1. LDR: A Package for Likelihood-Based Sufficient Dimension Reduction

    Directory of Open Access Journals (Sweden)

    R. Dennis Cook

    2011-03-01

    We introduce a new Matlab software package that implements several recently proposed likelihood-based methods for sufficient dimension reduction. Current capabilities include estimation of reduced subspaces with a fixed dimension d, as well as estimation of d by use of likelihood-ratio testing, permutation testing and information criteria. The methods are suitable for preprocessing data for both regression and classification. Implementations of related estimators are also available. Although the software is more oriented to command-line operation, a graphical user interface is also provided for prototype computations.

  2. Likelihood ratio decisions in memory: three implied regularities.

    Science.gov (United States)

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
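For the equal-variance Gaussian version of such a model, the likelihood-ratio decision axis has a simple closed form. The sketch below (with assumed means of 0 for new items and 1 for old items) shows that log LR is linear in the memory-strength variable and zero midway between the two distributions:

```python
# Likelihood-ratio decision axis for an equal-variance Gaussian signal
# detection model of recognition memory. log LR is monotone in the
# strength x, so criteria placed on log LR map back to criteria on x.
def log_lr(x, mu_old=1.0, mu_new=0.0, sigma=1.0):
    # log [ f(x | old) / f(x | new) ] for Gaussian strength densities;
    # the shared normalising constants cancel.
    return ((x - mu_new) ** 2 - (x - mu_old) ** 2) / (2 * sigma ** 2)

print(log_lr(0.5))  # 0.0: equal evidence exactly halfway between the means
print(log_lr(1.0) > 0, log_lr(0.0) < 0)
```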

  3. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    Science.gov (United States)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal the novelty of this paper is first to build the Extended Structural Tensor representation from the intensity signal that conveys information on signal intensities, as well as on higher-order statistics of the input signals. This way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to high-dimensionality of the input data, tensor based methods require high memory and computational resources. However, recent achievements in the technology of the multi-core microprocessors and graphic cards allows real-time operation of the multidimensional methods as is shown and analyzed in this paper based on real examples of object detection in digital images.

  4. COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION BY A GAUSSIANIZED BLACKWELL-RAO ESTIMATOR

    International Nuclear Information System (INIS)

    Rudjord, Oe.; Groeneboom, N. E.; Eriksen, H. K.; Huey, Greg; Gorski, K. M.; Jewell, J. B.

    2009-01-01

    We introduce a new cosmic microwave background (CMB) temperature likelihood approximation called the Gaussianized Blackwell-Rao estimator. This estimator is derived by transforming the observed marginal power spectrum distributions obtained by the CMB Gibbs sampler into standard univariate Gaussians, and then approximating their joint transformed distribution by a multivariate Gaussian. The method is exact for full-sky coverage and uniform noise, and an excellent approximation for sky cuts and scanning patterns relevant for modern satellite experiments such as the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. The result is a stable, accurate, and computationally very efficient CMB temperature likelihood representation that allows the user to exploit the unique error propagation capabilities of the Gibbs sampler to high l. A single evaluation of this estimator between l = 2 and 200 takes ∼0.2 CPU milliseconds, while for comparison, a single pixel-space likelihood evaluation between l = 2 and 30 for a map with ∼2500 pixels requires ∼20 s. We apply this tool to the five-year WMAP temperature data, and re-estimate the angular temperature power spectrum, C_l, and likelihood, L(C_l), for l ≤ 200, and derive new cosmological parameters for the standard six-parameter ΛCDM model. Our spectrum is in excellent agreement with the official WMAP spectrum, but we find slight differences in the derived cosmological parameters. Most importantly, the spectral index of scalar perturbations is n_s = 0.973 ± 0.014, 1.9σ away from unity and 0.6σ higher than the official WMAP result, n_s = 0.965 ± 0.014. This suggests that an exact likelihood treatment is required to higher l than previously believed, reinforcing and extending our conclusions from the three-year WMAP analysis. In that case, we found that the suboptimal likelihood approximation adopted between l = 12 and 30 by the WMAP team biased n_s low by 0.4σ, while here we find that the same approximation

  5. Multimodal Personal Verification Using Likelihood Ratio for the Match Score Fusion

    Directory of Open Access Journals (Sweden)

    Long Binh Tran

    2017-01-01

    In this paper, the authors present a novel personal verification system based on the likelihood ratio test for fusion of match scores from multiple biometric matchers (face, fingerprint, hand shape, and palm print). In the proposed system, multimodal features are extracted by Zernike Moments (ZM). After matching, the match scores from the multiple biometric matchers are fused based on the likelihood ratio test. A finite Gaussian mixture model (GMM) is used for estimating the genuine and impostor densities of the match scores for personal verification. The approach is also compared with well-known alternatives such as the support vector machine and the sum rule with min-max normalization. The experimental results confirm that the proposed system achieves excellent identification performance, with higher accuracy than the compared approaches, and can thus be utilized in applications requiring person verification.
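A minimal sketch of likelihood-ratio score fusion, using single Gaussians in place of the paper's finite Gaussian mixtures and entirely invented score distributions for two hypothetical matchers:

```python
import math

# Likelihood-ratio fusion of match scores from two hypothetical matchers.
# Genuine and impostor score densities are modelled here as single Gaussians
# (the paper fits finite Gaussian mixtures); accept a claim when the product
# of per-matcher likelihood ratios exceeds a threshold.
def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# (mu_genuine, sigma_genuine, mu_impostor, sigma_impostor) per matcher: toy values.
models = [(0.8, 0.10, 0.3, 0.15), (0.7, 0.12, 0.4, 0.10)]

def fused_lr(scores):
    lr = 1.0
    for s, (mg, sg, mi, si) in zip(scores, models):
        lr *= gauss_pdf(s, mg, sg) / gauss_pdf(s, mi, si)
    return lr

print(fused_lr([0.75, 0.72]) > 1.0)  # genuine-looking score pair: True
print(fused_lr([0.35, 0.38]) > 1.0)  # impostor-looking score pair: False
```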

  6. Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters

    CERN Document Server

    Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chiang, H.C.; Christensen, P.R.; Clements, D.L.; Colombo, L.P.L.; Combet, C.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Desert, F.X.; Di Valentino, E.; Dickinson, C.; Diego, J.M.; Dolag, K.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Ducout, A.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Gerbino, M.; Giard, M.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hamann, J.; Hansen, F.K.; Harrison, D.L.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Holmes, W.A.; Hornstrup, A.; Huffenberger, K.M.; Hurier, G.; Jaffe, A.H.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Lawrence, C.R.; Le Jeune, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P.B.; Lilley, M.; Linden-Vornle, M.; Lindholm, V.; Lopez-Caniego, M.; Macias-Perez, J.F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Meinhold, P.R.; Melchiorri, A.; Migliaccio, M.; Millea, M.; Miville-Deschenes, M.A.; Moneti, A.; Montier, L.; Morgante, G.; 
Mortlock, D.; Munshi, D.; Murphy, J.A.; Narimani, A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T.J.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G.W.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; d'Orfeuil, B.Rouille; Rubino-Martin, J.A.; Rusholme, B.; Salvati, L.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Serra, P.; Spencer, L.D.; Spinelli, M.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-01-01

    This paper presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of CMB temperature and polarization. They use the hybrid approach employed previously: pixel-based at low multipoles, $\ell$, and a Gaussian approximation to the distribution of cross-power spectra at higher $\ell$. The main improvements are the use of more and better processed data and of Planck polarization data, and more detailed foreground and instrumental models. More than doubling the data allows further checks and enhanced immunity to systematics. Progress in foreground modelling enables a larger sky fraction, contributing to enhanced precision. Improvements in processing and instrumental models further reduce uncertainties. Extensive tests establish robustness and accuracy, from temperature, from polarization, and from their combination, and show that the $\Lambda$CDM model continues to offer a very good fit. We further validate the likelihood against specific extensions to this baseline, suc...

  7. Powdered alcohol: Awareness and likelihood of use among a sample of college students.

    Science.gov (United States)

    Vail-Smith, Karen; Chaney, Beth H; Martin, Ryan J; Don Chaney, J

    2016-01-01

    In March 2015, the Alcohol and Tobacco Tax and Trade Bureau approved the sale of Palcohol, the first powdered alcohol product to be marketed and sold in the U.S. Powdered alcohol is freeze-dried, and one individual-serving size packet added to 6 ounces of liquid is equivalent to a standard drink. This study assessed awareness of powdered alcohol and likelihood to use and/or misuse powdered alcohol among college students. Surveys were administered to a convenience sample of 1,841 undergraduate students. Only 16.4% of respondents had heard of powdered alcohol. After being provided a brief description of powdered alcohol, 23% indicated that they would use the product if available, and of those, 62.1% also indicated likelihood of misusing the product (eg, snorting it, mixing it with alcohol). Caucasian students (OR = 1.5) and hazardous drinkers (based on AUDIT-C scores; OR = 4.7) were significantly more likely to indicate likelihood of use. Hazardous drinkers were also six times more likely to indicate likelihood to misuse the product. These findings can inform upstream prevention efforts in states debating bans on powdered alcohol. In states where powdered alcohol will soon be available, alcohol education initiatives should be updated to include information on the potential risks of use and be targeted to those populations most likely to misuse. This is the first peer-reviewed study to assess the awareness of and likelihood to use and/or misuse powdered alcohol, a potentially emerging form of alcohol. © American Academy of Addiction Psychiatry.

  8. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    Science.gov (United States)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear as well as linear systems. This adaptive Kalman filter design has much potential for real-time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.

  9. Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
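As a quick illustration with toy sensitivity and specificity values (not figures from the article): LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity, and the post-test probability follows from Bayes' rule in odds form:

```python
# Likelihood ratios from sensitivity and specificity, and a post-test
# probability via Bayes' rule in odds form (toy numbers for illustration).
sens, spec = 0.90, 0.80
lr_pos = sens / (1 - spec)        # positive likelihood ratio, 4.5 here
lr_neg = (1 - sens) / spec        # negative likelihood ratio, 0.125 here

pretest_p = 0.30                  # assumed pre-test probability of disease
pretest_odds = pretest_p / (1 - pretest_p)
posttest_odds = pretest_odds * lr_pos   # update odds with a positive result
posttest_p = posttest_odds / (1 + posttest_odds)
print(round(lr_pos, 2), round(lr_neg, 3), round(posttest_p, 3))
```

A positive result from this test raises the probability of disease from 30% to about 66%, which is exactly the kind of bedside update likelihood ratios are designed for.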

  10. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  11. Planck 2013 results. XV. CMB power spectra and likelihood

    CERN Document Server

    Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, 
M.; Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-01-01

    We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in l, covering 2 <= l <= 2500. At l >= 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few uK^2 at l <= 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...

  12. Robust Gaussian Process Regression with a Student-t Likelihood

    NARCIS (Netherlands)

    Jylänki, P.P.; Vanhatalo, J.; Vehtari, A.

    2011-01-01

    This paper considers the robust and efficient implementation of Gaussian process regression with a Student-t observation model, which has a non-log-concave likelihood. The challenge with the Student-t model is the analytically intractable inference which is why several approximative methods have

  13. A simplification of the likelihood ratio test statistic for testing ...

    African Journals Online (AJOL)

    The traditional likelihood ratio test statistic for testing hypothesis about goodness of fit of multinomial probabilities in one, two and multi – dimensional contingency table was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...

  14. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  15. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Schweder, Tore

    2006-01-01

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  16. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to

  17. Likelihood-based inference for clustered line transect data

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge; Schweder, Tore

    The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference...

  18. Reconceptualizing Social Influence in Counseling: The Elaboration Likelihood Model.

    Science.gov (United States)

    McNeill, Brian W.; Stoltenberg, Cal D.

    1989-01-01

    Presents Elaboration Likelihood Model (ELM) of persuasion (a reconceptualization of the social influence process) as alternative model of attitude change. Contends ELM unifies conflicting social psychology results and can potentially account for inconsistent research findings in counseling psychology. Provides guidelines on integrating…

  19. Counseling Pretreatment and the Elaboration Likelihood Model of Attitude Change.

    Science.gov (United States)

    Heesacker, Martin

    1986-01-01

    Results of the application of the Elaboration Likelihood Model (ELM) to a counseling context revealed that more favorable attitudes toward counseling occurred as subjects' ego involvement increased and as intervention quality improved. Counselor credibility affected the degree to which subjects' attitudes reflected argument quality differences.…

  20. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January (2018) ISSN 0361-0926 R&D Projects: GA ČR GA13-34856S Institutional support: RVO:67985807 Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  1. Pendeteksian Outlier pada Regresi Nonlinier dengan Metode statistik Likelihood Displacement

    Directory of Open Access Journals (Sweden)

    Siti Tabi'atul Hasanah

    2012-11-01

    An outlier is an observation that differs markedly (is extreme) relative to the rest of the data, or, equivalently, one that does not follow the general pattern of the model. Outliers sometimes carry information that no other data point provides, which is why they should not simply be eliminated; an outlier can also be an influential observation. Many methods exist for detecting outliers. Previous studies have addressed outlier detection in linear regression; here, detection is developed for nonlinear regression, specifically multiplicative nonlinear regression. Detection uses the likelihood displacement (LD) statistic, a method that flags outliers by removing suspected observations and measuring the resulting change in the maximized likelihood. Parameters are estimated by maximum likelihood, giving the maximum likelihood estimates. Applying the LD method yields the observations suspected of being outliers. The accuracy of the LD method in detecting outliers is then demonstrated by comparing the MSE obtained with LD against the MSE of the ordinary regression fit. The test statistic used is Λ; the null hypothesis is rejected when an observation is shown to be an outlier.
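The likelihood displacement idea is easy to sketch for ordinary linear regression, even though the paper's setting is multiplicative nonlinear regression. Here LD_i = 2[l(theta_hat) − l(theta_hat_(i))], where theta_hat_(i) is the ML fit with case i deleted, evaluated on the full data (toy data with one planted outlier; the variance is profiled out on the evaluated data):

```python
import math

# Likelihood displacement (LD) for case deletion in simple linear regression,
# an illustrative sketch (the paper treats multiplicative nonlinear models):
# LD_i = 2 * [ l(theta_hat) - l(theta_hat_(i)) ], with theta_hat_(i) the ML
# fit after deleting observation i, evaluated on the full data.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1.1, 2.0, 2.9, 4.2, 5.1, 5.9, 12.0, 8.1]  # index 6 is a planted outlier

def fit(pairs):
    # Ordinary least squares = ML estimates of intercept and slope
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    return my - b * mx, b

def loglik(a, b, pairs):
    # Gaussian log-likelihood with the error variance profiled out
    n = len(pairs)
    sse = sum((y - a - b * x) ** 2 for x, y in pairs)
    return -0.5 * n * (math.log(2 * math.pi * sse / n) + 1)

full = list(zip(xs, ys))
l_full = loglik(*fit(full), full)
ld = [2 * (l_full - loglik(*fit(full[:i] + full[i + 1:]), full))
      for i in range(len(full))]
print(ld.index(max(ld)))  # the planted outlier shows the largest displacement
```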

  2. Clinical Paresthesia Atlas Illustrates Likelihood of Coverage Based on Spinal Cord Stimulator Electrode Location.

    Science.gov (United States)

    Taghva, Alexander; Karst, Edward; Underwood, Paul

    2017-08-01

    Concordant paresthesia coverage is an independent predictor of pain relief following spinal cord stimulation (SCS). Using aggregate data, our objective is to produce a map of paresthesia coverage as a function of electrode location in SCS. This retrospective analysis used x-rays, SCS programming data, and paresthesia coverage maps from the EMPOWER registry of SCS implants for chronic neuropathic pain. Spinal level of dorsal column stimulation was determined by x-ray adjudication and active cathodes in patient programs. Likelihood of paresthesia coverage was determined as a function of stimulating electrode location. Segments of paresthesia coverage were grouped anatomically. Fisher's exact test was used to identify significant differences in likelihood of paresthesia coverage as a function of spinal stimulation level. In the 178 patients analyzed, the most prevalent areas of paresthesia coverage were buttocks, anterior and posterior thigh (each 98%), and low back (94%). Unwanted paresthesia at the ribs occurred in 8% of patients. There were significant differences in the likelihood of achieving paresthesia, with higher thoracic levels (T5, T6, and T7) more likely to achieve low back coverage but also more likely to introduce paresthesia felt at the ribs. Higher levels in the thoracic spine were associated with greater coverage of the buttocks, back, and thigh, and with lesser coverage of the leg and foot. This paresthesia atlas uses real-world, aggregate data to determine likelihood of paresthesia coverage as a function of stimulating electrode location. It represents an application of "big data" techniques, and a step toward achieving personalized SCS therapy tailored to the individual's chronic pain. © 2017 International Neuromodulation Society.

  3. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an…
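A minimal sketch of the copula-evaluation step the record describes, assuming the error marginals have already been mapped to uniforms (the empirical-CDF transform learned from past forecast errors is omitted); this is an illustration, not the authors' code:

```python
import numpy as np
from scipy import stats

def gaussian_copula_logpdf(u, R):
    """Log-density of the Gaussian copula with correlation matrix R at a
    vector u of uniform marginals in (0, 1).  With z = Phi^{-1}(u):
    log c(u) = -0.5*log|R| - 0.5 * z^T (R^{-1} - I) z."""
    z = stats.norm.ppf(u)
    Rinv = np.linalg.inv(R)
    sign, logdet = np.linalg.slogdet(R)
    return -0.5 * logdet - 0.5 * z @ (Rinv - np.eye(len(u))) @ z

# With R = I this reduces to the independence copula: log-density ~ 0
u = np.array([0.2, 0.7, 0.55])
print(gaussian_copula_logpdf(u, np.eye(3)))  # ~ 0 (independence copula)
```

In a full likelihood, this copula term would be combined with the log-densities of the learned error marginals; the correlation matrix R carries the autocorrelation structure the record mentions.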

  4. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
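The idea can be sketched as a generalized likelihood ratio test for a mean shift in the residuals of the nominal linear model. The known-variance assumption, the detection threshold, and all names below are illustrative choices, not the paper's seventh-order steam generator setup:

```python
import numpy as np

def glr_mean_shift(residuals, sigma2):
    """GLR statistic for a mean shift in nominal-model residuals with known
    noise variance.  Under H0 the residuals are zero-mean; under H1 their
    mean is its MLE r_bar, giving GLR = n * r_bar**2 / (2 * sigma2)."""
    r = np.asarray(residuals, float)
    return len(r) * r.mean() ** 2 / (2.0 * sigma2)

def model_still_valid(residuals, sigma2, threshold=4.0):
    """Flag re-linearization when the GLR exceeds a chosen threshold
    (the threshold value is a tuning choice, not from the paper)."""
    return glr_mean_shift(residuals, sigma2) < threshold

# Deterministic stand-ins for residual sequences:
ok = np.sin(np.arange(200))   # near-zero-mean: model still matches
drift = ok + 1.0              # operating point has moved away
print(model_still_valid(ok, 1.0), model_still_valid(drift, 1.0))  # True False
```

When the statistic crosses the threshold, the process has moved far enough from the linearization point that a new linear model should be generated, which is the monitoring logic the record describes.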

  5. Modeling gene expression measurement error: a quasi-likelihood approach

    Directory of Open Access Journals (Sweden)

    Strimmer Korbinian

    2003-03-01

    Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also…

  6. Exclusion probabilities and likelihood ratios with applications to mixtures.

    Science.gov (United States)

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
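For a mixture without dropout, the RMNE statistic itself is straightforward to compute under Hardy-Weinberg equilibrium: a random person is not excluded when both of their alleles are among the alleles seen in the mixture. The allele frequencies below are invented for illustration:

```python
def rmne_locus(allele_freqs):
    """RMNE at one locus under Hardy-Weinberg: with p the summed population
    frequency of the alleles observed in the mixture, a random person's two
    alleles both fall in that set with probability p**2."""
    p = sum(allele_freqs)
    return p * p

def rmne(loci):
    """Multiply across independent loci."""
    out = 1.0
    for freqs in loci:
        out *= rmne_locus(freqs)
    return out

# Hypothetical mixture alleles with these (made-up) population frequencies
loci = [[0.10, 0.20, 0.05], [0.15, 0.30]]
print(rmne(loci))  # (0.35)^2 * (0.45)^2 ~ 0.0248
```

The record's inequality then says that, for an actual contributor, the expected LR is at least 1/RMNE, i.e. at least about 40 for this toy mixture.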

  7. Penalized likelihood fluence optimization with evolutionary components for intensity modulated radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Baydush, Alan H.; Marks, Lawrence B.; Das, Shiva K.

    2004-01-01

    A novel iterative penalized likelihood algorithm with evolutionary components for the optimization of beamlet fluences for intensity modulated radiation therapy (IMRT) is presented. This algorithm is designed to be flexible in terms of the objective function and automatically escalates dose, as long as the objective function increases and all constraints are met. For this study, the objective function employed was the product of target equivalent uniform dose (EUD) and fraction of target tissue within set homogeneity constraints. The likelihood component of the algorithm iteratively attempts to minimize the mean squared error between a homogeneous dose prescription and the actual target dose distribution. The updated beamlet fluences are then adjusted via a quadratic penalty function that is based on the dose-volume histogram (DVH) constraints of the organs at risk. The evolutionary components were included to prevent the algorithm from converging to a local maximum. The algorithm was applied to a prostate cancer dataset, with especially difficult DVH constraints on bladder, rectum, and femoral heads. Dose distributions were generated for manually selected sets of three-, four-, five-, and seven-field treatment plans. Additionally, a global search was performed to find the optimal orientations for an axial three-beam plan. The results from this optimal orientation set were compared to results for manually selected orientation (gantry angle) sets of 3- (0 deg., 90 deg., 270 deg.), 4- (0 deg., 90 deg., 180 deg., 270 deg.), 5- (0 deg., 50 deg., 130 deg., 230 deg., 310 deg.), and 7- (0 deg., 40 deg., 90 deg., 140 deg., 230 deg., 270 deg., 320 deg.) field axial treatment plans. For all the plans generated, all DVH constraints were met and the average optimization computation time was approximately 30 seconds. For the manually selected orientations, the algorithm was successful in providing a relatively homogeneous target dose distribution, while simultaneously satisfying…

  8. Likelihood to Use Employee Assistance Programs: The Effects of Sociodemographic, Social-Psychological, Sociocultural, Organizational, and Community Factors.

    Science.gov (United States)

    Hall, LaCheata, And Others

    1991-01-01

    Employees (n=62) from a large telephone communications company completed questionnaires assessing the relationship between likelihood to use Employee Assistance Program (EAP) services and five domains: sociodemographic, social-psychological, sociocultural, organizational, and community. Found that women and individuals in higher income and educational…

  9. A randomized trial to determine the impact on compliance of a psychophysical peripheral cue based on the Elaboration Likelihood Model.

    Science.gov (United States)

    Horton, Rachael Jane; Minniti, Antoinette; Mireylees, Stewart; McEntegart, Damian

    2008-11-01

    Non-compliance in clinical studies is a significant issue, but causes remain unclear. Utilizing the Elaboration Likelihood Model of persuasion, this study assessed the psychophysical peripheral cue 'Interactive Voice Response System (IVRS) call frequency' on compliance. 71 participants were randomized to once daily (OD), twice daily (BID) or three times daily (TID) call schedules over two weeks. Participants completed 30-item cognitive function tests at each call. Compliance was defined as proportion of expected calls within a narrow window (+/- 30 min around scheduled time), and within a relaxed window (-30 min to +4 h). Data were analyzed by ANOVA and pairwise comparisons adjusted by the Bonferroni correction. There was a relationship between call frequency and compliance. Bonferroni adjusted pairwise comparisons showed significantly higher compliance (p=0.03) for the BID (51.0%) than TID (30.3%) for the narrow window; for the extended window, compliance was higher (p=0.04) with OD (59.5%), than TID (38.4%). The IVRS psychophysical peripheral cue call frequency supported the ELM as a route to persuasion. The results also support OD strategy for optimal compliance. Models suggest specific indicators to enhance compliance with medication dosing and electronic patient diaries to improve health outcomes and data integrity respectively.

  10. Balance improvement and reduction of likelihood of falls in older women after Cawthorne and Cooksey exercises.

    Science.gov (United States)

    Ribeiro, Angela dos Santos Bersot; Pereira, João Santos

    2005-01-01

    The vestibular system is the absolute reference for the maintenance of balance. Functional deficits with aging can result in balance disturbances and an increased likelihood of falls. Objective: to verify whether a specific therapeutic approach to this system can promote motor learning and contribute to improved balance and a decreased likelihood of falls. Design: prospective clinical study. Fifteen women, aged 60 to 69 years (mean 64.8 ± 2.95), resident in Barra Mansa-RJ, performed Cawthorne and Cooksey exercises for three months, three times a week, sixty minutes per session. They were evaluated with the Berg Balance Scale (BBS), whose scores determine the likelihood of falling. Comparing the data obtained before and after the intervention, we observed a significant difference (p…) in these elderly people.

  11. Constructing valid density matrices on an NMR quantum information processor via maximum likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in

    2016-09-07

    Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using the maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time, in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
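A common way such ML reconstructions guarantee positivity at every step is to optimize over an unconstrained matrix T and report rho = T†T / tr(T†T). The sketch below only demonstrates that this parametrization always yields a physically valid state; it is not the authors' tomography code, and the matrix T is random for illustration:

```python
import numpy as np

def density_from_T(T):
    """Any complex matrix T gives a valid density matrix
    rho = T^dagger T / tr(T^dagger T): Hermitian, positive semidefinite,
    unit trace.  ML tomography can therefore optimize freely over T
    without ever leaving the physical state space."""
    M = T.conj().T @ T
    return M / np.trace(M).real

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = density_from_T(T)
print(np.trace(rho).real, np.linalg.eigvalsh(rho).min() >= -1e-12)
```

In an actual ML fit, T would be adjusted to maximize the likelihood of the measured NMR expectation values, with positivity holding automatically throughout.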

  12. Likelihood updating of random process load and resistance parameters by monitoring

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2003-01-01

    Spectral parameters for a stationary Gaussian process are most often estimated by Fourier transformation of a realization followed by some smoothing procedure. This smoothing is often a weighted least square fitting of some prespecified parametric form of the spectrum. In this paper it is shown that maximum likelihood estimation is a rational alternative to an arbitrary weighting for least square fitting. The derived likelihood function gets singularities if the spectrum is prescribed with zero values at some frequencies; this is often the case for models of technically relevant processes. The likelihood function, even though it is of complicated mathematical form, allows an approximate Bayesian updating and control of the time development of the parameters. Some of these parameters can be structural parameters that, by changing too much, reveal progressing damage or other malfunctioning. Thus current process…

  13. Watching television for more than two hours increases the likelihood of reporting poor sleep quality among Brazilian schoolteachers.

    Science.gov (United States)

    de Souza, Sara Carolina Scremin; Campanini, Marcela Zambrim; de Andrade, Selma Maffei; González, Alberto Durán; de Melo, Juliana Moura; Mesas, Arthur Eumann

    2017-10-01

    Although time spent watching television and sleep problems have increased in the last few decades, it is unclear whether these conditions are associated in working adults after controlling for lifestyle, job characteristics and other individual aspects. The present study analyzed the association between time spent watching television and sleep quality among teachers from public schools in Londrina, Brazil. In this cross-sectional study, information from the Pittsburgh Sleep Quality Index (PSQI) and about time spent watching television was obtained during personal interviews. Logistic regression models adjusted for the main confounders (sociodemographic, occupational and lifestyle variables) were used in the analyses. Among the 959 studied teachers (68.2% women, median age: 42 years), those who watched >120 min/day had a higher likelihood of reporting poor sleep quality (PSQI>5) (odds ratio=1.41; 95% confidence interval=1.01; 1.98) compared with those who watched television for up to 60 min/day, regardless of gender, age, work hours, leisure-time physical activity and other lifestyle variables. This association did not remain significant after adjustment for health conditions, i.e., obesity, anxiety, depression and chronic pain, which may act as confounding variables in the relationship between watching television and poor sleep quality. Watching television for >120 min/day was independently associated with poorer sleep quality, which should be considered in the prevention and treatment of sleep disturbances in the working population. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS

    Directory of Open Access Journals (Sweden)

    S. Sridevi

    2013-02-01

    Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and obscures the true pixel intensities. In fetal ultrasound images, edges and local fine details are especially important for obstetricians and gynecologists carrying out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore be designed to proficiently suppress speckle noise while simultaneously preserving these features. The proposed filter is a generalization of the Rayleigh maximum likelihood filter that exploits statistical tools as tuning parameters and uses differently shaped quadrilateral kernels to estimate the noise-free pixel from the neighborhood. The performance of several filters, namely the Median, Kuwahara, Frost, homogeneous mask, and Rayleigh maximum likelihood filters, is compared with the proposed filter in terms of PSNR and image profile. Comparatively, the proposed filter surpasses the conventional filters.
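A simplified version of the underlying Rayleigh ML estimation step, using a plain square kernel (the paper's actual contribution, the quadrilateral kernels and statistical tuning parameters, is not reproduced here; taking the scale estimate itself as the filter output is also a simplification):

```python
import numpy as np

def rayleigh_mle_filter(img, k=3):
    """Sketch of Rayleigh maximum-likelihood despeckling with a square
    k x k kernel.  For Rayleigh-distributed samples x_i, the ML estimate
    of the scale parameter is sigma_hat = sqrt(sum(x_i**2) / (2*N)); the
    filtered pixel is taken as that estimate of the local window."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = padded[i:i + k, j:j + k]
            out[i, j] = np.sqrt(np.sum(win ** 2) / (2 * win.size))
    return out

img = np.full((5, 5), 10.0)
print(rayleigh_mle_filter(img)[2, 2])  # constant region -> sqrt(100/2) ~ 7.07
```

The quadrilateral-kernel generalization in the record amounts to replacing the square window above with differently shaped neighborhoods and choosing among them with statistical criteria.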

  15. Likelihood inference for a nonstationary fractional autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2010-01-01

    This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume...
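The fractional difference (1-L)^d underlying such models can be applied with a simple recursion for its binomial coefficients; truncating at the sample start below mirrors the convention of conditioning on (bounded) initial values. This is a generic illustration, not the authors' estimation code:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - L)^d to x, truncating the infinite filter at the sample
    start (pre-sample values are treated as given initial values)."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

x = np.arange(1.0, 8.0)
print(frac_diff(x, 1.0))  # d = 1 gives ordinary first differences after t = 0
```

For non-integer d the weights decay only hyperbolically, which is exactly why infinitely many past values matter and why the treatment of initial values in the record is a substantive modeling choice.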

  16. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    Energy Technology Data Exchange (ETDEWEB)

    Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
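An unbinned maximum likelihood fit for an idealized polarimeter can be sketched by maximizing the likelihood of the standard azimuthal modulation model directly on event angles, rather than fitting a sinusoid to a histogram. The simulated data, starting values, and names here are illustrative, not the COSI analysis:

```python
import numpy as np
from scipy.optimize import minimize

def negloglike(params, phi):
    """Unbinned negative log-likelihood for the azimuthal scattering model
    f(phi) = (1 / (2*pi)) * (1 + A * cos(2 * (phi - phi0)))."""
    A, phi0 = params
    f = (1.0 + A * np.cos(2.0 * (phi - phi0))) / (2.0 * np.pi)
    if np.any(f <= 0):
        return np.inf
    return -np.sum(np.log(f))

# Simulate modulated azimuthal scattering angles by rejection sampling
rng = np.random.default_rng(3)
A_true, phi0_true = 0.4, 0.6
phi = []
while len(phi) < 5000:
    cand = rng.uniform(0, 2 * np.pi)
    if rng.uniform(0, 1.4) < 1.0 + A_true * np.cos(2 * (cand - phi0_true)):
        phi.append(cand)
phi = np.array(phi)

res = minimize(negloglike, x0=[0.1, 0.0], args=(phi,), method="Nelder-Mead")
A_hat, phi0_hat = res.x
print(round(A_hat, 2), round(phi0_hat, 2))  # |A| near 0.4; phi0 near 0.6
```

The gain reported in the record comes from weighting each event by its individual response rather than binning, which this toy version only hints at; phi0 is recovered up to the period-pi ambiguity of the modulation.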

  17. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    Science.gov (United States)

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  18. Deformation of log-likelihood loss function for multiclass boosting.

    Science.gov (United States)

    Kanamori, Takafumi

    2010-09-01

    The purpose of this paper is to study loss functions in multiclass classification. In classification problems, the decision function is estimated by minimizing an empirical loss function, and then, the output label is predicted by using the estimated decision function. We propose a class of loss functions which is obtained by a deformation of the log-likelihood loss function. There are four main reasons why we focus on the deformed log-likelihood loss function: (1) this is a class of loss functions which has not been deeply investigated so far, (2) in terms of computation, a boosting algorithm with a pseudo-loss is available to minimize the proposed loss function, (3) the proposed loss functions provide a clear correspondence between the decision functions and conditional probabilities of output labels, (4) the proposed loss functions satisfy the statistical consistency of the classification error rate which is a desirable property in classification problems. Based on (3), we show that the deformed log-likelihood loss provides a model of mislabeling which is useful as a statistical model of medical diagnostics. We also propose a robust loss function against outliers in multiclass classification based on our approach. The robust loss function is a natural extension of the existing robust loss function for binary classification. A model of mislabeling and a robust loss function are useful to cope with noisy data. Some numerical studies are presented to show the robustness of the proposed loss function. A mathematical characterization of the deformed log-likelihood loss function is also presented. Copyright 2010 Elsevier Ltd. All rights reserved.
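The correspondence in point (3) is easiest to see in the undeformed base case of the family the record studies, where the multiclass log-likelihood loss pairs with the softmax link between decision functions and conditional probabilities:

```python
import numpy as np

def softmax(f):
    """Map decision-function values f_y to conditional probabilities
    p(y|x) -- the correspondence exploited by the log-likelihood loss."""
    e = np.exp(f - np.max(f))
    return e / e.sum()

def log_likelihood_loss(f, y):
    """Undeformed multiclass log-likelihood (cross-entropy) loss for label y;
    the deformations studied in the paper modify this base case."""
    return -np.log(softmax(f)[y])

f = np.array([2.0, 0.5, -1.0])  # decision scores for 3 classes
print(softmax(f))               # probabilities summing to 1
print(log_likelihood_loss(f, 0) < log_likelihood_loss(f, 2))  # True
```

A deformation replaces the log above with a deformed logarithm, trading off this clean probabilistic interpretation against robustness to outliers and mislabeling, as the record discusses.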

  19. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet, Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  20. Menyoal Elaboration Likelihood Model (ELM) dan Teori Retorika

    OpenAIRE

    Yudi Perbawaningsih

    2012-01-01

    Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series intended to persuade students in choosing their concentration of study. The result shows that, in terms of persuasion effectiveness, it is not quite relevant to separate the message from its source. The quality of the source is determined by the quality of ...

  1. Corporate governance effect on financial distress likelihood: Evidence from Spain

    Directory of Open Access Journals (Sweden)

    Montserrat Manzaneque

    2016-01-01

    Full Text Available The paper explores some mechanisms of corporate governance (ownership and board characteristics) in Spanish listed companies and their impact on the likelihood of financial distress. An empirical study was conducted between 2007 and 2012 using a matched-pairs research design with 308 observations, half of them classified as distressed and half as non-distressed. Based on the previous study by Pindado, Rodrigues, and De la Torre (2008), a broader concept of bankruptcy is used to define business failure. Employing several conditional logistic models, and in line with previous studies on bankruptcy, the results confirm that in difficult situations prior to bankruptcy, the impact of board ownership and the proportion of independent directors on the likelihood of business failure is similar to that exerted in more extreme situations. The results go one step further and offer a negative relationship between board size and the likelihood of financial distress. This result is interpreted as a means of creating diversity and improving access to information and resources, especially in contexts where ownership is highly concentrated and large shareholders have great power to influence the board structure. However, the results confirm that ownership concentration does not have a significant impact on financial distress likelihood in the Spanish context. It is argued that large shareholders are passive with regard to enhanced monitoring of management and, alternatively, do not have enough incentives to hold back the financial distress. These findings have important implications in the Spanish context, where several changes in the regulatory listing requirements with respect to corporate governance have been carried out, and where there is no empirical evidence in this respect.

  2. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  3. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  4. Menyoal Elaboration Likelihood Model (ELM) Dan Teori Retorika

    OpenAIRE

    Perbawaningsih, Yudi

    2012-01-01

    Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series intended to persuade students in choosing their concentration of study. The result shows that, in terms of persuasion effectiveness, it is not quite relevant to separate the message from its source. The quality of the source is determined by the quality of the mess...

  5. Penggunaan Elaboration Likelihood Model dalam Menganalisis Penerimaan Teknologi Informasi

    OpenAIRE

    vitrian, vitrian2

    2010-01-01

    This article discusses some technology acceptance models in an organization. Thorough analysis of how a technology becomes acceptable helps managers plan the implementation of new technology and ensure that the new technology can enhance the organization's performance. The Elaboration Likelihood Model (ELM) is one model that sheds light on behavioral factors in the acceptance of information technology. The basic tenet of the ELM states that human behavior can in principle be influenced through central r...

  6. Statistical Bias in Maximum Likelihood Estimators of Item Parameters.

    Science.gov (United States)

    1982-04-01

    [Abstract garbled in the source scan; the recoverable fragments indicate that the report examines the bias in the maximum likelihood estimators of item parameters.]

  7. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  8. Democracy, Autocracy and the Likelihood of International Conflict

    OpenAIRE

    Tangerås, Thomas

    2008-01-01

    This is a game-theoretic analysis of the link between regime type and international conflict. The democratic electorate can credibly punish the leader for bad conflict outcomes, whereas the autocratic selectorate cannot. For the fear of being thrown out of office, democratic leaders are (i) more selective about the wars they initiate and (ii) on average win more of the wars they start. Foreign policy behaviour is found to display strategic complementarities. The likelihood of interstate war, ...

  9. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
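The simulated-gradient idea can be sketched on a toy model. The sketch below is illustrative, not the authors' implementation: it uses a made-up Gaussian simulator and a Kiefer-Wolfowitz-style finite-difference step to push a parameter toward the value whose simulated summary statistic matches the observed one.

```python
import random

random.seed(1)

def simulate_summary(theta, n=200):
    # summary statistic: the mean of n draws from N(theta, 1)
    return sum(random.gauss(theta, 1.0) for _ in range(n)) / n

s_obs = 3.0  # observed summary statistic (hypothetical data)

# Stochastic approximation: step the parameter along a finite-difference
# estimate of the gradient of the squared distance between simulated
# and observed summaries, with decaying step and perturbation sizes.
theta = 0.0
for k in range(1, 301):
    a_k = 1.0 / k            # decaying step size
    c_k = 0.5 / k ** 0.25    # decaying perturbation size
    loss_plus = (simulate_summary(theta + c_k) - s_obs) ** 2
    loss_minus = (simulate_summary(theta - c_k) - s_obs) ** 2
    grad = (loss_plus - loss_minus) / (2 * c_k)
    theta -= a_k * grad
```

Because each gradient estimate only needs a couple of simulations, the iteration never has to sample densely from low-likelihood regions of the parameter space.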

  10. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine, E-mail: adam.bouland@aya.yale.edu, E-mail: richard.easther@yale.edu, E-mail: krosenfeld@cfa.harvard.edu [Department of Physics, Yale University, New Haven CT 06520 (United States)

    2011-05-01

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a ''proof of concept'', and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
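The caching idea is easy to sketch: fit a cheap interpolating polynomial to log-likelihood values gathered early in the run, then answer later queries from the polynomial instead of the expensive function. The one-parameter toy surface and training points below are invented for illustration and are not the InterpMC code.

```python
# A hypothetical "expensive" log-likelihood with one free parameter.
def expensive_loglike(x):
    return -0.5 * (x - 1.0) ** 2 / 0.2  # smooth, Gaussian-shaped surface

# Training set: parameter values an early chain might have visited.
train_x = [-1.0, 0.0, 1.0, 2.0, 3.0]
train_y = [expensive_loglike(x) for x in train_x]

def interpolant(xs, ys):
    # Lagrange interpolating polynomial through the training points
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

cheap_loglike = interpolant(train_x, train_y)

# Later chain steps query the surrogate instead of the true surface.
worst = max(abs(cheap_loglike(x) - expensive_loglike(x))
            for x in [-0.5, 0.25, 0.75, 1.33, 2.6])
```

For a genuinely smooth surface the surrogate is accurate wherever the training points bracket the query, which is exactly the regime a well-converged chain spends its time in.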

  11. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    International Nuclear Information System (INIS)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine

    2011-01-01

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a ''proof of concept'', and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.

  12. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
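The shared computation the authors point to is the multiplicative ML-EM update from emission tomography. A minimal sketch on a made-up two-voxel, three-detector system (the system matrix and intensities are illustrative assumptions, not values from the paper):

```python
# system matrix a[i][j]: probability that an event in voxel j
# is recorded by detector i (each column sums to 1 here)
a = [[0.8, 0.1],
     [0.1, 0.8],
     [0.1, 0.1]]
true_lam = [4.0, 9.0]  # "true" voxel intensities
# noiseless expected detector counts
y = [sum(a[i][j] * true_lam[j] for j in range(2)) for i in range(3)]

lam = [1.0, 1.0]  # initial estimate
for _ in range(500):
    # forward-project the current estimate
    proj = [sum(a[i][j] * lam[j] for j in range(2)) for i in range(3)]
    # multiplicative EM update: back-project the data/model ratio
    lam = [lam[j] * sum(a[i][j] * y[i] / proj[i] for i in range(3))
           / sum(a[i][j] for i in range(3))
           for j in range(2)]
```

The same forward-project/compare/back-project loop is what makes the estimator reusable across image reconstruction, dose reconstruction, and planning-style inverse problems.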

  13. Sleep Deprivation in Young and Healthy Subjects Is More Sensitively Identified by Higher Frequencies of Electrodermal Activity than by Skin Conductance Level Evaluated in the Time Domain

    Directory of Open Access Journals (Sweden)

    Hugo F. Posada-Quintero

    2017-06-01

Full Text Available We analyzed multiple measures of the autonomic nervous system (ANS) based on electrodermal activity (EDA) and heart rate variability (HRV) for young healthy subjects undergoing 24-h sleep deprivation. In this study, we utilized the error awareness test (EAT) every 2 h (13 runs total) to evaluate the deterioration of performance. EAT consists of trials in which the subject is presented words representing colors. Subjects are instructed to press a button (“Go” trials) or withhold the response if the word presented and the color of the word mismatch (“Stroop No-Go” trials) or the screen is repeated (“Repeat No-Go” trials). We measured subjects' (N = 10) reaction time on the “Go” trials, and accuracy on the “Stroop No-Go” and “Repeat No-Go” trials. Simultaneously, changes in EDA and HRV indices were evaluated. Furthermore, the relationship between reactiveness and vigilance measures and indices of sympathetic control based on HRV was analyzed. We found that performance improved to a stable level from 6 through 16 h of deprivation, with a subsequently sustained impairment after 18 h. Indices of higher frequencies of EDA related more to vigilance measures, whereas the lower-frequency index (skin conductance level, SCL) measured the reactiveness of the subject. We conclude that indices of EDA, including those of the higher frequencies, termed TVSymp, EDASymp, and NSSCRs, provide information to better understand the effect of sleep deprivation on subjects' autonomic response and performance.

  14. Do later wake times and increased sleep duration of 12th graders result in more studying, higher grades, and improved SAT/ACT test scores?

    Science.gov (United States)

    Cole, James S

    2016-09-01

The aim of this study was to investigate the relationship of sleep duration, wake time, and hours spent studying with high school grades and performance on the Scholastic Aptitude Test (SAT)/American College Testing (ACT) college entrance exams. Data were collected from 13,071 recently graduated high school seniors who were entering college in the fall of 2014. A column proportions z test with a Bonferroni adjustment was used to analyze proportional differences. Analysis of covariance (ANCOVA) was used to examine mean group differences. Students who woke up prior to 6 a.m. and got less than 8 h of sleep (27 %) were significantly more likely to report studying 11 or more hours per week (30 %), almost double the rate compared to students who got more than 8 h of sleep and woke up the latest (16 %). Post hoc results revealed that students who woke up at 7 a.m. or later reported significantly higher high school grades than all other groups except students who woke up between 6:01 a.m. and 7:00 a.m. and got eight or more hours of sleep. The highest reported SAT/ACT scores were from the group that woke up after 7 a.m. but got less than 8 h sleep (M = 1099.5). Their scores were significantly higher than those of all other groups. This study provides additional evidence that increased sleep and later wake time are associated with increased high school grades. However, this study also found that students who sleep the longest also reported less studying and lower SAT/ACT scores.

  15. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    Science.gov (United States)

    Wu, Yufeng

    2012-03-01

Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.

  16. Comparisons of likelihood and machine learning methods of individual classification

    Science.gov (United States)

    Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.

    2002-01-01

Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin due to the intricacies in development and evaluation of artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
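The likelihood-based assignment test being compared can be sketched directly: under Hardy-Weinberg assumptions, score an individual's multilocus genotype by its log-likelihood given each candidate population's allele frequencies, and assign to the population with the highest score. The allele frequencies and population names below are invented for illustration.

```python
import math

# hypothetical allele frequencies at two loci in two populations:
# freqs[pop][locus][allele] = frequency
freqs = {
    "popA": [{"a1": 0.9, "a2": 0.1}, {"b1": 0.8, "b2": 0.2}],
    "popB": [{"a1": 0.2, "a2": 0.8}, {"b1": 0.3, "b2": 0.7}],
}

def assign(genotype):
    # genotype: one (allele, allele) pair per locus
    best, best_ll = None, float("-inf")
    for pop, loci in freqs.items():
        ll = 0.0
        for (x, y), f in zip(genotype, loci):
            if x == y:
                ll += math.log(f[x] ** 2)        # homozygote under HWE
            else:
                ll += math.log(2 * f[x] * f[y])  # heterozygote under HWE
        if ll > best_ll:
            best, best_ll = pop, ll
    return best
```

The machine-learning alternatives replace this explicit probability model with a classifier trained on reference genotypes, which is where the "learning" of population-specific structure comes from.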

  17. Higher Education

    African Journals Online (AJOL)

Kunle Amuwo: Higher Education Transformation: A Paradigm Shift in South Africa? ... ty of such skills, especially at the middle management levels within the higher ... istics and virtues of differentiation and diversity. .... may be forced to close shop for lack of capacity to attract ..... necessarily lead to racial and gender equity ...

  18. Evaluation of a reduced centrifugation time and higher centrifugal force on various general chemistry and immunochemistry analytes in plasma and serum.

    Science.gov (United States)

    Møller, Mette F; Søndergaard, Tove R; Kristensen, Helle T; Münster, Anna-Marie B

    2017-09-01

Background Centrifugation of blood samples is an essential preanalytical step in the clinical biochemistry laboratory. Centrifugation settings are often altered to optimize sample flow and turnaround time. Few studies have addressed the effect of altering centrifugation settings on analytical quality, and almost all studies have been done using collection tubes with gel separator. Methods In this study, we compared a centrifugation time of 5 min at 3000 × g to a standard protocol of 10 min at 2200 × g. Nine selected general chemistry and immunochemistry analytes and interference indices were studied in lithium heparin plasma tubes and serum tubes without gel separator. Results were evaluated using mean bias, difference plots and coefficient of variation, compared with maximum allowable bias and coefficient of variation used in laboratory routine quality control. Results For all analytes except lactate dehydrogenase, the results were within the predefined acceptance criteria, indicating that the analytical quality was not compromised. Lactate dehydrogenase showed higher values after centrifugation for 5 min at 3000 × g; mean bias was 6.3 ± 2.2% and the coefficient of variation was 5%. Conclusions We found that a centrifugation protocol of 5 min at 3000 × g can be used for the general chemistry and immunochemistry analytes studied, with the possible exception of lactate dehydrogenase, which requires further assessment.

  19. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log-likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
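For a Poisson imaging model with a known point spread function, the maximum-likelihood update is the multiplicative Richardson-Lucy iteration, the building block that regularized multi-frame schemes like this one extend. A 1-D sketch with an invented three-tap PSF (not the paper's estimated PSF or its regularizer):

```python
psf = [0.25, 0.5, 0.25]  # illustrative blur kernel, sums to 1

def convolve(x, k):
    # truncated (zero-boundary) convolution with a centered kernel
    half = len(k) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, kj in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(x):
                s += kj * x[idx]
        out.append(s)
    return out

true_img = [1.0, 5.0, 1.0]
observed = convolve(true_img, psf)            # noiseless blurred data
sens = convolve([1.0] * len(true_img), psf)   # per-pixel sensitivity

est = [1.0] * len(true_img)
for _ in range(2000):
    blurred = convolve(est, psf)
    ratio = [o / b for o, b in zip(observed, blurred)]
    # back-project the ratio (the PSF is symmetric, so correlation
    # with the flipped kernel equals convolution with the kernel)
    corr = convolve(ratio, psf)
    est = [e * c / s for e, c, s in zip(est, corr, sens)]
```

Each pass multiplies the estimate by a back-projected data/model ratio, which keeps the estimate nonnegative and monotonically increases the Poisson likelihood.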

  20. The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction.

    Directory of Open Access Journals (Sweden)

    Ross S Williamson

    2015-04-01

Full Text Available Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
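The Poisson log-likelihood at the heart of the LNP equivalence is easy to write down. The sketch below (with an invented two-dimensional filter, an exponential nonlinearity, and simulated spike counts; none of it is the paper's data) shows the per-bin objective that a likelihood-based filter search would maximize.

```python
import math
import random

random.seed(0)

w_true = [1.0, -0.5]  # illustrative "true" linear filter
stim = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(2000)]
dt = 0.05             # time-bin width

def rate(w, x):
    # linear filter followed by an exponential nonlinearity
    return math.exp(sum(wi * xi for wi, xi in zip(w, x)))

def draw_poisson(lam):
    # Knuth's method; adequate for the small per-bin rates used here
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

# simulate spike counts from the LNP model
spikes = [draw_poisson(rate(w_true, x) * dt) for x in stim]

def loglike(w):
    # Poisson log-likelihood (dropping the constant log y! terms)
    ll = 0.0
    for x, y in zip(stim, spikes):
        r = rate(w, x) * dt
        ll += y * math.log(r) - r
    return ll
```

Normalizing this log-likelihood by the spike count is what recovers the empirical single-spike information in the Poisson case; when spiking is not Poisson, maximizing it no longer maximizes information, which is the paper's point.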

  1. On the performance of social network and likelihood-based expert weighting schemes

    International Nuclear Information System (INIS)

    Cooke, Roger M.; ElSaadany, Susie; Huang Xinzheng

    2008-01-01

Using expert judgment data from the TU Delft's expert judgment database, we compare the performance of different weighting schemes, namely equal weighting, performance-based weighting from the classical model [Cooke RM. Experts in uncertainty. Oxford: Oxford University Press; 1991.], social network (SN) weighting and likelihood weighting. The picture that emerges with regard to SN weights is rather mixed. SN theory does not provide an alternative to performance-based combination of expert judgments, since the statistical accuracy of the SN decision maker is sometimes unacceptably low. On the other hand, it does outperform equal weighting in the majority of cases. The results here, though not overwhelmingly positive, do nonetheless motivate further research into social interaction methods for nominating and weighting experts. Indeed, a full expert judgment study with performance measurement requires an investment in time and effort, with a view to securing external validation. If high confidence in a comparable level of validation can be obtained by less intensive methods, this would be very welcome, and would facilitate the application of structured expert judgment in situations where the resources for a full study are not available. Likelihood weights are just as resource intensive as performance-based weights, and the evidence presented here suggests that they are inferior to performance-based weights with regard to those scoring variables which are optimized in performance weights (calibration and information). Perhaps surprisingly, they are also inferior with regard to likelihood. Their use is further discouraged by the fact that they constitute a strongly improper scoring rule.

  2. Early lactate clearance in septic patients with elevated lactate levels admitted from the emergency department to intensive care: time to aim higher?

    Science.gov (United States)

    Walker, Craig A; Griffith, David M; Gray, Alasdair J; Datta, Deepankar; Hay, Alasdair W

    2013-10-01

    Septic patients with hyperlactatemia have increased mortality rates, irrespective of hemodynamic and oxygen-derived variables. The aims of the study are the following: (1) to ascertain whether lactate clearance (LC) (percentage change in lactate over unit time) predicts mortality in septic patients admitted to intensive care directly from the emergency department and (2) to calculate the optimal "cut-off" value for mortality prediction. Three-year retrospective observational study of consecutive patients with severe sepsis and septic shock admitted to intensive care from the emergency department of a tertiary UK hospital. We calculated 6-hour LC, performed receiver operating characteristic analyses to calculate optimal cut-off values for initial lactate and LC, dichotomized patients according to the LC cut-off, and calculated hazard ratios using a Cox proportional hazards model. One hundred six patients were identified; 78, after exclusions. Lactate clearance was independently associated with 30-day mortality (P<.04); optimal cut-off, 36%. Mortality rates were 61.1% and 10.7% for patients with 6-hour LC 36% or less and greater than 36%, respectively. Hazard ratio for death with LC 36% or less was 7.33 (95% confidence interval, 2.17-24.73; P<.001). Six-hour LC was independently associated with mortality, and the optimal cut-off value was 36%, significantly higher than previously reported. We would support further research investigating this higher LC as a distinct resuscitation end point in patients with severe sepsis and septic shock. Copyright © 2013 Elsevier Inc. All rights reserved.
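The clearance measure is simply the percentage fall from the initial lactate value over the measurement window, here dichotomized at the study's 36% cut-off. A sketch (the function names are ours):

```python
def lactate_clearance(initial, later):
    """Percentage fall in lactate across the measurement interval
    (here, admission to 6 hours); positive values mean lactate fell."""
    return (initial - later) / initial * 100.0

def above_cutoff(initial, later, cutoff=36.0):
    # dichotomize patients as in the study: 6-hour clearance > 36%
    return lactate_clearance(initial, later) > cutoff

lc = lactate_clearance(4.0, 2.0)  # 4.0 -> 2.0 mmol/L over 6 h: 50% clearance
```

With the study's figures, a patient clearing more than 36% in six hours falls in the low-mortality group (10.7%) rather than the high-mortality group (61.1%).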

  3. Higher Education

    Science.gov (United States)


  4. Evidence Based Medicine; Positive and Negative Likelihood Ratios of Diagnostic Tests

    Directory of Open Access Journals (Sweden)

    Alireza Baratloo

    2015-10-01

Full Text Available In the previous two parts of this educational manuscript series in Emergency, we explained some screening characteristics of diagnostic tests, including accuracy, sensitivity, specificity, and positive and negative predictive values. In the 3rd part we aim to explain the positive and negative likelihood ratio (LR) as one of the most reliable performance measures of a diagnostic test. To better understand this characteristic of a test, it is first necessary to fully understand the concepts of sensitivity and specificity, so we strongly advise you to review the 1st part of this series again. In short, the likelihood ratios are about the percentage of people with and without a disease but having the same test result. The prevalence of a disease can directly influence the screening characteristics of a diagnostic test, especially its sensitivity and specificity; the LR was developed to eliminate this effect. The pre-test odds of a disease multiplied by the positive or negative LR give the post-test odds, from which the post-test probability follows. Therefore, LR is the most important characteristic of a test for ruling a diagnosis in or out. A positive likelihood ratio > 1 means a higher probability of the disease being present in a patient with a positive test. The further from 1, either higher or lower, the stronger the evidence to rule the disease in or out, respectively. It is obvious that tests with LR close to one are less practical; an LR further from one has more value for application in medicine. Usually, tests with LR < 0.1 or LR > 10 are considered suitable for application in routine practice.
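The relationships described above are one-liners: LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity, and post-test odds = pre-test odds × LR. A sketch (the sensitivity, specificity, and pre-test probability are invented for illustration):

```python
def likelihood_ratios(sensitivity, specificity):
    # LR+ : how much a positive result raises the odds of disease
    # LR- : how much a negative result lowers them
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    # convert probability -> odds, apply the LR, convert back
    odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = odds * lr
    return post_odds / (1.0 + post_odds)

# hypothetical test: sensitivity 0.90, specificity 0.85
lr_pos, lr_neg = likelihood_ratios(0.90, 0.85)
p = post_test_probability(0.30, lr_pos)  # positive result at 30% pre-test probability
```

Here LR+ is 6.0, so a positive result moves a 30% pre-test probability up to 72%; an LR+ above 10 or an LR- below 0.1 would move the needle far more decisively.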

  5. Age-specific incidence of A/H1N1 2009 influenza infection in England from sequential antibody prevalence data using likelihood-based estimation.

    Directory of Open Access Journals (Sweden)

    Marc Baguelin

    2011-02-01

Full Text Available Estimating the age-specific incidence of an emerging pathogen is essential for understanding its severity and transmission dynamics. This paper describes a statistical method that uses likelihoods to estimate incidence from sequential serological data. The method requires information on seroconversion intervals and allows integration of information on the temporal distribution of cases from clinical surveillance. Among a family of candidate incidences, a likelihood function is derived by reconstructing the change in seroprevalence from seroconversion following infection and comparing it with the observed sequence of positivity among the samples. This method is applied to derive the cumulative and weekly incidence of A/H1N1 pandemic influenza in England during the second wave using sera taken between September 2009 and February 2010 in four age groups (1-4, 5-14, 15-24, and 25-44 years). The highest cumulative incidence was in 5-14 year olds (59%, 95% credible interval (CI): 52%, 68%), followed by 1-4 year olds (49%, 95% CI: 38%, 61%), rates 20 and 40 times higher, respectively, than estimated from clinical surveillance. The method provides a more accurate and continuous measure of incidence than achieved by comparing prevalence in samples grouped by time period.

  6. Australian food life style segments and elaboration likelihood differences

    DEFF Research Database (Denmark)

    Brunsø, Karen; Reid, Mike

As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia in order to 1) determine Australian Life Style Segments and compare these with their European counterparts, and 2) explore differences in elaboration likelihood among the Australian segments, e.g. consumers' interest and motivation to perceive product-related communication. The results provide new...

  7. Maximum-likelihood method for numerical inversion of Mellin transform

    International Nuclear Information System (INIS)

    Iqbal, M.

    1997-01-01

    A method is described for inverting the Mellin transform which uses an expansion in Laguerre polynomials and converts the Mellin transform to Laplace transform, then the maximum-likelihood regularization method is used to recover the original function of the Mellin transform. The performance of the method is illustrated by the inversion of the test functions available in the literature (J. Inst. Math. Appl., 20 (1977) 73; Math. Comput., 53 (1989) 589). Effectiveness of the method is shown by results obtained through demonstration by means of tables and diagrams
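The conversion step rests on a standard change of variables: substituting \(x = e^{-t}\) turns the Mellin transform into a two-sided Laplace transform, which the Laguerre-expansion and maximum-likelihood regularization machinery then targets. Under the usual convergence-strip assumptions:

```latex
M[f](s) = \int_0^\infty x^{s-1} f(x)\,dx
        \;\overset{x = e^{-t}}{=}\;
        \int_{-\infty}^{\infty} e^{-st}\, f\!\left(e^{-t}\right) dt
        = \mathcal{L}_{\text{two-sided}}\!\left[\,f(e^{-t})\,\right](s),
```

so inverting the Mellin transform of \(f\) is equivalent to inverting the Laplace transform of \(g(t) = f(e^{-t})\).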

  8. How to Improve the Likelihood of CDM Approval?

    DEFF Research Database (Denmark)

    Brandt, Urs Steiner; Svendsen, Gert Tinggaard

    2014-01-01

How can the likelihood of Clean Development Mechanism (CDM) approval be improved in the face of institutional shortcomings? To answer this question, we focus on the three institutional shortcomings of income sharing, risk sharing and corruption prevention concerning afforestation/reforestation (A/R). Furthermore, three main stakeholders are identified, namely investors, governments and agents, in a principal-agent model regarding monitoring and enforcement capacity. Developing countries, for example in West Africa, have, despite huge potential, not yet been integrated in A/R CDM projects. Remote sensing, however...

  9. Elemental composition of cosmic rays using a maximum likelihood method

    International Nuclear Information System (INIS)

    Ruddick, K.

    1996-01-01

    We present a progress report on our attempts to determine the composition of cosmic rays in the knee region of the energy spectrum. We have used three different devices to measure properties of the extensive air showers produced by primary cosmic rays: the Soudan 2 underground detector measures the muon flux deep underground, a proportional tube array samples shower density at the surface of the earth, and a Cherenkov array observes light produced high in the atmosphere. We have begun maximum likelihood fits to these measurements with the hope of determining the nuclear mass number A on an event by event basis. (orig.)

  10. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  11. Process criticality accident likelihoods, consequences and emergency planning

    International Nuclear Information System (INIS)

    McLaughlin, T.P.

    1992-01-01

    Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with national and international standards and regulations which require an evaluation of the net benefit of a criticality accident alarm system is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. (Author)

  12. Likelihood Estimation of Gamma Ray Bursts Duration Distribution

    OpenAIRE

    Horvath, Istvan

    2005-01-01

    Two classes of Gamma Ray Bursts have been identified so far, characterized by T90 durations shorter and longer than approximately 2 seconds. It was shown that the BATSE 3B data allow a good fit with three Gaussian distributions in log T90. In the same volume of ApJ, another paper suggested that a third class of GRBs may exist. Using the full BATSE catalog, here we present the maximum likelihood estimation, which gives a 0.5% probability of there being only two subclasses. The MC simulation co...

  13. Process criticality accident likelihoods, consequences, and emergency planning

    Energy Technology Data Exchange (ETDEWEB)

    McLaughlin, T.P.

    1991-01-01

    Evaluation of criticality accident risks in the processing of significant quantities of fissile materials is both complex and subjective, largely due to the lack of accident statistics. Thus, complying with standards such as ISO 7753, which mandates an evaluation of the need for an alarm system, is also subjective. A review of guidance found in the literature on potential accident magnitudes is presented for different material forms and arrangements. Reasoned arguments are also presented concerning accident prevention and accident likelihoods for these material forms and arrangements. 13 refs., 1 fig., 1 tab.

  14. Improved Likelihood Function in Particle-based IR Eye Tracking

    DEFF Research Database (Denmark)

    Satria, R.; Sorensen, J.; Hammoud, R.

    2005-01-01

    In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using a cascaded classifier trained with AdaBoost and adapted to IR eye images. Experiments show good...

  15. Estimating likelihood of future crashes for crash-prone drivers

    OpenAIRE

    Subasish Das; Xiaoduan Sun; Fan Wang; Charles Leboeuf

    2015-01-01

    At-fault crash-prone drivers are usually considered the high-risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by at-fault crash-prone drivers, who represent only 5% of the total licensed drivers in the state. This research conducted an exploratory data analysis based on driver faultiness and proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for the a...

  16. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family, the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast, there is a 'primitive...' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...

  17. Maximum Likelihood Joint Tracking and Association in Strong Clutter

    Directory of Open Access Journals (Sweden)

    Leonid I. Perlovsky

    2013-01-01

    Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed for the case of strong clutter in radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.

  18. An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search

    NARCIS (Netherlands)

    Bergboer, N.H.; Verdult, V.; Verhaegen, M.H.G.

    2002-01-01

    We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting...

  19. Likelihood inference for a fractionally cointegrated vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Ørregård Nielsen, Morten

    2012-01-01

    We consider model based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors ß for which ß'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0<b≤d, but conduct inference when the true values satisfy b0≠1/2 and d0-b0<1/2. ... process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic this implies uniform convergence in probability of the conditional likelihood function. If the true value b0>1/2, we prove that the limit distribution of (ß...

  20. Likelihood-Based Inference of B Cell Clonal Families.

    Directory of Open Access Journals (Sweden)

    Duncan K Ralph

    2016-10-01

    Full Text Available The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called "rearrangement" forming progenitor B cells, then a Darwinian process of lineage diversification and selection called "affinity maturation." The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous, or may consist of only a single member. As a step to understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e. to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem "clonal family inference." In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov Model (multi-HMM) framework for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets.

  1. Menyoal Elaboration Likelihood Model (ELM dan Teori Retorika

    Directory of Open Access Journals (Sweden)

    Yudi Perbawaningsih

    2012-06-01

    Full Text Available Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the theory of the Elaboration Likelihood Model (ELM). This study elaborates these theories in a Public Lecture series used to persuade students in choosing their concentration of study, based on how they process information. Using a survey method, the result shows that in terms of persuasion effectiveness it is not quite relevant to separate the message and its source: the two merge, so that the quality of the source is determined by the quality of the message, and vice versa. Separating the persuasion process into the two routes described in ELM theory would therefore not be relevant.

  2. Corporate brand extensions based on the purchase likelihood: governance implications

    Directory of Open Access Journals (Sweden)

    Spyridon Goumas

    2018-03-01

    Full Text Available This paper examines the purchase likelihood of hypothetical service brand extensions from product companies focusing on consumer electronics, based on sector categorization and perceptions of fit between the existing product category and the image of the company. Prior research has recognized that levels of brand knowledge ease the transference of associations and affect to the new products. Similarity to the existing products of the parent company and perceived image also influence the success of brand extensions. However, sector categorization may interfere with this relationship. The purpose of this study is to examine Greek consumers’ attitudes towards hypothetical brand extensions, and how these are affected by consumers’ existing knowledge about the brand, sector categorization and perceptions of image and category fit of cross-sector extensions. This aim is examined in the context of technological categories, where less-known companies exhibited significance in purchase likelihood and, contrary to the existing literature, service companies did not perform as positively as expected. Additional insights to the existing literature about sector categorization are provided. The effect of both image and category fit is also examined and predictions regarding the effect of each are made.

  3. Gauging the likelihood of stable cavitation from ultrasound contrast agents.

    Science.gov (United States)

    Bader, Kenneth B; Holland, Christy K

    2013-01-07

    The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form I(CAV) = P(r)/f (where P(r) is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
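The two indices compared in this abstract are simple enough to compute directly. A minimal sketch (function names and the example exposure values are our own; only the formulas I_CAV = P_r/f and MI = P_r/sqrt(f) come from the literature):

```python
import math

def mechanical_index(p_r_mpa: float, f_mhz: float) -> float:
    """MI = P_r / sqrt(f): gauges the likelihood of inertial cavitation."""
    return p_r_mpa / math.sqrt(f_mhz)

def cavitation_index(p_r_mpa: float, f_mhz: float) -> float:
    """I_CAV = P_r / f: linear in frequency, matching the subharmonic and
    UCA-rupture thresholds described in the abstract."""
    return p_r_mpa / f_mhz

# Hypothetical exposure: 1.2 MPa peak rarefactional pressure at 3 MHz.
print(mechanical_index(1.2, 3.0))
print(cavitation_index(1.2, 3.0))
```

Because I_CAV divides by f rather than sqrt(f), it falls faster with frequency than MI for the same pressure, reflecting the linear frequency dependence of both thresholds.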

  4. Safe semi-supervised learning based on weighted likelihood.

    Science.gov (United States)

    Kawakita, Masanori; Takeuchi, Jun'ichi

    2014-05-01

    We are interested in developing a safe semi-supervised learning that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all of the previous semi-supervised methods require additional assumptions (not only unlabeled data) to make improvements on supervised learning. If such assumptions are not met, then the methods possibly perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation, i.e., classification, discrete covariates, n′→∞ and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wide range of situations as long as n≤n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′

  5. Maximum likelihood-based analysis of photon arrival trajectories in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Waligorska, Marta [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland); Molski, Andrzej, E-mail: amolski@amu.edu.pl [Adam Mickiewicz University, Faculty of Chemistry, Grunwaldzka 6, 60-780 Poznan (Poland)

    2012-07-25

    Highlights: • We study model selection and parameter recovery from single-molecule FRET experiments. • We examine the maximum likelihood-based analysis of two-color photon trajectories. • The number of observed photons determines the performance of the method. • For long trajectories, one can extract mean dwell times that are comparable to inter-photon times. -- Abstract: When two fluorophores (donor and acceptor) are attached to an immobilized biomolecule, anti-correlated fluctuations of the donor and acceptor fluorescence caused by Foerster resonance energy transfer (FRET) report on the conformational kinetics of the molecule. Here we assess the maximum likelihood-based analysis of donor and acceptor photon arrival trajectories as a method for extracting the conformational kinetics. Using computer generated data we quantify the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We find that the number of observed photons is the key parameter determining parameter estimation and model selection. For long trajectories, one can extract mean dwell times that are comparable to inter-photon times.

  6. Higher Education.

    Science.gov (United States)

    Hendrickson, Robert M.

    This chapter reports 1982 cases involving aspects of higher education. Interesting cases noted dealt with the federal government's authority to regulate state employees' retirement and raised the questions of whether Title IX covers employment, whether financial aid makes a college a program under Title IX, and whether sex segregated mortality…

  7. A Fast Algorithm for Maximum Likelihood Estimation of Harmonic Chirp Parameters

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The analysis of (approximately) periodic signals is an important element in numerous applications. One generalization of standard periodic signals often occurring in practice are harmonic chirp signals, where the instantaneous frequency increases/decreases linearly as a function of time. A statistically efficient estimator for extracting the parameters of the harmonic chirp model in additive white Gaussian noise is the maximum likelihood (ML) estimator, which recently has been demonstrated to be robust to noise and accurate --- even when the model order is unknown. The main drawback of the ML...

  8. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    Full Text Available This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
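The general idea behind ML inference of environmental conditions from assemblage composition can be sketched without the package itself. The toy below is a hypothetical stand-in: Gaussian taxon response curves and all parameter values are assumptions (the real bio.infer package fits taxon-environment relationships from calibration data), but the inference step, maximizing a Bernoulli likelihood over candidate environmental values, is the same in spirit:

```python
import math

# Assumed taxon response curves: P(occurrence | x) peaks at each taxon's
# environmental optimum (purely illustrative parameters).
TAXA = {
    "taxonA": {"optimum": 2.0, "tolerance": 1.0},
    "taxonB": {"optimum": 5.0, "tolerance": 1.5},
}

def occurrence_prob(taxon, x):
    p = TAXA[taxon]
    return 0.9 * math.exp(-0.5 * ((x - p["optimum"]) / p["tolerance"]) ** 2)

def log_likelihood(present, x):
    """Bernoulli log-likelihood of the observed presence/absence pattern
    if the environmental condition were x."""
    ll = 0.0
    for taxon in TAXA:
        p = occurrence_prob(taxon, x)
        ll += math.log(p) if taxon in present else math.log(1.0 - p)
    return ll

def ml_estimate(present, grid=None):
    """Grid-search ML estimate of the environmental condition."""
    grid = grid or [i / 100.0 for i in range(0, 801)]
    return max(grid, key=lambda x: log_likelihood(present, x))

# A sample containing only taxonA is inferred near taxonA's optimum.
print(ml_estimate({"taxonA"}))
```

The absence of taxonB pulls the estimate slightly below taxonA's optimum, which is exactly the extra information ML methods exploit over weighted averaging.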

  9. Factors Associated With the Likelihood of Hospitalization Following Emergency Department Visits for Behavioral Health Conditions.

    Science.gov (United States)

    Hamilton, Jane E; Desai, Pratikkumar V; Hoot, Nathan R; Gearing, Robin E; Jeong, Shin; Meyer, Thomas D; Soares, Jair C; Begley, Charles E

    2016-11-01

    Behavioral health-related emergency department (ED) visits have been linked with ED overcrowding, an increased demand on limited resources, and a longer length of stay (LOS) due in part to patients being admitted to the hospital but waiting for an inpatient bed. This study examines factors associated with the likelihood of hospital admission for ED patients with behavioral health conditions at 16 hospital-based EDs in a large urban area in the southern United States. Using Andersen's Behavioral Model of Health Service Use for guidance, the study examined the relationship between predisposing (characteristics of the individual, i.e., age, sex, race/ethnicity), enabling (system or structural factors affecting healthcare access), and need (clinical) factors and the likelihood of hospitalization following ED visits for behavioral health conditions (n = 28,716 ED visits). In the adjusted analysis, a logistic fixed-effects model with blockwise entry was used to estimate the relative importance of predisposing, enabling, and need variables added separately as blocks while controlling for variation in unobserved hospital-specific practices across hospitals and time in years. Significant predisposing factors associated with an increased likelihood of hospitalization following an ED visit included increasing age, while African American race was associated with a lower likelihood of hospitalization. Among enabling factors, arrival by emergency transport and a longer ED LOS were associated with a greater likelihood of hospitalization while being uninsured and the availability of community-based behavioral health services within 5 miles of the ED were associated with lower odds. 
Among need factors, having a discharge diagnosis of schizophrenia/psychotic spectrum disorder, an affective disorder, a personality disorder, dementia, or an impulse control disorder, as well as secondary diagnoses of suicidal ideation and/or suicidal behavior, increased the likelihood of hospitalization.

  10. Improved EDELWEISS-III sensitivity for low-mass WIMPs using a profile likelihood approach

    Energy Technology Data Exchange (ETDEWEB)

    Hehn, L. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Armengaud, E.; Boissiere, T. de; Gros, M.; Navick, X.F.; Nones, C.; Paul, B. [CEA Saclay, DSM/IRFU, Gif-sur-Yvette Cedex (France); Arnaud, Q. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Queen' s University, Kingston (Canada); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Benoit, A.; Camus, P. [Institut Neel, CNRS/UJF, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Giuliani, A.; Le-Sueur, H.; Marnieros, S.; Olivieri, E.; Poda, D. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bluemer, J. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Eitel, K.; Kozlov, V.; Siebenborn, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Foerster, N.; Heuermann, G.; Scorza, S. [Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Route de Nozay, Marcoussis (France); Kefelian, C. [Univ Lyon, Universite Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique Nucleaire de Lyon, Lyon (France); Karlsruher Institut fuer Technologie, Institut fuer Experimentelle Kernphysik, Karlsruhe (Germany); Kleifges, M.; Tcherniakhovski, D.; Weber, M. 
[Karlsruher Institut fuer Technologie, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Pari, P. [CEA Saclay, DSM/IRAMIS, Gif-sur-Yvette (France); Piro, M.C. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Rensselaer Polytechnic Institute, Troy, NY (United States); Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Schmidt, B. [Karlsruher Institut fuer Technologie, Institut fuer Kernphysik, Karlsruhe (Germany); Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2016-10-15

    We report on a dark matter search for a Weakly Interacting Massive Particle (WIMP) in the mass range m{sub χ} element of [4, 30] GeV/c{sup 2} with the EDELWEISS-III experiment. A 2D profile likelihood analysis is performed on data from eight selected detectors with the lowest energy thresholds leading to a combined fiducial exposure of 496 kg-days. External backgrounds from γ- and β-radiation, recoils from {sup 206}Pb and neutrons as well as detector intrinsic backgrounds were modelled from data outside the region of interest and constrained in the analysis. The basic data selection and most of the background models are the same as those used in a previously published analysis based on boosted decision trees (BDT) [1]. For the likelihood approach applied in the analysis presented here, a larger signal efficiency and a subtraction of the expected background lead to a higher sensitivity, especially for the lowest WIMP masses probed. No statistically significant signal was found and upper limits on the spin-independent WIMP-nucleon scattering cross section can be set with a hypothesis test based on the profile likelihood test statistics. The 90 % C.L. exclusion limit set for WIMPs with m{sub χ} = 4 GeV/c{sup 2} is 1.6 x 10{sup -39} cm{sup 2}, which is an improvement of a factor of seven with respect to the BDT-based analysis. For WIMP masses above 15 GeV/c{sup 2} the exclusion limits found with both analyses are in good agreement. (orig.)
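The hypothesis test behind such exclusion limits can be caricatured in a single-bin Poisson counting model. This is a hypothetical simplification (the actual analysis profiles many nuisance parameters over multiple background components): scan the signal strength upward until the likelihood ratio statistic crosses the one-sided 90% C.L. threshold:

```python
import math

def poisson_logpmf(n: int, lam: float) -> float:
    """Log of the Poisson pmf, log P(n | lam)."""
    return n * math.log(lam) - lam - math.lgamma(n + 1)

def upper_limit(n_obs: int, background: float, q_crit: float = 1.642) -> float:
    """90% C.L. one-sided upper limit on signal strength mu via the
    likelihood ratio test (q_crit = z_{0.90}^2 for a one-sided test).
    Single bin, background assumed known: a toy, not the EDELWEISS fit."""
    mu_hat = max(0.0, n_obs - background)     # ML estimate, bounded at 0
    ll_hat = poisson_logpmf(n_obs, mu_hat + background)
    mu = mu_hat
    while True:
        mu += 0.001                            # coarse upward scan
        q = 2.0 * (ll_hat - poisson_logpmf(n_obs, mu + background))
        if q >= q_crit:
            return mu

# Toy experiment: 3 events observed over an expected background of 1.
print(upper_limit(3, 1.0))
```

Subtracting an expected background raises sensitivity exactly as the abstract describes: for fixed n_obs, a larger known background yields a smaller allowed signal.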

  11. From Lean Times to Enrollment Declines: The Governor's Commission on the Future of Higher Education in Michigan. ASHE Annual Meeting Paper.

    Science.gov (United States)

    Widmayer, Patricia

    Conditions in higher education in Michigan and the role of the Governor's Commission on the Future of Higher Education in Michigan are highlighted. The average college tuition rate in Michigan is the highest in the nation, and a critical maintenance and equipment problem exists. The Commission is composed of knowledgeable persons without vested…

  12. Preliminary Estimates of 1972-73 Full-Time Instructional Faculty in Institutions of Higher Education. Bulletin. Advanced Statistics for Management. No. 14, March 1, 1973.

    Science.gov (United States)

    National Center for Educational Statistics (DHEW/OE), Washington, DC.

    In response to needs expressed by the community of higher education institutions, the National Center for Educational Statistics has produced early estimates of a selected group of mean salaries of instructional faculty in institutions of higher education in 1972-73. The number and salaries of male and female instructional staff by rank are of…

  13. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    Full Text Available ... continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  14. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
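The positivity property claimed here is characteristic of multiplicative ML updates for Poisson data. A minimal sketch of plain ML-EM unfolding (the paper's maximum-entropy prior is omitted, and the response matrix and counts below are toy values, not a neutron response):

```python
# ML-EM (Richardson-Lucy style) unfolding for Poisson counting data.
def mlem_unfold(response, counts, iterations=200):
    """response[i][j]: sensitivity of detector channel i to spectrum bin j."""
    n_det, n_bin = len(response), len(response[0])
    spectrum = [1.0] * n_bin   # positive initial guess stays positive
    for _ in range(iterations):
        # Forward-fold the current estimate into expected counts.
        expected = [sum(response[i][j] * spectrum[j] for j in range(n_bin))
                    for i in range(n_det)]
        # Multiplicative update: preserves positivity by construction.
        for j in range(n_bin):
            norm = sum(response[i][j] for i in range(n_det))
            ratio = sum(response[i][j] * counts[i] / expected[i]
                        for i in range(n_det))
            spectrum[j] *= ratio / norm
    return spectrum

# Toy 2-channel, 2-bin problem folded from a known spectrum [3, 1].
R = [[1.0, 0.2], [0.3, 1.0]]
true = [3.0, 1.0]
y = [sum(R[i][j] * true[j] for j in range(2)) for i in range(2)]
print(mlem_unfold(R, y))  # converges toward [3, 1]
```

The update leaves the exact solution fixed (the ratio equals the norm there), and every iterate remains positive over the whole range, which is the property the abstract emphasizes.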

  15. Preliminary attempt on maximum likelihood tomosynthesis reconstruction of DEI data

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Zhang Li; Kang Kejun; Chen Zhiqiang; Zhu Peiping

    2009-01-01

    Tomosynthesis is a three-dimensional reconstruction method that can remove the effect of superimposition with limited-angle projections. It is especially promising in mammography, where radiation dose is a concern. In this paper, we propose a maximum likelihood tomosynthesis reconstruction algorithm (ML-TS) for the apparent absorption data of diffraction enhanced imaging (DEI). The motivation of this contribution is to develop a tomosynthesis algorithm for low-dose or noisy circumstances and bring DEI closer to clinical application. The theoretical statistical models of DEI data in physics are analyzed and the proposed algorithm is validated with experimental data from the Beijing Synchrotron Radiation Facility (BSRF). The results of ML-TS have better contrast than those of the well-known 'shift-and-add' and FBP algorithms. (authors)

  16. Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets

    KAUST Repository

    Litvinenko, Alexander

    2017-09-03

    We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
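For orientation, the dense O(n³) computation that the H-matrix machinery accelerates looks as follows. This is a hypothetical 1-D sketch: an exponential covariance model with the variance held fixed and only the covariance length estimated by grid search (the paper estimates variance, smoothness and length jointly, with H-matrix arithmetic replacing the dense solve and log-determinant):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 60))        # 1-D measurement locations
dist = np.abs(x[:, None] - x[None, :])         # pairwise distance matrix

def cov(variance, length):
    """Exponential covariance model (an assumed, simple special case)."""
    return variance * np.exp(-dist / length)

def log_likelihood(z, variance, length):
    """Joint Gaussian log-likelihood; the solve and slogdet are the
    O(n^3) steps that H-matrices reduce to log-linear cost."""
    C = cov(variance, length) + 1e-10 * np.eye(len(z))
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, z)
    return -0.5 * (logdet + z @ alpha + len(z) * np.log(2.0 * np.pi))

# Simulate data with a known covariance length, then estimate it.
z = np.linalg.cholesky(cov(1.0, 2.0) + 1e-10 * np.eye(60)) @ rng.standard_normal(60)
lengths = np.linspace(0.5, 5.0, 10)
best = max(lengths, key=lambda l: log_likelihood(z, 1.0, l))
print(best)
```

Each likelihood evaluation here costs a dense Cholesky-type factorization; the H-matrix format replaces C with a hierarchical low-rank approximation so the same maximization scales to very large n.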

  17. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to result in a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
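The two-stage fusion can be caricatured in a few lines: stage one computes per-genre log-likelihoods from separate feature models, stage two sums them and takes the argmax. Everything below (the genres, the scalar Gaussian feature models, and all numbers) is invented for illustration; the paper uses GMMs/HMMs over timbre, rhythm and temporal-variation features:

```python
import math

GENRES = ["rock", "jazz", "classical"]

# Assumed per-genre (mean, std) parameters for two scalar feature streams.
MODELS = {
    "timbre": {"rock": (0.8, 0.2), "jazz": (0.5, 0.2), "classical": (0.2, 0.2)},
    "rhythm": {"rock": (0.7, 0.3), "jazz": (0.6, 0.3), "classical": (0.1, 0.3)},
}

def gaussian_loglik(value, mean, std):
    """Univariate Gaussian log-likelihood (stage-one soft score)."""
    return (-0.5 * ((value - mean) / std) ** 2
            - math.log(std * math.sqrt(2.0 * math.pi)))

def classify(features):
    """Stage two: sum the per-feature log-likelihoods, then arg-max."""
    fused = {
        genre: sum(gaussian_loglik(features[name], *MODELS[name][genre])
                   for name in MODELS)
        for genre in GENRES
    }
    return max(fused, key=fused.get)

print(classify({"timbre": 0.75, "rhythm": 0.65}))  # → rock
```

Summing log-likelihoods amounts to treating the feature streams as conditionally independent given the genre; that independence is the simplifying assumption behind this kind of late fusion.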

  18. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
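The "marginal" in marginal maximum likelihood refers to integrating the latent ability out of each response-pattern likelihood. A hedged sketch under assumed values: a Rasch model with made-up item difficulties and a plain quadrature grid over a standard-normal prior (production code such as ltm uses proper Gauss-Hermite quadrature, and the generalized partial credit model is more elaborate):

```python
import math

def rasch_prob(theta: float, difficulty: float) -> float:
    """P(correct | ability theta) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def marginal_loglik(responses, difficulties, n_nodes=61):
    """Log marginal likelihood of one response pattern, integrating the
    ability theta over a N(0,1) prior with a simple grid on [-4, 4]."""
    step = 8.0 / (n_nodes - 1)
    total = 0.0
    for k in range(n_nodes):
        theta = -4.0 + k * step
        weight = math.exp(-0.5 * theta * theta) / math.sqrt(2.0 * math.pi)
        lik = 1.0
        for r, b in zip(responses, difficulties):
            p = rasch_prob(theta, b)
            lik *= p if r == 1 else 1.0 - p
        total += weight * lik * step
    return math.log(total)

# Pattern (1, 1, 0) on three items of increasing difficulty.
print(marginal_loglik([1, 1, 0], [-1.0, 0.0, 1.0]))
```

A quick sanity check on the quadrature: the marginal probabilities of all 2³ response patterns should sum to one (up to the grid's truncation error), because the ability has been integrated out rather than estimated per person.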

  19. Maximum likelihood estimation of phase-type distributions

    DEFF Research Database (Denmark)

    Esparza, Luz Judith R

    This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...

  20. The elaboration likelihood model and communication about food risks.

    Science.gov (United States)

    Frewer, L J; Howard, C; Hedderley, D; Shepherd, R

    1997-12-01

    Factors such as hazard type and source credibility have been identified as important in the establishment of effective strategies for risk communication. The elaboration likelihood model was adapted to investigate the potential impact of hazard type, information source, and persuasive content of information on individual engagement in elaborative, or thoughtful, cognitions about risk messages. One hundred sixty respondents were allocated to one of eight experimental groups, and the effects of source credibility, persuasive content of information and hazard type were systematically varied. The impact of the different factors on beliefs about the information and on elaborative processing was examined. Low credibility was particularly important in reducing risk perceptions, although persuasive content and hazard type were also influential in determining whether elaborative processing occurred.

  1. Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics

    DEFF Research Database (Denmark)

    Schlaikjer, Malene; Jensen, Jørgen Arendt

    2004-01-01

    )-data under investigation. The flow physics properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis is compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates is 0%, 19.1%, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and to estimate low velocity levels was confirmed...

  2. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.

  3. CONSTRUCTING A FLEXIBLE LIKELIHOOD FUNCTION FOR SPECTROSCOPIC INFERENCE

    International Nuclear Information System (INIS)

    Czekala, Ian; Andrews, Sean M.; Mandel, Kaisey S.; Green, Gregory M.; Hogg, David W.

    2015-01-01

    We present a modular, extensible likelihood framework for spectroscopic inference based on synthetic model spectra. The subtraction of an imperfect model from a continuously sampled spectrum introduces covariance between adjacent datapoints (pixels) into the residual spectrum. For the high signal-to-noise data with large spectral range that is commonly employed in stellar astrophysics, that covariant structure can lead to dramatically underestimated parameter uncertainties (and, in some cases, biases). We construct a likelihood function that accounts for the structure of the covariance matrix, utilizing the machinery of Gaussian process kernels. This framework specifically addresses the common problem of mismatches in model spectral line strengths (with respect to data) due to intrinsic model imperfections (e.g., in the atomic/molecular databases or opacity prescriptions) by developing a novel local covariance kernel formalism that identifies and self-consistently downweights pathological spectral line “outliers.” By fitting many spectra in a hierarchical manner, these local kernels provide a mechanism to learn about and build data-driven corrections to synthetic spectral libraries. An open-source software implementation of this approach is available at http://iancze.github.io/Starfish, including a sophisticated probabilistic scheme for spectral interpolation when using model libraries that are sparsely sampled in the stellar parameters. We demonstrate some salient features of the framework by fitting the high-resolution V-band spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate-resolution K-band spectrum of Gliese 51, an M5 field dwarf

  4. Likelihood of illegal alcohol sales at professional sport stadiums.

    Science.gov (United States)

    Toomey, Traci L; Erickson, Darin J; Lenk, Kathleen M; Kilian, Gunna R

    2008-11-01

    Several studies have assessed the propensity for illegal alcohol sales at licensed alcohol establishments and community festivals, but no previous studies examined the propensity for these sales at professional sport stadiums. In this study, we assessed the likelihood of alcohol sales to both underage youth and obviously intoxicated patrons at professional sports stadiums across the United States, and assessed the factors related to the likelihood of both types of alcohol sales. We conducted pseudo-underage (i.e., persons age 21 or older who appear under 21) and pseudo-intoxicated (i.e., persons feigning intoxication) alcohol purchase attempts at stadiums that house professional hockey, basketball, baseball, and football teams. We conducted the purchase attempts at 16 sport stadiums located in 5 states. We measured 2 outcome variables: pseudo-underage sale (yes, no) and pseudo-intoxicated sale (yes, no), and 3 types of independent variables: (1) seller characteristics, (2) purchase attempt characteristics, and (3) event characteristics. Following univariate and bivariate analyses, we fit a separate series of logistic generalized mixed regression models for each outcome variable. The overall sales rates to the pseudo-underage and pseudo-intoxicated buyers were 18% and 74%, respectively. In the multivariate logistic analyses, we found that the odds of a sale to a pseudo-underage buyer in the stands were 2.9 times as large as the odds of a sale at the concession booths (30% vs. 13%; p = 0.01). The odds of a sale to an obviously intoxicated buyer in the stands were 2.9 times as large as the odds of a sale at the concession booths (89% vs. 73%; p = 0.02). Similar to studies assessing illegal alcohol sales at licensed alcohol establishments and community festivals, findings from this study show the need for interventions specifically focused on illegal alcohol sales at professional sporting events.

  5. Targeted maximum likelihood estimation for a binary treatment: A tutorial.

    Science.gov (United States)

    Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E

    2018-04-23

    When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
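As a simplified illustration of double robustness (not full TMLE, which adds a targeting/fluctuation step and machine-learning nuisance estimation), the following sketch computes the augmented inverse-probability-weighted (AIPW) estimate of an average treatment effect on synthetic data; all data-generating values are invented, and the nuisance models are taken as known rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=n)                           # confounder
g = 1 / (1 + np.exp(-0.5 * W))                   # true propensity P(A=1|W)
A = rng.binomial(1, g)                           # treatment
p_y = 1 / (1 + np.exp(-(-1.0 + 1.0 * A + 0.8 * W)))
Y = rng.binomial(1, p_y)                         # binary outcome

# Known (not estimated) outcome regressions under A=1 and A=0
Q1 = 1 / (1 + np.exp(-(-1.0 + 1.0 + 0.8 * W)))   # E[Y | A=1, W]
Q0 = 1 / (1 + np.exp(-(-1.0 + 0.8 * W)))         # E[Y | A=0, W]

# AIPW (doubly robust) estimate of the average treatment effect:
# consistent if either the outcome model or the propensity model is correct
ate = np.mean(A * (Y - Q1) / g - (1 - A) * (Y - Q0) / (1 - g) + Q1 - Q0)
print(round(ate, 3))
```

TMLE instead updates Q1 and Q0 through a logistic fluctuation chosen so that the efficient-influence-function equation is solved exactly.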

  6. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure, Hypesti, first approaches the maximum-likelihood estimate by iterating in the profile log-likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  7. A short proof that phylogenetic tree reconstruction by maximum likelihood is hard.

    Science.gov (United States)

    Roch, Sebastien

    2006-01-01

    Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.

  8. A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood is Hard

    OpenAIRE

    Roch, S.

    2005-01-01

    Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.

  9. Assessing Individual Weather Risk-Taking and Its Role in Modeling Likelihood of Hurricane Evacuation

    Science.gov (United States)

    Stewart, A. E.

    2017-12-01

    This research focuses upon measuring an individual's level of perceived risk of different severe and extreme weather conditions using a new self-report measure, the Weather Risk-Taking Scale (WRTS). For 32 severe and extreme situations in which people could perform an unsafe behavior (e.g., remaining outside with lightning striking close by, driving over roadways covered with water, not evacuating ahead of an approaching hurricane, etc.), people rated: (1) their likelihood of performing the behavior, (2) the perceived risk of performing the behavior, (3) the expected benefits of performing the behavior, and (4) whether the behavior has actually been performed in the past. Initial development research with the measure using 246 undergraduate students examined its psychometric properties and found that it was internally consistent (Cronbach's α ranged from .87 to .93 for the four scales) and that the scales possessed good temporal (test-retest) reliability (r's ranged from .84 to .91). A second regression study involving 86 undergraduate students found that taking weather risks was associated with having taken similar risks in one's past and with the personality trait of sensation-seeking. Being more attentive to the weather and perceiving its risks when it became extreme was associated with lower likelihoods of taking weather risks (overall regression model, R2adj = 0.60). A third study involving 334 people examined the contributions of weather risk perceptions and risk-taking in modeling the self-reported likelihood of complying with a recommended evacuation ahead of a hurricane. Here, higher perceptions of hurricane risks and lower perceived benefits of risk-taking, along with fear of severe weather and hurricane personal self-efficacy ratings, were all statistically significant contributors to the likelihood of evacuating ahead of a hurricane. Psychological rootedness and attachment to one's home also tend to predict lack of evacuation. This research highlights the

  10. The likelihood of achieving quantified road safety targets: a binary logistic regression model for possible factors.

    Science.gov (United States)

    Sze, N N; Wong, S C; Lee, C Y

    2014-12-01

    In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and facilitate the achievement of continuous road safety improvement. Studies have been conducted to evaluate the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, not much work has been done to evaluate whether the quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors - including vehicle ownership, fatality rate, and national income, in addition to level of ambition and duration of target - that contribute to a target's success. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that are in progress and those with lower levels of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between level of ambition and the likelihood of success are also revealed. Copyright © 2014 Elsevier Ltd. All rights reserved.
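A minimal sketch of the kind of binary logistic regression model used here, fitted by Newton-Raphson on invented data relating a target's level of ambition to its achievement (all data-generating values are hypothetical):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Binary logistic regression fitted by Newton-Raphson (IRLS)."""
    Xb = np.column_stack([np.ones(len(y)), X])     # add intercept column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ beta))
        W = p * (1 - p)                            # observation weights
        grad = Xb.T @ (y - p)                      # score
        hess = Xb.T @ (Xb * W[:, None])            # observed information
        beta += np.linalg.solve(hess, grad)        # Newton step
    return beta

# Synthetic stand-in: achievement of a safety target vs. level of ambition
rng = np.random.default_rng(7)
ambition = rng.uniform(0, 1, 400)                  # higher = more ambitious
p_true = 1 / (1 + np.exp(-(2.0 - 4.0 * ambition)))
achieved = rng.binomial(1, p_true)
beta = fit_logistic(ambition, achieved)
print("intercept, slope:", beta.round(2))
```

A negative fitted slope corresponds to the paper's finding that more ambitious targets are less likely to be achieved.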

  11. Neandertal admixture in Eurasia confirmed by maximum-likelihood analysis of three genomes.

    Science.gov (United States)

    Lohse, Konrad; Frantz, Laurent A F

    2014-04-01

    Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4-7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination.

  12. Steady state likelihood ratio sensitivity analysis for stiff kinetic Monte Carlo simulations.

    Science.gov (United States)

    Núñez, M; Vlachos, D G

    2015-01-28

    Kinetic Monte Carlo simulation is an integral tool in the study of complex physical phenomena present in applications ranging from heterogeneous catalysis to biological systems to crystal growth and atmospheric sciences. Sensitivity analysis is useful for identifying important parameters and rate-determining steps, but the finite-difference application of sensitivity analysis is computationally demanding. Techniques based on the likelihood ratio method reduce the computational cost of sensitivity analysis by obtaining all gradient information in a single run. However, we show that disparity in time scales of microscopic events, which is ubiquitous in real systems, introduces drastic statistical noise into derivative estimates for parameters affecting the fast events. In this work, the steady-state likelihood ratio sensitivity analysis is extended to singularly perturbed systems by invoking partial equilibration for fast reactions, that is, by working on the fast and slow manifolds of the chemistry. Derivatives on each time scale are computed independently and combined to the desired sensitivity coefficients to considerably reduce the noise in derivative estimates for stiff systems. The approach is demonstrated in an analytically solvable linear system.
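The likelihood ratio (score function) idea underlying this approach can be illustrated on a toy, analytically checkable example: estimating the derivative of E[X] with respect to the rate of an exponential distribution from a single batch of samples. (The kinetic Monte Carlo setting replaces this simple density with the trajectory likelihood, and the paper's contribution is handling the fast/slow time-scale separation.)

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0
x = rng.exponential(1 / lam, 200_000)    # X ~ Exp(rate = lam)

# Likelihood ratio method: d/dlam E[f(X)] = E[f(X) * d log p(X; lam)/dlam],
# so one batch of samples yields the gradient without finite differences.
# For the exponential density, the score is d log p/dlam = 1/lam - X.
f = x                                    # f(X) = X, so E[f] = 1/lam
score = 1 / lam - x
grad_est = np.mean(f * score)
print(grad_est)                          # analytically, d(1/lam)/dlam = -1/lam**2
```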

  13. High-dose regions versus likelihood of cure after prostate brachytherapy

    International Nuclear Information System (INIS)

    Wallner, Kent; Merrick, Gregory; Sutlief, Steven; True, Laurence; Butler, Wayne

    2005-01-01

    Purpose: To analyze the effect of high-dose regions on biochemical cancer control rates after prostate brachytherapy. Methods and Materials: Patients with 1997 American Joint Committee on Cancer clinical Stage T1c-T2a prostate carcinoma (Gleason grade 5-6, prostate-specific antigen level 4-10 ng/mL) were randomized to implantation with 125I (144 Gy) vs. 103Pd (125 Gy, National Institute of Standards and Technology 1999). Isotope implantation was performed by standard techniques, using a modified peripheral loading pattern. Of the 313 patients entered in the protocol, 270 were included in this analysis. The 125I source strength ranged from 0.4 to 0.89 mCi (median, 0.55 mCi), and the 103Pd source strength ranged from 1.3 to 1.6 mCi (median, 1.5 mCi). CT was performed within 4 h after implantation. The dosimetric parameters analyzed included the percentage of the postimplant prostate volume covered by the 100%, 150%, 200%, and 300% prescription dose (V100, V150, V200, and V300, respectively). The median time to the last follow-up for patients without failure was 2.7 years. Freedom from biochemical failure was defined as a serum prostate-specific antigen level of ≤0.5 ng/mL at last follow-up. Patients were censored at last follow-up if their serum prostate-specific antigen level was still decreasing. Results: The mean V100, V150, V200, and V300 values were 90% (±8%), 63% (±14%), 35% (±13%), and 14% (±7%), respectively. Patients with a V100 of ≥90% had a 3-year freedom from biochemical failure rate of 96% vs. 87% for those with a V100 of <90%. When only patients with a V100 of ≥90% were analyzed, no relationship was found between higher dose regions and the likelihood of cancer control. This lack of effect on biochemical control was apparent for both isotopes. Conclusion: High-dose regions do not appear to affect cancer control rates, as long as >90% of the prostate volume is covered by the prescription dose

  14. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  15. Likelihood ratio model for classification of forensic evidence

    Energy Technology Data Exchange (ETDEWEB)

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which includes information about both between- and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered.
In addition, when dRI and variables derived from elemental compositions were used, this
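A drastically simplified, univariate sketch of the likelihood ratio evaluation (the paper's models use multivariate kernel densities and between/within-object variability; the feature values below are invented stand-ins):

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hypothetical reference data: a single measured feature (e.g. a dRI-like
# quantity) for two use-type categories of glass.
windows = np.array([-4.1, -3.9, -4.3, -4.0, -3.8])     # H1 population
containers = np.array([-2.0, -1.8, -2.3, -2.1, -1.9])  # H2 population

def lr(evidence):
    """LR = p(E|H1)/p(E|H2) with a fitted normal density per hypothesis."""
    p1 = normal_pdf(evidence, windows.mean(), windows.std(ddof=1))
    p2 = normal_pdf(evidence, containers.mean(), containers.std(ddof=1))
    return p1 / p2   # LR > 1 supports H1 (window), LR < 1 supports H2

print(lr(-4.0), lr(-2.0))
```

Replacing the normal densities with kernel density estimates for the between-object distribution gives the flavour of the paper's preferred model.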

  16. Likelihood ratio model for classification of forensic evidence

    International Nuclear Information System (INIS)

    Zadora, G.; Neocleous, T.

    2009-01-01

    One of the problems of analysis of forensic evidence such as glass fragments, is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio, LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which includes information about both between- and within-object variability, and with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form of dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered.
In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other

  17. A simulation study of likelihood inference procedures in rayleigh distribution with censored data

    International Nuclear Information System (INIS)

    Baklizi, S. A.; Baker, H. M.

    2001-01-01

    Inference procedures based on the likelihood function are considered for the one-parameter Rayleigh distribution with type 1 and type 2 censored data. Using simulation techniques, the finite-sample performances of the maximum likelihood estimator and the large-sample likelihood interval estimation procedures based on the Wald, the Rao, and the likelihood ratio statistics are investigated. It appears that the maximum likelihood estimator is unbiased. The approximate variance estimates obtained from the asymptotic normal distribution of the maximum likelihood estimator are accurate under type 2 censored data, while they tend to be smaller than the actual variances when considering type 1 censored data of small size. It appears also that interval estimation based on the Wald and Rao statistics requires a much larger sample size than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy. (authors). 15 refs., 4 tabs
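For the complete-sample (uncensored) case, the Rayleigh maximum likelihood estimator has a closed form, sketched below; type 1 and type 2 censoring modify the likelihood and lose this closed form, so they are not handled here:

```python
import numpy as np

def rayleigh_mle(x):
    """Closed-form MLE of the Rayleigh scale s from a complete sample.

    The log-likelihood sum(log(x_i/s^2) - x_i^2/(2 s^2)) is maximized at
    s^2 = sum(x_i^2) / (2n).
    """
    return np.sqrt(np.sum(x ** 2) / (2 * len(x)))

rng = np.random.default_rng(5)
x = rng.rayleigh(scale=3.0, size=100_000)
print(rayleigh_mle(x))
```

The Wald, Rao, and likelihood ratio intervals compared in the paper are all built from this same likelihood, evaluated or expanded around the estimate above.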

  18. Assessing the Impact of a Statewide STEM Investment on K-12, Higher Education, and Business/Community STEM Awareness over Time

    Science.gov (United States)

    Sondergeld, Toni A.; Johnson, Carla C.; Walten, Janet B.

    2016-01-01

    Despite monetary and educational investments in science, technology, engineering, and mathematics (STEM) being at record high levels, little attention has been devoted to generating a common understanding of STEM. In addition, working with business, K-12 schools, and/or institutions of higher education to establish a grassroots effort to help…

  19. Maximum likelihood sequence estimation for optical complex direct modulation.

    Science.gov (United States)

    Che, Di; Yuan, Feng; Shieh, William

    2017-04-17

    Semiconductor lasers are inherently versatile optical transmitters. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
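MLSE itself can be illustrated with a toy Viterbi search over a two-tap intersymbol-interference channel; the channel taps, noise level, and binary alphabet below are hypothetical stand-ins for the chirp-aware transition probabilities used in the paper:

```python
import numpy as np

h = np.array([1.0, 0.5])          # hypothetical channel taps
symbols = (-1.0, 1.0)             # binary alphabet

def mlse(y, s_init=-1.0):
    """Viterbi MLSE for y_k = h[0]*s_k + h[1]*s_{k-1} + noise.

    With Gaussian noise, maximizing the sequence likelihood is equivalent
    to minimizing the accumulated squared error along the trellis.
    """
    # survivors: state (= last decided symbol) -> (path metric, sequence)
    surv = {s_init: (0.0, [])}
    for yk in y:
        new = {}
        for prev, (m, path) in surv.items():
            for s in symbols:
                branch = (yk - h[0] * s - h[1] * prev) ** 2
                cand = (m + branch, path + [s])
                if s not in new or cand[0] < new[s][0]:
                    new[s] = cand                 # keep best path per state
        surv = new
    return min(surv.values())[1]                  # best surviving sequence

rng = np.random.default_rng(2)
tx = rng.choice(symbols, 40)
rx = h[0] * tx + h[1] * np.concatenate([[-1.0], tx[:-1]]) + 0.1 * rng.normal(size=40)
rec = np.array(mlse(rx))
print("symbol errors:", int(np.sum(rec != tx)))
```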

  20. Scale invariant for one-sided multivariate likelihood ratio tests

    Directory of Open Access Journals (Sweden)

    Samruam Chongcharoen

    2010-07-01

    Full Text Available Suppose X1, X2, ..., Xn is a random sample from an Np(μ, V) distribution. Consider H0: μ1 = μ2 = ... = μp = 0 and H1: μi ≥ 0 for i = 1, 2, ..., p; let H1 − H0 denote the hypothesis that H1 holds but H0 does not, and let ~H0 denote the hypothesis that H0 does not hold. Because the likelihood ratio test (LRT) of H0 versus H1 − H0 is complicated, several ad hoc tests have been proposed. Tang, Gnecco and Geller (1989) proposed an approximate LRT, Follmann (1996) suggested rejecting H0 if the usual test of H0 versus ~H0 rejects H0 with significance level 2α and a weighted sum of the sample means is positive, and Chongcharoen, Singh and Wright (2002) modified Follmann's test to include information about the correlation structure in the sum of the sample means. Chongcharoen and Wright (2007, 2006) give versions of the Tang-Gnecco-Geller tests and Follmann-type tests, respectively, with invariance properties. With the LRT's desired scale-invariance property, we investigate its power by using Monte Carlo techniques and compare it with the tests recommended in Chongcharoen and Wright (2007, 2006).

  1. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
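
A heavily simplified sketch of the idea behind likelihood-based relationship estimation, not ERSA's actual model (which uses both the number and the lengths of IBD segments): assume, as a crude stand-in, that IBD segment lengths for degree-d relatives are roughly exponential with mean 100/(2d) cM, and pick the degree that maximizes the log-likelihood of the observed lengths. The segment lengths below are invented for illustration.

```python
import math

segments_cm = [12.0, 18.5, 9.7, 22.3, 15.1]   # hypothetical observed IBD segments

def log_likelihood(d, lengths):
    rate = 2 * d / 100.0                       # 1 / (assumed mean length for degree d)
    return sum(math.log(rate) - rate * x for x in lengths)

# ML degree over a small candidate range
degree = max(range(1, 10), key=lambda d: log_likelihood(d, segments_cm))
# → degree 3 for these lengths (mean ≈ 15.5 cM)
```

The real method additionally models the count of shared segments and population background sharing, which is what lets it push out to sixth- and seventh-degree relatives.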

  2. Affective mapping: An activation likelihood estimation (ALE) meta-analysis.

    Science.gov (United States)

    Kirby, Lauren A J; Robinson, Jennifer L

    2017-11-01

    Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Dark matter CMB constraints and likelihoods for poor particle physicists

    Energy Technology Data Exchange (ETDEWEB)

    Cline, James M.; Scott, Pat, E-mail: jcline@physics.mcgill.ca, E-mail: patscott@physics.mcgill.ca [Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 (Canada)

    2013-03-01

    The cosmic microwave background provides constraints on the annihilation and decay of light dark matter at redshifts between 100 and 1000, the strength of which depends upon the fraction of energy ending up in the form of electrons and photons. The resulting constraints are usually presented for a limited selection of annihilation and decay channels. Here we provide constraints on the annihilation cross section and decay rate, at discrete values of the dark matter mass m{sub χ}, for all the annihilation and decay channels whose secondary spectra have been computed using PYTHIA in arXiv:1012.4515 (''PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection''), namely e, μ, τ, V → e, V → μ, V → τ, u, d, s, c, b, t, γ, g, W, Z and h. By interpolating in mass, these can be used to find the CMB constraints and likelihood functions from WMAP7 and Planck for a wide range of dark matter models, including those with annihilation or decay into a linear combination of different channels.
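
A hedged sketch of the intended usage pattern ("interpolating in mass"): given tabulated upper limits on the annihilation cross section at discrete masses, interpolate in log10-space to obtain the constraint at an intermediate dark matter mass. The grid values below are made up for illustration and are not the paper's actual tables.

```python
import numpy as np

m_grid = np.array([5.0, 10.0, 50.0, 100.0, 500.0])        # GeV (hypothetical)
sv_limit = np.array([1e-27, 2e-27, 1e-26, 2e-26, 1e-25])  # cm^3/s (hypothetical)

def limit_at(m):
    """Log-log linear interpolation of the tabulated upper limit."""
    return 10 ** np.interp(np.log10(m), np.log10(m_grid), np.log10(sv_limit))

sv_30 = limit_at(30.0)   # constraint at a mass between two tabulated points
```

For a model annihilating into a linear combination of channels, the same lookup would be applied per channel and combined with the branching fractions.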

  4. Dark matter CMB constraints and likelihoods for poor particle physicists

    International Nuclear Information System (INIS)

    Cline, James M.; Scott, Pat

    2013-01-01

    The cosmic microwave background provides constraints on the annihilation and decay of light dark matter at redshifts between 100 and 1000, the strength of which depends upon the fraction of energy ending up in the form of electrons and photons. The resulting constraints are usually presented for a limited selection of annihilation and decay channels. Here we provide constraints on the annihilation cross section and decay rate, at discrete values of the dark matter mass m χ , for all the annihilation and decay channels whose secondary spectra have been computed using PYTHIA in arXiv:1012.4515 (''PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection''), namely e, μ, τ, V → e, V → μ, V → τ, u, d, s, c, b, t, γ, g, W, Z and h. By interpolating in mass, these can be used to find the CMB constraints and likelihood functions from WMAP7 and Planck for a wide range of dark matter models, including those with annihilation or decay into a linear combination of different channels.

  5. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.

  6. Constructing diagnostic likelihood: clinical decisions using subjective versus statistical probability.

    Science.gov (United States)

    Kinnear, John; Jackson, Ruth

    2017-07-01

    Although physicians are highly trained in the application of evidence-based medicine and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases shown to affect the accuracy of judgements is representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the two questions that included clinical information, 46.7% and 45.5% of clinicians, respectively, made judgements of statistical probability. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ²(1, n=254) = 54.45, p < 0.001), with the majority of clinicians substituting subjective for statistical probability when representativeness information was present. One of the causes of this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
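
A minimal worked example of the statistical probability the study probes, with invented numbers: even when a presentation looks highly "typical" of a rare condition, the posterior probability stays low once the base rate is respected, which is exactly what base-rate neglect ignores.

```python
# Hypothetical numbers, chosen only to illustrate Bayes' rule:
prevalence = 0.01            # base rate of the condition
p_typical_given_d = 0.90     # presentation looks typical when disease is present
p_typical_given_no_d = 0.20  # ... but also in many people without it

posterior = (p_typical_given_d * prevalence) / (
    p_typical_given_d * prevalence + p_typical_given_no_d * (1 - prevalence)
)
# posterior ≈ 0.043, far below the intuitive "representative" estimate
```

The representativeness heuristic effectively replaces this posterior with P(typical | disease) = 0.90, a twenty-fold overestimate in this example.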

  7. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  8. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Full Text Available Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they offer a higher spatial resolution than existing methods based on the ML criterion.
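
A hedged sketch of the baseline problem the paper reformulates: for a single source in white noise, the ML DOA estimate for a ULA maximizes the beamformer output power summed over snapshots, here found by a plain grid search (the paper's SOS/SDP machinery replaces this search with a convex formulation). Array size, spacing, and SNR below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 50                         # sensors, snapshots
theta_true = np.deg2rad(20.0)

def steering(theta):
    # half-wavelength-spaced ULA steering vector
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(steering(theta_true), s) + noise

grid = np.deg2rad(np.linspace(-90, 90, 3601))
power = [np.sum(np.abs(steering(t).conj() @ X) ** 2) for t in grid]
theta_hat = np.rad2deg(grid[int(np.argmax(power))])
```

The single-source ML criterion is concave only locally, which is why a naive grid (or the alternating-projection extension for multiple sources) motivates the SDP relaxation studied in the paper.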

  9. Gender and Relationship Status Interaction and Likelihood of Return to Work Post-Retirement.

    Science.gov (United States)

    Settels, Jason; McMullin, Julie

    2017-09-01

    Population aging is an issue of mounting importance throughout the industrialized world. Concerns over labour force shortages have led to policies that prolong working life. Accordingly, present-day workforce participation patterns of older individuals are extensively varied. This study utilized the 2007 General Social Survey to examine factors associated with post-retirement paid work, focusing on the interaction between gender and relationship status, among Canadians aged 50 to 74 who had retired at least once. We find that although being in a relationship is associated with a higher likelihood of post-retirement work for men, the opposite is true for women. Our findings suggest that the gendered association between relationship status and post-retirement work results partly from the gendered associations between relationship status and one's motivation for learning and community involvement, career orientation, and sense of independence. Gendered meanings of relationship status are thus revealed through analysis of post-retirement work.

  10. Maximum Likelihood PSD Estimation for Speech Enhancement in Reverberation and Noise

    DEFF Research Database (Denmark)

    Kuklasinski, Adam; Doclo, Simon; Jensen, Søren Holdt

    2016-01-01

    In this contribution we focus on the problem of power spectral density (PSD) estimation from multiple microphone signals in reverberant and noisy environments. The PSD estimation method proposed in this paper is based on the maximum likelihood (ML) methodology. In particular, we derive a novel ML PSD estimator […]. It is shown numerically that the mean squared estimation error achieved by the proposed method is near the limit set by the corresponding Cramér-Rao lower bound. The speech dereverberation performance of a multi-channel Wiener filter (MWF) based on the proposed PSD estimators is measured using several instrumental measures and is shown to be higher than when the competing estimator is used. Moreover, we perform a speech intelligibility test where we demonstrate that both the proposed and the competing PSD estimators lead to similar intelligibility improvements.

  11. Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics

    International Nuclear Information System (INIS)

    Prix, Reinhard; Krishnan, Badri

    2009-01-01

    We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.

  12. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-09-26

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used to estimate unknown parameters such as the covariance length, variance, and smoothness of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck is the expensive linear algebra arising from large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
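
A hedged illustration of why the hierarchical-matrix compression above works: an off-diagonal covariance block between two well-separated clusters of points is numerically low-rank, so a small-rank truncated SVD reproduces it almost exactly. The Gaussian kernel here is a stand-in for the Matérn family used by HLIBCov, and the cluster geometry is invented.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 60)   # cluster 1 locations
y = np.linspace(3.0, 4.0, 60)   # cluster 2 locations, well separated
block = np.exp(-(x[:, None] - y[None, :]) ** 2)   # off-diagonal covariance block

U, s, Vt = np.linalg.svd(block)

def rel_error(k):
    # Frobenius error of the best rank-k approximation (Eckart-Young)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.linalg.norm(block - approx) / np.linalg.norm(block)

errors = [rel_error(k) for k in (1, 2, 5, 10)]   # decays rapidly with k
```

Storing the rank-k factors costs O(k·n) instead of O(n²) per admissible block, which is the source of the near-linear cost and storage figures quoted in the abstract.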

  13. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
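
A hedged sketch of the complexity point on a simpler model (a chain of binary latent variables, not the paper's pedigree factor graph): brute-force marginalization costs O(2^n), while passing sum-product messages along the chain costs O(n), and both yield the same partition function.

```python
import itertools
import numpy as np

n = 12
rng = np.random.default_rng(2)
# hypothetical pairwise chain factors f_i(x_i, x_{i+1})
pair = [rng.uniform(0.5, 2.0, size=(2, 2)) for _ in range(n - 1)]

# O(2^n) brute-force marginalization
Z_brute = sum(
    np.prod([pair[i][cfg[i], cfg[i + 1]] for i in range(n - 1)])
    for cfg in itertools.product((0, 1), repeat=n)
)

# O(n) forward message passing (sum-product on a chain)
msg = np.ones(2)
for f in pair:
    msg = msg @ f        # sum out the left variable of each factor
Z_sp = msg.sum()
```

In the paper's E-step the same trick marginalizes the latent risk-factor indicators over the pedigree tree rather than a chain, which is what turns the exponential cost in family size into a linear one.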

  14. Maximum-Likelihood Sequence Detection of Multiple Antenna Systems over Dispersive Channels via Sphere Decoding

    Directory of Open Access Journals (Sweden)

    Hassibi Babak

    2002-01-01

    Full Text Available Multiple antenna systems are capable of providing high-data-rate transmissions over wireless channels. When the channels are dispersive, the signal at each receive antenna is a combination of both the current and past symbols sent from all transmit antennas, corrupted by noise. The optimal receiver is a maximum-likelihood sequence detector, often considered practically infeasible due to its high computational complexity (exponential in the number of antennas and the channel memory). Therefore, in practice, one often settles for a less complex suboptimal receiver structure, typically an equalizer meant to suppress both the intersymbol and interuser interference, followed by the decoder. We propose a sphere decoding algorithm for sequence detection in multiple antenna communication systems over dispersive channels. Sphere decoding provides the maximum-likelihood estimate with computational complexity comparable to that of standard space-time decision-feedback equalization (DFE) algorithms. The performance and complexity of sphere decoding are compared with the DFE algorithm by means of simulations.
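
A hedged sketch of the core sphere-decoding idea on a small real-valued system y = Hx + n with BPSK symbols (the paper's dispersive MIMO setting adds channel memory on top of this): after a QR factorization, a depth-first search over the symbol tree is pruned whenever the partial squared distance already exceeds the best complete candidate, and the result matches brute-force ML.

```python
import itertools
import numpy as np

def sphere_decode(H, y, symbols):
    """ML solve of min ||y - Hx||^2 over x in symbols^n with branch pruning."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)          # ||y - Hx||^2 = ||Q^T y - Rx||^2 for square H
    z = Q.T @ y
    best = {"x": None, "d2": np.inf}
    x = np.zeros(n)

    def search(level, partial):
        if partial >= best["d2"]:
            return                  # prune: cannot beat current best
        if level < 0:
            best["x"], best["d2"] = x.copy(), partial
            return
        for s in symbols:
            x[level] = s
            r = z[level] - R[level, level:] @ x[level:]
            search(level - 1, partial + r * r)

    search(n - 1, 0.0)
    return best["x"]

rng = np.random.default_rng(3)
n = 5
H = rng.standard_normal((n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.standard_normal(n)

x_sd = sphere_decode(H, y, (-1.0, 1.0))
x_ml = min(itertools.product((-1.0, 1.0), repeat=n),
           key=lambda q: np.sum((y - H @ np.array(q)) ** 2))
```

The pruning is what keeps the average complexity polynomial at moderate SNR even though the worst case remains exponential.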

  15. HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2017-09-24

    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used to estimate unknown parameters such as the covariance length, variance, and smoothness of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck is the expensive linear algebra arising from large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.

  16. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time […] with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all […]
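
A hedged sketch of the "simulation" that MSL requires: with one random coefficient beta ~ N(1, 0.5²) (numbers invented), the logit choice probability is an integral with no closed form, so MSL averages over random draws; a Gauss-Hermite quadrature gives a near-exact benchmark for comparison.

```python
import numpy as np

mean_b, sd_b, x_attr = 1.0, 0.5, 0.8   # hypothetical coefficient distribution and attribute

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Monte Carlo average over coefficient draws -- the MSL ingredient
rng = np.random.default_rng(6)
draws = mean_b + sd_b * rng.standard_normal(100_000)
p_msl = sigmoid(draws * x_attr).mean()

# Gauss-Hermite benchmark for E[sigmoid(beta * x)], beta ~ N(mean_b, sd_b^2)
t, w = np.polynomial.hermite.hermgauss(40)
p_gh = (w * sigmoid((mean_b + np.sqrt(2) * sd_b * t) * x_attr)).sum() / np.sqrt(np.pi)
```

With many random coefficients the integral becomes high-dimensional and the number of draws needed explodes, which is the curse of dimensionality the EM alternative avoids.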

  17. A 3D approximate maximum likelihood solver for localization of fish implanted with acoustic transmitters

    Science.gov (United States)

    Li, Xinya; Deng, Z. Daniel; Sun, Yannan; Martinez, Jayson J.; Fu, Tao; McMichael, Geoffrey A.; Carlson, Thomas J.

    2014-11-01

    Better understanding of fish behavior is vital for recovery of many endangered species including salmon. The Juvenile Salmon Acoustic Telemetry System (JSATS) was developed to observe the out-migratory behavior of juvenile salmonids tagged by surgical implantation of acoustic micro-transmitters and to estimate the survival when passing through dams on the Snake and Columbia Rivers. A robust three-dimensional solver was needed to accurately and efficiently estimate the time sequence of locations of fish tagged with JSATS acoustic transmitters, to describe in sufficient detail the information needed to assess the function of dam-passage design alternatives. An approximate maximum likelihood solver was developed using measurements of time difference of arrival from all hydrophones in receiving arrays on which a transmission was detected. Field experiments demonstrated that the developed solver performed significantly better in tracking efficiency and accuracy than other solvers described in the literature.
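
A hedged sketch of the underlying least-squares problem (a generic Gauss-Newton TDOA solver in 2-D with invented receiver geometry, not the JSATS solver itself): given range differences relative to a reference hydrophone, iteratively linearize the residuals around the current position estimate.

```python
import numpy as np

# hypothetical hydrophone positions (metres) and true tag position
rx = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, -3.0]])
p_true = np.array([6.0, 4.0])
# noiseless range differences relative to receiver 0 (= c * TDOA)
rdiff = np.linalg.norm(p_true - rx, axis=1) - np.linalg.norm(p_true - rx[0])

def solve_tdoa(rx, rdiff, p0, iters=20):
    p = p0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(p - rx, axis=1)
        res = (d - d[0]) - rdiff                         # residuals (row 0 is zero)
        J = (p - rx) / d[:, None] - (p - rx[0]) / d[0]   # Jacobian of the residuals
        dp, *_ = np.linalg.lstsq(J[1:], -res[1:], rcond=None)
        p = p + dp
    return p

p_hat = solve_tdoa(rx, rdiff, p0=rx.mean(axis=0))
```

The production solver must additionally handle measurement noise, outlier detections, and 3-D geometry, which is where the "approximate maximum likelihood" weighting enters.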

  18. Determinants of the abilities to jump higher and shorten the contact time in a running 1-legged vertical jump in basketball.

    Science.gov (United States)

    Miura, Ken; Yamamoto, Masayoshi; Tamaki, Hiroyuki; Zushi, Koji

    2010-01-01

    This study was conducted to obtain useful information for developing training techniques for the running 1-legged vertical jump in basketball (lay-up shot jump). The ability to perform the lay-up shot jump and various basic jumps was measured by testing 19 male basketball players. The basic jumps consisted of the 1-legged repeated rebound jump, the 2-legged repeated rebound jump, and the countermovement jump. Jumping height, contact time, and jumping index (jumping height/contact time) were measured and calculated using a contact mat/computer system that recorded the contact and air times. The jumping index indicates power. No significant correlation existed between the jumping height and contact time of the lay-up shot jump, the 2 components of the lay-up shot jump index. As a result, jumping height and contact time were found to be mutually independent abilities. The contact time of the lay-up shot jump was significantly correlated with those of both the 1-legged repeated rebound jump and the 2-legged repeated rebound jump, at the same significance levels. A significant correlation in jumping height existed between the 1-legged repeated rebound jump and the lay-up shot jump, whereas no significant correlation in jumping height existed between the lay-up shot jump and either the 2-legged repeated rebound jump or the countermovement jump. The lay-up shot index correlated more strongly with the 1-legged repeated rebound jump index than with the 2-legged repeated rebound jump index. These results suggest that training with the 1-legged repeated rebound jump is effective in improving both contact time and jumping height in the lay-up shot jump.
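
The jumping index used above is simply jumping height divided by ground contact time; a quick worked example with illustrative numbers (not values from the study):

```python
height_m = 0.45     # jumping height from flight time
contact_s = 0.18    # ground contact time
jumping_index = height_m / contact_s   # 2.5, a power-like reactive-strength measure
```

A higher index can come from jumping higher, shortening the contact, or both, which is why the study treats the two components as separate, independent abilities.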

  19. Estimating likelihood of future crashes for crash-prone drivers

    Directory of Open Access Journals (Sweden)

    Subasish Das

    2015-06-01

    Full Text Available At-fault crash-prone drivers are usually considered the high-risk group for possible future incidents or crashes. In Louisiana, 34% of crashes are repeatedly committed by the at-fault crash-prone drivers who represent only 5% of the total licensed drivers in the state. This research conducted an exploratory data analysis based on driver faultiness and crash proneness. The objective of this study is to develop a crash prediction model to estimate the likelihood of future crashes for at-fault drivers. The logistic regression method is used, employing eight years of traffic crash data (2004–2011) in Louisiana. Crash predictors such as the driver's crash involvement, crash and road characteristics, human factors, collision type, and environmental factors are considered in the model. The at-fault and not-at-fault status of the crashes is used as the response variable. The developed model identified a few important variables and correctly classifies at-fault crashes up to 62.40% of the time, with a specificity of 77.25%. The model can thus identify as many as 62.40% of the crash incidences of at-fault drivers in the upcoming year. Traffic agencies can use the model for monitoring the performance of at-fault crash-prone drivers and making roadway improvements meant to reduce crash proneness. From the findings, it is recommended that crash-prone drivers be targeted regularly for special safety programs through education and regulations.
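
A hedged sketch of the modelling step on synthetic data (two made-up predictors standing in for the paper's crash-history and roadway variables): a logistic regression fitted by Newton-Raphson, then classified at a 0.5 threshold to produce the sensitivity and specificity figures the paper reports.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
beta_true = np.array([-0.5, 1.2, -0.8])                    # hypothetical coefficients
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

beta = np.zeros(3)
for _ in range(25):                       # Newton-Raphson (IRLS) iterations
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    beta = beta + np.linalg.solve(hess, grad)

pred = (1 / (1 + np.exp(-X @ beta)) > 0.5).astype(float)
sensitivity = np.mean(pred[y == 1])       # true-positive rate
specificity = np.mean(1 - pred[y == 0])   # true-negative rate
```

The paper's 62.40%/77.25% figures are exactly this kind of sensitivity/specificity pair, computed from its fitted at-fault model rather than synthetic data.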

  20. Obstetric History and Likelihood of Preterm Birth of Twins.

    Science.gov (United States)

    Easter, Sarah Rae; Little, Sarah E; Robinson, Julian N; Mendez-Figueroa, Hector; Chauhan, Suneet P

    2018-01-05

     The objective of this study was to investigate the relationship between preterm birth in a prior pregnancy and preterm birth in a twin pregnancy.  We performed a secondary analysis of a randomized controlled trial evaluating 17-α-hydroxyprogesterone caproate in twins. Women were classified as nulliparous, multiparous with a prior term birth, or multiparous with a prior preterm birth. We used logistic regression to examine the odds of spontaneous preterm birth of twins before 35 weeks according to past obstetric history.  Of the 653 women analyzed, 294 were nulliparas, 310 had a prior term birth, and 49 had a prior preterm birth. Prior preterm birth increased the likelihood of spontaneous delivery before 35 weeks (adjusted odds ratio [aOR]: 2.44, 95% confidence interval [CI]: 1.28-4.66), whereas prior term delivery decreased these odds (aOR: 0.55, 95% CI: 0.38-0.78) in the current twin pregnancy compared with the nulliparous reference group. This translated into a lower odds of composite neonatal morbidity (aOR: 0.38, 95% CI: 0.27-0.53) for women with a prior term delivery.  For women carrying twins, a history of preterm birth increases the odds of spontaneous preterm birth, whereas a prior term birth decreases odds of spontaneous preterm birth and neonatal morbidity for the current twin pregnancy. These results offer risk stratification and reassurance for clinicians. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
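
A worked example of the paper's effect measure: an unadjusted odds ratio from a hypothetical 2x2 table (counts invented for illustration, not the trial's data, and without the covariate adjustment behind the reported aORs).

```python
import math

#                       preterm <35w   term
prior_preterm = [22, 27]     # women with a prior preterm birth
nulliparous = [80, 214]      # nulliparous reference group

odds_exposed = prior_preterm[0] / prior_preterm[1]
odds_ref = nulliparous[0] / nulliparous[1]
odds_ratio = odds_exposed / odds_ref            # > 1: higher odds with prior preterm
# standard error of log(OR) for a Wald confidence interval
log_or_se = math.sqrt(sum(1 / c for c in prior_preterm + nulliparous))
```

An adjusted OR, like the aOR 2.44 reported above, comes from putting the exposure into a logistic regression alongside confounders rather than from the raw table.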

  1. Employment status at time of first hospitalization for heart failure is associated with a higher risk of death and rehospitalization for heart failure

    DEFF Research Database (Denmark)

    Rørth, Rasmus; Fosbøl, Emil L; Mogensen, Ulrik M

    2018-01-01

    AIMS: Employment status at time of first heart failure (HF) hospitalization may be an indicator of both self-perceived and objective health status. In this study, we examined the association between employment status and the risk of all-cause mortality and recurrent HF hospitalization in a nation…

  2. Papers of the Canadian Institute's forum on natural gas purchasing strategies : critical information for natural gas consumers in a time of diminishing natural gas supplies and higher prices

    International Nuclear Information System (INIS)

    2003-01-01

    This conference provided insight into how to prosper in an increasingly complex natural gas marketplace. The presentations from key industry players offered valuable information on natural gas purchasing strategies that are working in the current volatile price environment. Diminishing natural gas supplies in North America mean that higher prices and volatility will continue. Other market challenges stem from potential cost increases in gas transportation, unbundling of natural gas services, and the changing energy marketing environment. The main factors that will affect prices for the winter of 2004 were outlined along with risk management and the best pricing strategies for businesses. The key strategies for managing the risks associated with natural gas purchase contracts were also reviewed, along with the issue of converging natural gas and electricity markets and the impact on energy consumers. The conference featured 15 presentations, of which 4 have been indexed separately for inclusion in this database. refs., tabs., figs

  3. High protein intake along with paternal part-time employment is associated with higher body fat mass among girls from South China.

    Science.gov (United States)

    Yang, Ming-Zhe; Xue, Hong-Mei; Pan, Jay; Libuda, Lars; Muckelbauer, Rebecca; Yang, Min; Quan, Liming; Cheng, Guo

    2017-05-23

    Protein intake has been suggested to be associated with body composition among Western children. Our aim was to determine whether protein intake is associated with body composition among Chinese children and to investigate whether parental socioeconomic status modifies these associations. Cross-sectional data were collected from the baseline survey of an ongoing population-based prospective open cohort study conducted in 2013. In this survey, 2039 children in South China were recruited using cluster random sampling. Dietary and anthropometric information on 1704 children (47% girls) aged 7-12 years from three primary schools (42 classes) was ultimately included. Daily protein intake was obtained by 3-day 24-h dietary recalls. Skinfold thickness, body height, and weight were measured to calculate percent body fat (%BF), fat mass index (FMI), and fat-free mass index (FFMI). Parental characteristics were collected by questionnaires. Among girls, protein intake was positively associated with %BF and FMI [estimate (SE) for %BF: 0.007 (0.003), p = 0.04; for FMI: 0.092 (0.002), p = 0.03], adjusted for pubertal stage, breast-feeding, maternal overweight, carbohydrate intake, energy intake, and physical activity level. Furthermore, there was an interaction between paternal occupation and the relations of dietary protein with %BF and FMI (p for interaction ≤ 0.04). None of the associations between protein intake and %BF, FMI, or FFMI was found among boys. Our data indicate that school-aged girls, but not boys, living in South China with higher dietary protein intake might have higher body fat mass, and that this association could be modified by paternal occupation.

  4. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano; Huser, Raphaë l; Genton, Marc G.

    2015-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
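
    The trade-off the authors study can be sketched in miniature: an order-2 (pairwise) composite log-likelihood sums bivariate log-densities over pairs of sites, and truncation keeps only nearby pairs. The sketch below is illustrative only, with a bivariate Gaussian standing in for the max-stable pairwise densities of the paper; the function names and the AR(1) toy data are invented.

```python
import math, itertools, random

def bvn_logpdf(x, y, rho):
    """Log-density of a standard bivariate normal with correlation rho."""
    q = (x * x - 2.0 * rho * x * y + y * y) / (1.0 - rho * rho)
    return -math.log(2.0 * math.pi) - 0.5 * math.log(1.0 - rho * rho) - 0.5 * q

def pairwise_cl(rho, samples, max_dist=None):
    """Order-2 composite log-likelihood over site pairs.
    Truncation: only pairs (i, j) with |i - j| <= max_dist contribute."""
    d = len(samples[0])
    total = 0.0
    for i, j in itertools.combinations(range(d), 2):
        if max_dist is not None and j - i > max_dist:
            continue  # truncate the composite likelihood to nearby pairs
        for z in samples:
            # correlation decays as rho^(lag) for this AR(1) toy field
            total += bvn_logpdf(z[i], z[j], rho ** (j - i))
    return total

# Simulate an AR(1)-correlated field at d sites and recover rho by grid search.
random.seed(1)
d, n, true_rho = 6, 400, 0.6
samples = []
for _ in range(n):
    z = [random.gauss(0, 1)]
    for _ in range(d - 1):
        z.append(true_rho * z[-1] + math.sqrt(1 - true_rho**2) * random.gauss(0, 1))
    samples.append(z)

grid = [r / 100 for r in range(30, 90, 2)]
full = max(grid, key=lambda r: pairwise_cl(r, samples))               # all pairs
trunc = max(grid, key=lambda r: pairwise_cl(r, samples, max_dist=1))  # adjacent pairs only
```

    Maximizing either version recovers the dependence parameter; the truncated sum trades some statistical efficiency for far fewer terms, which is exactly the trade-off the article quantifies for max-stable models.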

  5. High-order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano

    2015-09-29

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  6. Supplementary Material for: High-Order Composite Likelihood Inference for Max-Stable Distributions and Processes

    KAUST Repository

    Castruccio, Stefano; Huser, Raphaë l; Genton, Marc G.

    2016-01-01

    In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of points is a very challenging problem and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.

  7. THE COMPARISON OF ESTIMATION OF LATENT TRAITS USING MAXIMUM LIKELIHOOD AND BAYES METHODS

    Directory of Open Access Journals (Sweden)

    Heri Retnawati

    2015-10-01

    Full Text Available This study aimed to compare the accuracy of the estimation of latent ability (latent trait) in the logistic model using joint maximum likelihood (ML) and Bayes methods. The study used the Monte Carlo simulation method, with students' responses to the national junior secondary school (SMP) mathematics examination as the data model; the simulation variables were test length and number of examinees. Data were generated using SAS/IML with 40 replications, and each data set was estimated with ML and Bayes. The estimates were then compared with the true abilities by computing the mean square error (MSE) and the correlation between the true latent abilities and the estimates; the method with the smaller MSE is considered the better estimation method. The results show that for estimation of latent ability with 15, 20, 25, and 30 items and 500 or 1,000 examinees, the MSE was not yet stable, but with 1,500 examinees the ML and Bayes methods reached nearly equal estimation accuracy. For 15 and 20 items with 500, 1,000, or 1,500 examinees the MSE remained unstable, whereas estimation with 25 or 30 items, whether with 500, 1,000, or 1,500 examinees, was more accurate with the ML method. Keywords: ability estimation, maximum likelihood method, Bayes method
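
    The two estimators being compared can be sketched for a single examinee under a 2PL logistic model: ML maximizes the response likelihood over ability, while the Bayes (EAP) estimate averages ability over a standard normal prior. The item parameters and responses below are invented for illustration, not taken from the study.

```python
import math

def p_correct(theta, a, b):
    """2PL item response probability for discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def loglik(theta, responses, items):
    ll = 0.0
    for u, (a, b) in zip(responses, items):
        p = p_correct(theta, a, b)
        ll += u * math.log(p) + (1 - u) * math.log(1 - p)
    return ll

def ml_estimate(responses, items):
    """Maximum likelihood ability estimate by grid search."""
    grid = [t / 100 for t in range(-400, 401)]
    return max(grid, key=lambda t: loglik(t, responses, items))

def eap_estimate(responses, items):
    """Bayes (EAP) estimate with a standard normal prior, by quadrature."""
    grid = [t / 100 for t in range(-400, 401)]
    w = [math.exp(loglik(t, responses, items)) * math.exp(-t * t / 2) for t in grid]
    return sum(t * wi for t, wi in zip(grid, w)) / sum(w)

# Five hypothetical items (a, b) and one response pattern: easy items right, hard wrong.
items = [(1.2, -1.0), (0.8, -0.5), (1.0, 0.0), (1.5, 0.5), (0.9, 1.0)]
responses = [1, 1, 1, 0, 0]
theta_ml = ml_estimate(responses, items)
theta_eap = eap_estimate(responses, items)
```

    The EAP estimate is shrunk toward the prior mean; with more items the likelihood dominates the prior and the two estimates converge, mirroring the study's comparison of the two methods as information grows.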

  8. Higher risks when working unusual times? A cross-validation of the effects on safety, health, and work-life balance.

    Science.gov (United States)

    Greubel, Jana; Arlinghaus, Anna; Nachreiner, Friedhelm; Lombardi, David A

    2016-11-01

    Replication and cross-validation of results on health and safety risks of work at unusual times. Data from two independent surveys (European Working Conditions Surveys 2005 and 2010; EU 2005: n = 23,934 and EU 2010: n = 35,187) were used to examine the relative risks of working at unusual times (evenings, Saturdays, and Sundays) on work-life balance, work-related health complaints, and occupational accidents using logistic regression while controlling for potential confounders such as demographics, work load, and shift work. For the EU 2005 survey, evening work was significantly associated with an increased risk of poor work-life balance (OR 1.69) and work-related health complaints (OR 1.14), Saturday work with poor work-life balance (OR 1.49) and occupational accidents (OR 1.34), and Sunday work with poor work-life balance (OR 1.15) and work-related health complaints (OR 1.17). For EU 2010, evening work was associated with poor work-life balance (OR 1.51) and work-related health complaints (OR 1.12), Saturday work with poor work-life balance (OR 1.60) and occupational accidents (OR 1.19) but a decrease in risk for work-related health complaints (OR 0.86) and Sunday work with work-related health complaints (OR 1.13). Risk estimates in both samples yielded largely similar results with comparable ORs and overlapping confidence intervals. Work at unusual times constitutes a considerable risk to social participation and health and showed structurally consistent effects over time and across samples.

  9. Parametric Roll Resonance Detection using Phase Correlation and Log-likelihood Testing Techniques

    DEFF Research Database (Denmark)

    Galeazzi, Roberto; Blanke, Mogens; Poulsen, Niels Kjølstad

    2009-01-01

    Real-time detection of parametric roll is still an open issue that is gathering increasing attention. First-generation warning systems, based on guidelines and polar diagrams, showed their potential to address issues like long-term prediction and risk assessment. This paper presents a second-generation warning system, the purpose of which is to provide the master with an onboard system able to trigger an alarm when parametric roll is likely to happen within the immediate future. A detection scheme is introduced which is able to issue a warning within five roll periods after a resonant motion has started. After having determined statistical properties of the signals at hand, a detector based on the generalised log-likelihood ratio test (GLRT) is designed to look for variations in signal power. The ability of the detector to trigger alarms when parametric roll is going to onset is evaluated on two...
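
    The power-change idea behind such a detector can be sketched as a windowed GLRT for a variance increase in a zero-mean Gaussian signal. This is a generic illustration of the GLRT, not the paper's detector: the window length, threshold, and toy signal are all invented.

```python
import math, random

def glrt_stat(window, sigma0_sq):
    """Generalised log-likelihood ratio for a variance change in a zero-mean
    Gaussian window: 2 ln Lambda = n * (r - 1 - ln r), with r = s^2 / sigma0^2."""
    n = len(window)
    s_sq = sum(x * x for x in window) / n  # ML variance estimate under H1
    r = s_sq / sigma0_sq
    return n * (r - 1.0 - math.log(r))

def detect(signal, sigma0_sq, win=50, threshold=15.0):
    """Return the start index of the first window whose GLRT statistic
    crosses the threshold, scanning non-overlapping windows."""
    for start in range(0, len(signal) - win + 1, win):
        if glrt_stat(signal[start:start + win], sigma0_sq) > threshold:
            return start
    return None

# Toy signal: nominal power, then a sudden power increase (the "onset").
random.seed(7)
quiet = [random.gauss(0, 1.0) for _ in range(500)]
loud = [random.gauss(0, 3.0) for _ in range(200)]
alarm_at = detect(quiet + loud, sigma0_sq=1.0)
```

    On this toy signal the statistic stays small over the nominal-power windows and jumps by well over an order of magnitude once the high-power segment begins, so the alarm fires at the first window after onset.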

  10. Searching for degenerate Higgs bosons using a profile likelihood ratio method

    CERN Document Server

    Heikkilä, Jaana

    ATLAS and CMS collaborations at the Large Hadron Collider have observed a new resonance consistent with the standard model Higgs boson. However, it has been suggested that the observed signal could also be produced by multiple nearly mass-degenerate states that couple differently to the standard model particles. In this work, a method to discriminate between the hypothesis of a single Higgs boson and that of multiple mass-degenerate Higgs bosons was developed. Using the matrix of measured signal strengths in different production and decay modes, parametrizations for the two hypotheses were constructed as a general rank 1 matrix and the most general $5 \times 4$ matrix, respectively. The test statistic was defined as a ratio of profile likelihoods for the two hypotheses. The method was applied to the CMS measurements. The expected test statistic distribution was estimated twice by generating pseudo-experiments according to both the standard model hypothesis and the single Higgs boson hypothesis best fitting...

  11. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    Science.gov (United States)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  12. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    Science.gov (United States)

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
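
    The "workhorse" step, finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one, can be sketched on a plain list of eigenvalues. The function name and example values are illustrative; a real implementation would apply this to the eigenvalues of the candidate matrix μ and rebuild ρ in its eigenbasis.

```python
def closest_distribution(mu):
    """Closest probability vector (2-norm) to reals mu summing to one:
    working from the smallest value upward, clip entries that would go
    negative to zero and spread the deficit uniformly over the rest."""
    lam = [0.0] * len(mu)
    order = sorted(range(len(mu)), key=lambda i: mu[i], reverse=True)
    acc = 0.0                     # accumulated negative mass to redistribute
    k = len(mu)                   # number of entries still in play
    for pos in range(len(mu) - 1, -1, -1):
        i = order[pos]
        if mu[i] + acc / k < 0:   # entry would go negative: clip to zero
            acc += mu[i]
            lam[i] = 0.0
            k -= 1
        else:                     # remaining entries absorb the deficit
            for q in range(pos + 1):
                lam[order[q]] = mu[order[q]] + acc / k
            break
    return lam

# Example: "eigenvalues" from a noisy tomographic inversion.
mu = [0.6, 0.5, -0.1]
lam = closest_distribution(mu)   # clips -0.1 and spreads the deficit
```

    Each pass either clips one entry or finishes, so the scan after sorting is linear in the number of eigenvalues, matching the linear-time claim in the abstract.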

  13. On Maximum Likelihood Estimation for Left Censored Burr Type III Distribution

    Directory of Open Access Journals (Sweden)

    Navid Feroze

    2015-12-01

    Full Text Available Burr type III is an important distribution used to model failure-time data. The paper addresses the problem of estimating the parameters of the Burr type III distribution by maximum likelihood estimation (MLE) when the samples are left censored. As a closed-form expression for the MLEs of the parameters cannot be derived, approximate solutions have been obtained through iterative procedures. An extensive simulation study has been carried out to investigate the performance of the estimators with respect to sample size, censoring rate and true parameter values. A real-life example has also been presented. The study revealed that the proposed estimators are consistent and capable of providing efficient results under small to moderate samples.
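
    A minimal numerical sketch of the estimation problem: the left-censored log-likelihood mixes log-density terms for observed failures with log-CDF terms for censored ones, and is maximized here by a crude grid search (the paper uses proper iterative procedures). The censoring point, sample size, and parameter values below are invented.

```python
import math, random

def burr3_logpdf(x, c, k):
    """Burr III density: f(x) = c k x^-(c+1) (1 + x^-c)^-(k+1)."""
    return (math.log(c) + math.log(k) - (c + 1) * math.log(x)
            - (k + 1) * math.log(1 + x ** (-c)))

def burr3_logcdf(x, c, k):
    """Burr III distribution function: F(x) = (1 + x^-c)^-k."""
    return -k * math.log(1 + x ** (-c))

def loglik_left_censored(data, T, c, k):
    """Left censoring at T: values below T are only known to be below T,
    so they contribute log F(T); observed values contribute the log-density."""
    return sum(burr3_logcdf(T, c, k) if x < T else burr3_logpdf(x, c, k)
               for x in data)

# Simulate Burr III data by inverse-CDF sampling: x = (u^(-1/k) - 1)^(-1/c).
random.seed(3)
c_true, k_true, T = 2.0, 1.0, 0.5
data = [(random.random() ** (-1.0 / k_true) - 1.0) ** (-1.0 / c_true)
        for _ in range(500)]

grid_c = [1.0 + 0.1 * i for i in range(31)]   # 1.0 .. 4.0
grid_k = [0.4 + 0.1 * i for i in range(27)]   # 0.4 .. 3.0
c_hat, k_hat = max(((c, k) for c in grid_c for k in grid_k),
                   key=lambda p: loglik_left_censored(data, T, *p))
```

    With roughly 20% of the sample censored at T = 0.5, the grid maximizer lands near the true (c, k) = (2, 1); a real analysis would refine this with Newton-type iterations as in the paper.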

  14. Implementation of non-linear filters for iterative penalized maximum likelihood image reconstruction

    International Nuclear Information System (INIS)

    Liang, Z.; Gilland, D.; Jaszczak, R.; Coleman, R.

    1990-01-01

    In this paper, the authors report on the implementation of six edge-preserving, noise-smoothing, non-linear filters applied in image space for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The non-linear smoothing filters implemented were the median filter, the E 6 filter, the sigma filter, the edge-line filter, the gradient-inverse filter, and the 3-point edge filter with gradient-inverse weight. A 3 x 3 window was used for all these filters. The best image obtained, judged by viewing the profiles through the image in terms of noise-smoothing, edge-sharpening, and contrast, was the one smoothed with the 3-point edge filter. The computation time for the smoothing was less than 1% of one iteration, and the memory space for the smoothing was negligible. These images were compared with the results obtained using Bayesian analysis.
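
    Of the six filters, the median filter is the simplest to sketch; a hypothetical pure-Python version applied in image space between reconstruction iterations might look like this (toy 3 x 5 "image", not SPECT data):

```python
def median_filter_3x3(img):
    """3x3 median smoothing with edge replication; img is a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)   # replicate border pixels
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            out[y][x] = sorted(window)[4]             # median of 9 values
    return out

# A flat region with one impulse ("hot pixel") next to a step edge.
img = [[10, 10, 10, 80, 80],
       [10, 99, 10, 80, 80],
       [10, 10, 10, 80, 80]]
smoothed = median_filter_3x3(img)
```

    The example shows the edge-preserving, noise-smoothing behaviour the authors compare: the isolated hot pixel is removed while the step between the 10- and 80-valued regions survives intact.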

  15. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    Science.gov (United States)

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  16. Empirical likelihood based detection procedure for change point in mean residual life functions under random censorship.

    Science.gov (United States)

    Chen, Ying-Ju; Ning, Wei; Gupta, Arjun K

    2016-05-01

    The mean residual life (MRL) function is one of the basic parameters of interest in survival analysis; it describes the expected remaining lifetime of an individual after a certain age. The study of changes in the MRL function is practical and interesting because it may help us to identify factors, such as age and gender, that may influence the remaining lifetimes of patients after receiving a certain surgery. In this paper, we propose a detection procedure based on the empirical likelihood for changes in MRL functions with right-censored data. Two real examples, the Veterans' Administration lung cancer study and the Stanford heart transplant data, are given to illustrate the detection procedure. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Maximum Likelihood based comparison of the specific growth rates for P. aeruginosa and four mutator strains

    DEFF Research Database (Denmark)

    Philipsen, Kirsten Riber; Christiansen, Lasse Engbo; Mandsberg, Lotte Frigaard

    2008-01-01

    Measurements of optical density (OD) are used for parameter estimation. The data are log-transformed such that a linear model can be applied. The transformation changes the variance structure, and hence an OD-dependent variance is implemented in the model. The autocorrelation in the data is demonstrated, and a correlation model with an exponentially decaying function of the time between observations is suggested. A model with a full covariance structure containing OD-dependent variance and an autocorrelation structure is compared to a model with variance only and to one with no variance or correlation implemented. It is shown that the model that best describes the data is the one taking into account the full covariance structure. An inference study is made in order to determine whether the growth rates of the five bacteria strains are the same. After applying a likelihood-ratio test to models with a full covariance structure, it is concluded...

  18. Supervised maximum-likelihood weighting of composite protein networks for complex prediction

    Directory of Open Access Journals (Sweden)

    Yong Chern Han

    2012-12-01

    Full Text Available Abstract Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to

  19. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    Science.gov (United States)

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
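
    The mechanics of the method (estimate each study's LogLR function, sum them, and read off the peak and an "intrinsic" interval) can be sketched with a normal approximation to each study's likelihood. The estimates, standard errors, and the 1/8 support level below are invented for illustration; they are not the CAPRIE or statin data.

```python
import math

def combined_loglr(theta, studies):
    """Sum of per-study log-likelihood-ratio functions, each a normal
    approximation logLR_i(theta) = -(theta - est_i)^2 / (2 se_i^2)
    relative to that study's own maximum."""
    return sum(-(theta - est) ** 2 / (2 * se * se) for est, se in studies)

def pool(studies, support=8.0):
    """Pooled estimate and 1/support likelihood interval from the combined LogLR."""
    grid = [t / 1000 for t in range(-5000, 5001)]
    vals = [combined_loglr(t, studies) for t in grid]
    peak = max(vals)
    inside = [t for t, v in zip(grid, vals) if v >= peak - math.log(support)]
    theta_hat = grid[vals.index(peak)]
    return theta_hat, (min(inside), max(inside))

# Two hypothetical studies: (effect estimate, standard error).
studies = [(1.0, 1.0), (3.0, 1.0)]
theta_hat, interval = pool(studies)
```

    With equal standard errors the peak lands at the inverse-variance weighted mean (here 2.0), matching the traditional point estimate, while the support interval plays the role of the appropriately wider intrinsic interval discussed in the abstract.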

  20. Predictors of Likelihood of Speaking Up about Safety Concerns in Labour and Delivery

    Science.gov (United States)

    Lyndon, Audrey; Sexton, J. Bryan; Simpson, Kathleen Rice; Rosenstein, Alan; Lee, Kathryn A.; Wachter, Robert M.

    2011-01-01

    Background Despite widespread emphasis on promoting “assertive communication” by caregivers as essential to patient safety improvement efforts, fairly little is known about when and how clinicians speak up to address safety concerns. In this cross-sectional study we use a new measure of speaking up to begin exploring this issue in maternity care. Methods We developed a scenario-based measure of clinicians’ assessment of potential harm and likelihood of speaking up in response to perceived harm. We embedded this scale in a survey with measures of safety climate, teamwork climate, disruptive behaviour, work stress, and personality traits of bravery and assertiveness. The survey was distributed to all registered nurses and obstetricians practicing in two US Labour & Delivery units. Results The response rate was 54% (125 of 230 potential respondents). Respondents were experienced clinicians (13.7 ± 11 years in specialty). Higher perception of harm, respondent role, specialty experience, and site predicted likelihood of speaking up when controlling for bravery and assertiveness. Physicians rated potential harm in common clinical scenarios lower than nurses did (7.5 vs. 8.4 on a 2–10 scale; p<0.001). Some participants (12%) indicated they were unlikely to speak up despite perceiving high potential for harm in certain situations. Discussion This exploratory study found that nurses and physicians differed in their harm ratings, and that harm rating was a predictor of speaking up. This may partially explain persistent discrepancies between physicians and nurses in teamwork climate scores. Differing assessments of the potential harms inherent in everyday practice may be a target for teamwork intervention in maternity care. PMID:22927492

  1. Prevalence of bloodstream pathogens is higher in neonatal encephalopathy cases vs. controls using a novel panel of real-time PCR assays.

    Science.gov (United States)

    Tann, Cally J; Nkurunziza, Peter; Nakakeeto, Margaret; Oweka, James; Kurinczuk, Jennifer J; Were, Jackson; Nyombi, Natasha; Hughes, Peter; Willey, Barbara A; Elliott, Alison M; Robertson, Nicola J; Klein, Nigel; Harris, Kathryn A

    2014-01-01

    In neonatal encephalopathy (NE), infectious co-morbidity is difficult to diagnose accurately, but may increase the vulnerability of the developing brain to hypoxia-ischemia. We developed a novel panel of species-specific real-time PCR assays to identify bloodstream pathogens amongst newborns with and without NE in Uganda. Multiplex real-time PCR assays for important neonatal bloodstream pathogens (gram positive and gram negative bacteria, cytomegalovirus (CMV), herpes simplex virus (HSV) and P. falciparum) were performed on whole blood taken from 202 encephalopathic and 101 control infants. Automated blood culture (BACTEC) was performed for all cases and unwell controls. Prevalence of pathogenic bacterial species amongst infants with NE was 3.6%, 6.9% and 8.9%, with culture, PCR and both tests in combination, respectively. More encephalopathic infants than controls had pathogenic bacterial species detected (8.9% vs 2.0%, p = 0.028) using culture and PCR in combination. PCR detected bacteremia in 11 culture negative encephalopathic infants (3 Group B Streptococcus, 1 Group A Streptococcus, 1 Staphylococcus aureus and 6 Enterobacteriaceae). Coagulase negative staphylococcus, frequently detected by PCR amongst case and control infants, was considered a contaminant. Prevalence of CMV, HSV and malaria amongst cases was low (1.5%, 0.5% and 0.5%, respectively). This real-time PCR panel detected more bacteremia than culture alone and provides a novel tool for detection of neonatal bloodstream pathogens that may be applied across a range of clinical situations and settings. Significantly more encephalopathic infants than controls had pathogenic bacterial species detected, suggesting that infection may be an important risk factor for NE in this setting.

  2. Prevalence of bloodstream pathogens is higher in neonatal encephalopathy cases vs. controls using a novel panel of real-time PCR assays.

    Directory of Open Access Journals (Sweden)

    Cally J Tann

    Full Text Available In neonatal encephalopathy (NE), infectious co-morbidity is difficult to diagnose accurately, but may increase the vulnerability of the developing brain to hypoxia-ischemia. We developed a novel panel of species-specific real-time PCR assays to identify bloodstream pathogens amongst newborns with and without NE in Uganda. Multiplex real-time PCR assays for important neonatal bloodstream pathogens (gram positive and gram negative bacteria, cytomegalovirus (CMV), herpes simplex virus (HSV) and P. falciparum) were performed on whole blood taken from 202 encephalopathic and 101 control infants. Automated blood culture (BACTEC) was performed for all cases and unwell controls. Prevalence of pathogenic bacterial species amongst infants with NE was 3.6%, 6.9% and 8.9%, with culture, PCR and both tests in combination, respectively. More encephalopathic infants than controls had pathogenic bacterial species detected (8.9% vs 2.0%, p = 0.028) using culture and PCR in combination. PCR detected bacteremia in 11 culture negative encephalopathic infants (3 Group B Streptococcus, 1 Group A Streptococcus, 1 Staphylococcus aureus and 6 Enterobacteriaceae). Coagulase negative staphylococcus, frequently detected by PCR amongst case and control infants, was considered a contaminant. Prevalence of CMV, HSV and malaria amongst cases was low (1.5%, 0.5% and 0.5%, respectively). This real-time PCR panel detected more bacteremia than culture alone and provides a novel tool for detection of neonatal bloodstream pathogens that may be applied across a range of clinical situations and settings. Significantly more encephalopathic infants than controls had pathogenic bacterial species detected, suggesting that infection may be an important risk factor for NE in this setting.

  3. Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB

    CERN Document Server

    Millar, Russell B

    2011-01-01

    This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis

  4. DREAM3: network inference using dynamic context likelihood of relatedness and the inferelator.

    Directory of Open Access Journals (Sweden)

    Aviv Madar

    2010-03-01

    Full Text Available Many current works aiming to learn regulatory networks from systems biology data must balance model complexity with respect to data availability and quality. Methods that learn regulatory associations based on unit-less metrics, such as mutual information, are attractive in that they scale well and reduce the number of free parameters (model complexity) per interaction to a minimum. In contrast, methods for learning regulatory networks based on explicit dynamical models are more complex and scale less gracefully, but are attractive as they may allow direct prediction of transcriptional dynamics and resolve the directionality of many regulatory interactions. We aim to investigate whether scalable information-based methods (like the Context Likelihood of Relatedness method) and more explicit dynamical models (like Inferelator 1.0) prove synergistic when combined. We test a pipeline where a novel modification of the Context Likelihood of Relatedness (mixed-CLR, modified to use time-series data) is first used to define likely regulatory interactions, and then Inferelator 1.0 is used for final model selection and to build an explicit dynamical model. Our method ranked 2nd out of 22 in the DREAM3 100-gene in silico networks challenge. Mixed-CLR and Inferelator 1.0 are complementary, demonstrating a large performance gain relative to any single tested method, with precision being especially high at low recall values. Partitioning the provided data set into four groups (knock-down, knock-out, time-series, and combined) revealed that using comprehensive knock-out data alone provides optimal performance. Inferelator 1.0 proved particularly powerful at resolving the directionality of regulatory interactions, i.e. "who regulates who" (approximately of identified true positives were correctly resolved). Performance drops for high in-degree genes, i.e. as the number of regulators per target gene increases, but not with out-degree, i.e. performance is not affected by
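
    The CLR step (without the time-series modification) can be sketched as a background correction of a mutual-information matrix: each score is rescaled against the MI distributions of the two genes it links. The matrix below is an invented toy, and the real mixed-CLR differs in how it incorporates time-series data.

```python
import math

def clr_scores(mi):
    """Context Likelihood of Relatedness: rescale a mutual-information
    matrix so each score reflects how extreme MI(i, j) is against the
    background of row i and row j."""
    n = len(mi)
    means = [sum(row) / n for row in mi]
    stds = []
    for i, row in enumerate(mi):
        var = sum((v - means[i]) ** 2 for v in row) / n
        stds.append(math.sqrt(var) or 1.0)  # guard against zero spread
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            zi = max(0.0, (mi[i][j] - means[i]) / stds[i])
            zj = max(0.0, (mi[i][j] - means[j]) / stds[j])
            z[i][j] = math.sqrt(zi * zi + zj * zj)  # combined background z-score
    return z

# Toy MI matrix: genes 0 and 1 share far more information than background.
mi = [[0.0, 0.9, 0.1, 0.1],
      [0.9, 0.0, 0.1, 0.1],
      [0.1, 0.1, 0.0, 0.1],
      [0.1, 0.1, 0.1, 0.0]]
z = clr_scores(mi)
```

    The strongly co-informative pair (0, 1) stands out against the flat background, which is the filtering role CLR plays before Inferelator's model selection in the pipeline.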

  5. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
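
    The solver at MADmap's core can be sketched in a few lines: preconditioned conjugate gradient applied to the map-making normal equations (A^T N^-1 A) m = A^T N^-1 d, where the matrix is only ever applied as a matrix-vector product. The 3 x 3 system and Jacobi preconditioner below are invented stand-ins; the real code applies FFT-based operations for the correlated-noise weighting and runs distributed.

```python
def conjugate_gradient(matvec, b, precond, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for A x = b, with A given only
    through a matrix-vector product (as in map-making, where the normal
    matrix is never formed explicitly)."""
    x = [0.0] * len(b)
    r = b[:]                       # residual b - A x0, with x0 = 0
    z = precond(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD stand-in for the map-making normal matrix A^T N^-1 A.
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
mv = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]  # diagonal preconditioner
x = conjugate_gradient(mv, b, jacobi)
```

    A good preconditioner cuts the iteration count, which is what makes the approach viable at the O(10^8)-pixel scales the abstract quotes.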

  6. Higher percentage of in vitro apoptotic cells at time of diagnosis in patients with chronic lymphocytic leukemia indicate earlier treatment requirement: Ten years follow up

    Directory of Open Access Journals (Sweden)

    Kravić-Stevović Tamara

    2014-01-01

    Full Text Available Introduction. Chronic lymphocytic leukemia (CLL) has an extremely variable clinical course. Biological reasons for that wide variation in clinical course and survival rates in CLL patients are not fully understood. Objective. The aim of the study was to evaluate the value of spontaneous apoptosis of CLL cells in vitro, determined at presentation of disease, in prediction of treatment requirements and evolution of the CLL. Methods. Malignant B cells were isolated from the whole blood of 30 newly diagnosed CLL patients and cultured for 24 hours in RPMI-1640 medium supplemented with 10% of serum obtained from the same CLL patient. Cells were later fixed and processed for embedding in Epon, or cell smears were prepared and stained with the TUNEL technique. Results. Ten-year follow-up revealed that patients with a lower percentage of cells in apoptosis at presentation of disease had a significantly longer time to treatment initiation (log-rank test, p<0.05). Conclusion. The results of this study emphasize the importance of apoptosis of CLL cells at the time of the initial diagnosis in the pathobiology of this disease. [Projekat Ministarstva nauke Republike Srbije, br. 41025]

  7. Beyond Sex: Likelihood and Predictors of Effective and Ineffective Intervention in Intimate Partner Violence in Bystanders Perceiving an Emergency.

    Science.gov (United States)

    Chabot, Heather Frasier; Gray, Melissa L; Makande, Tariro B; Hoyt, Robert L

    2016-01-06

    Within the framework of the bystander model of intervention, we examined specific correlates and the likelihood of effective and ineffective intervention strategies of bystanders to an instance of intimate partner violence (IPV) identified as an emergency. We measured psychological variables associated with general prosocial behavior (including sex, instrumentality, expressiveness, empathy, personal distress, dispositional anger, and perceived barriers) as influential predictors in four IPV intervention behaviors (i.e., calling 911, talking to the victim, talking to the perpetrator, and physically interacting with the perpetrator). One hundred seventeen college community members completed preintervention measures, watched a film clip of IPV which they identified as an emergency, reported their likelihood of becoming involved and utilizing intervention behaviors, and identified perceived barriers to intervention. Participants were more likely to indicate using effective over ineffective intervention tactics. Lower perceived barriers to intervention predicted greater intervention likelihood. Hierarchical regression indicated that men and individuals higher in anger and instrumental traits were more likely to report that they would engage in riskier ineffective forms of intervention. Implications regarding bystander training and associations to intervention in related forms of violence including sexual assault are discussed. © The Author(s) 2016.

  8. Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality.

    Science.gov (United States)

    Malle, Bertram F; Holbrook, Jess

    2012-04-01

    People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences 1 at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether 4 major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences-from intentionality and desire to belief to personality-that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.

  9. Low Birthweight Increases the Likelihood of Severe Steatosis in Pediatric Non-Alcoholic Fatty Liver Disease.

    Science.gov (United States)

    Bugianesi, Elisabetta; Bizzarri, Carla; Rosso, Chiara; Mosca, Antonella; Panera, Nadia; Veraldi, Silvio; Dotta, Andrea; Giannone, Germana; Raponi, Massimiliano; Cappa, Marco; Alisi, Anna; Nobili, Valerio

    2017-08-01

    Small for gestational age (SGA) is associated with an increased risk of non-alcoholic fatty liver disease (NAFLD). Our aim was to investigate the correlation of birthweight with the severity of liver damage in a large cohort of children with NAFLD. Two hundred and eighty-eight consecutive Caucasian Italian overweight/obese children with biopsy-proven NAFLD were included in the study. We examined the relative association of each histological feature of NAFLD with metabolic alterations, insulin-resistance, I148M polymorphism in the patatin-like phospholipase domain-containing protein 3 (PNPLA3) gene, and birthweight relative to gestational age. In the whole NAFLD cohort, 12.2% of patients were SGA, 62.8% appropriate for gestational age (AGA), and 25% large for gestational age (LGA). SGA children had a higher prevalence of severe steatosis (69%) and severe portal inflammation (14%) compared with the AGA and LGA groups. Notably, severe steatosis (>66%) decreased from SGA to AGA and LGA, whereas the prevalence of moderate steatosis (33-66%) was similar in the three groups. The prevalence of type 1 NAFLD was higher in the LGA group with respect to the other two groups (25% vs. 5.2% vs. 9.4%), whereas the SGA group showed a higher prevalence of the overlap type (85.8%) with respect to the LGA group (51.4%) but not compared with the AGA group (75%). At multivariable regression analysis, SGA at birth increased fourfold the likelihood of severe steatosis (odds ratio (OR) 4.0, 95% confidence interval (CI) 1.43-10.9, P=0.008) and threefold the likelihood of NAFLD Activity Score (NAS)≥5 (OR 2.98, 95% CI 1.06-8.33, P=0.037) independently of homeostasis model assessment of insulin resistance and PNPLA3 genotype. The PNPLA3-CC wild-type genotype was the strongest independent predictor of the absence of significant fibrosis (OR 0.26, 95% CI 0.13-0.52, P<0.001). In children with NAFLD, the risk of severe steatosis is increased by SGA at birth, independent of and in addition to other

  10. An Estimate of the Likelihood for a Climatically Significant Volcanic Eruption Within the Present Decade (2000-2009)

    Science.gov (United States)

    Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)

    2000-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and measuring 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially, those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
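
    The quoted probabilities follow from the Poisson formula P(at least one event in a decade) = 1 - exp(-lam), where lam is the expected number of events per decade. In the sketch below the decadal rates are illustrative values chosen to roughly reproduce the abstract's numbers; the paper derives its rates from the 1750-onward eruption record.

```python
import math

def prob_at_least_one(lam: float) -> float:
    """Poisson probability of one or more events given mean rate lam per decade."""
    return 1.0 - math.exp(-lam)

# Illustrative decadal rates (assumed, not taken from the paper's tables):
rates = {"VEI >= 4": 7.0, "VEI >= 5": 0.67, "VEI >= 6": 0.20}
for label, lam in rates.items():
    print(f"{label}: {prob_at_least_one(lam):.1%}")
```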

  11. Likelihood of Unemployed Smokers vs Nonsmokers Attaining Reemployment in a One-Year Observational Study.

    Science.gov (United States)

    Prochaska, Judith J; Michalek, Anne K; Brown-Johnson, Catherine; Daza, Eric J; Baiocchi, Michael; Anzai, Nicole; Rogers, Amy; Grigg, Mia; Chieng, Amy

    2016-05-01

    Studies in the United States and Europe have found higher smoking prevalence among unemployed job seekers relative to employed workers. While consistent, the extant epidemiologic investigations of smoking and work status have been cross-sectional, leaving it underdetermined whether tobacco use is a cause or effect of unemployment. To examine differences in reemployment by smoking status in a 12-month period. An observational 2-group study was conducted from September 10, 2013, to August 15, 2015, in employment service settings in the San Francisco Bay Area (California). Participants were 131 daily smokers and 120 nonsmokers, all of whom were unemployed job seekers. Owing to the study's observational design, a propensity score analysis was conducted using inverse probability weighting with trimmed observations, including covariates of time out of work, age, education, race/ethnicity, and perceived health status as predictors of smoking status. Reemployment at 12-month follow-up. Of the 251 study participants, 165 (65.7%) were men, with a mean (SD) age of 48 (11) years; 96 participants were white (38.2%), 90 were black (35.9%), 24 were Hispanic (9.6%), 18 were Asian (7.2%), and 23 were multiracial or other race (9.2%); 78 had a college degree (31.1%), 99 were unstably housed (39.4%), 70 lacked reliable transportation (27.9%), 52 had a criminal history (20.7%), and 72 had received prior treatment for alcohol or drug use (28.7%). Smokers consumed a mean (SD) of 13.5 (8.2) cigarettes per day at baseline. At 12-month follow-up (217 participants retained [86.5%]), 60 of 108 nonsmokers (55.6%) were reemployed compared with 29 of 109 smokers (26.6%) (unadjusted risk difference, 0.29; 95% CI, 0.15-0.42). With 6% of analysis sample observations trimmed, the estimated risk difference indicated that nonsmokers were 30% (95% CI, 12%-48%) more likely on average to be reemployed at 1 year relative to smokers. Results of a sensitivity analysis with additional covariates of sex
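
    The inverse-probability-weighting idea used in the study can be sketched in a few lines: weight each subject by the inverse of their (propensity-score) probability of being in their observed group, so the two groups become comparable on the confounders. Everything below is simulated and simplified: weights use the true propensity rather than a fitted logistic model, there is a single confounder, and no weight trimming, so this is only the generic technique, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                          # one confounder, e.g. time out of work
p_smoke = 1 / (1 + np.exp(-(x - 0.5)))          # propensity to smoke given x (assumed)
smoker = rng.random(n) < p_smoke
p_reemp = 1 / (1 + np.exp(-(0.5 - 0.3 * x - 1.2 * smoker)))
reemployed = rng.random(n) < p_reemp

# Inverse-probability weights; in practice the propensity is estimated
# (e.g. by logistic regression) and extreme weights are trimmed.
w1 = smoker / p_smoke
w0 = (~smoker) / (1 - p_smoke)
rate_smoker = (w1 * reemployed).sum() / w1.sum()
rate_nonsmoker = (w0 * reemployed).sum() / w0.sum()
risk_difference = rate_nonsmoker - rate_smoker  # adjusted reemployment gap
```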

  12. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    Science.gov (United States)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
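
    The censoring correction works because stations that fail to report an arrival still carry information: the amplitude there was probably below the noise threshold, so ignoring them biases the network mean upward. The toy sketch below illustrates only this censored-likelihood idea, with normally distributed station magnitudes, a single known threshold, fixed sigma, and a grid search in place of a proper optimizer; it is not the paper's joint inversion or the Kelly-Lacoss threshold-fitting procedure.

```python
import math, random

random.seed(5)
mu_true, sigma, threshold = 5.0, 0.4, 5.1
n_stations = 400
obs = [random.gauss(mu_true, sigma) for _ in range(n_stations)]
detected = [m for m in obs if m > threshold]     # only above-threshold readings reported
n_censored = n_stations - len(detected)

def norm_logpdf(x, mu, s):
    return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))

def norm_logcdf(x, mu, s):
    return math.log(0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2)))))

def loglik(mu):
    # detected stations contribute densities; silent stations contribute
    # the probability of falling below the threshold
    ll = sum(norm_logpdf(m, mu, sigma) for m in detected)
    ll += n_censored * norm_logcdf(threshold, mu, sigma)
    return ll

naive = sum(detected) / len(detected)            # upward-biased network mean
ml = max((4.5 + 0.001 * i for i in range(1000)), key=loglik)
```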

  13. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    Science.gov (United States)

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  14. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    Science.gov (United States)

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  15. Existence and uniqueness of the maximum likelihood estimator for models with a Kronecker product covariance structure

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.

    2016-01-01

    This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator

  16. Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.

    NARCIS (Netherlands)

    Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.

    1996-01-01

    Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate

  17. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method

    NARCIS (Netherlands)

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-01-01

    Data to which the authors refer to throughout this article are likelihood ratios (LR) computed from the comparison of 5–12 minutiae fingermarks with fingerprints. These LRs data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a

  18. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation

    NARCIS (Netherlands)

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2017-01-01

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes’ inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of

  19. Predictors of Self-Reported Likelihood of Working with Older Adults

    Science.gov (United States)

    Eshbaugh, Elaine M.; Gross, Patricia E.; Satrom, Tatum

    2010-01-01

    This study examined the self-reported likelihood of working with older adults in a future career among 237 college undergraduates at a midsized Midwestern university. Although aging anxiety was not significantly related to likelihood of working with older adults, those students who had a greater level of death anxiety were less likely than other…

  20. Sampling variability in forensic likelihood-ratio computation: A simulation study

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Meuwly, Didier

    2015-01-01

    Recently, in the forensic biometric community, there is a growing interest to compute a metric called “likelihood-ratio” when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score and therefore a likelihood-ratio

  1. Statistical modelling of survival data with random effects h-likelihood approach

    CERN Document Server

    Ha, Il Do; Lee, Youngjo

    2017-01-01

    This book provides a groundbreaking introduction to the likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in the traditional likelihood-based methods for clustered survival data such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with an R package, “frailtyHL” in CRAN, provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...

  2. The likelihood principle and its proof – a never-ending story…

    DEFF Research Database (Denmark)

    Jørgensen, Thomas Martini

    2015-01-01

    An ongoing controversy in philosophy of statistics is the so-called “likelihood principle” essentially stating that all evidence which is obtained from an experiment about an unknown quantity θ is contained in the likelihood function of θ. Common classical statistical methodology, such as the use...... of significance tests, and confidence intervals, depends on the experimental procedure and unrealized events and thus violates the likelihood principle. The likelihood principle was identified by that name and proved in a famous paper by Allan Birnbaum in 1962. However, ever since both the principle itself...... as well as the proof has been highly debated. This presentation will illustrate the debate of both the principle and its proof, from 1962 and up to today. An often-used experiment to illustrate the controversy between classical interpretation and evidential confirmation based on the likelihood principle...

  3. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    Directory of Open Access Journals (Sweden)

    Wang Huai-Chun

    2009-09-01

    Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs. Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.

  4. Internationalization of Chinese Higher Education

    Science.gov (United States)

    Chen, Linhan; Huang, Danyan

    2013-01-01

    This paper probes into the development of internationalization of higher education in China from ancient times to modern times, including the emergence of international connections in Chinese higher education and the subsequent development of such connections, the further development of internationalization of Chinese higher education, and the…

  5. A Distributed Leadership Change Process Model for Higher Education

    Science.gov (United States)

    Jones, Sandra; Harvey, Marina

    2017-01-01

    The higher education sector operates in an increasingly complex global environment that is placing it under considerable stress and resulting in widespread change to the operating context and leadership of higher education institutions. The outcome has been the increased likelihood of conflict between academics and senior leaders, presaging the…

  6. Predicting Teacher Likelihood to Use School Gardens: A Case Study

    Science.gov (United States)

    Kincy, Natalie; Fuhrman, Nicholas E.; Navarro, Maria; Knauft, David

    2016-01-01

    A quantitative survey, built around the theory of planned behavior, was used to investigate elementary teachers' attitudes, school norms, perceived behavioral control, and intent in both current and ideal teaching situations toward using gardens in their curriculum. With positive school norms and teachers who garden in their personal time, 77% of…

  7. The fine-tuning cost of the likelihood in SUSY models

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    In SUSY models, the fine tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, that can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...
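
    The abstract's corrected statistic, chi^2_new = chi^2_old + 2*ln(Delta), makes the trade-off concrete: a parameter point with a better raw fit can still lose to a less fine-tuned one. The numbers below are invented purely for illustration.

```python
import math

def chi2_corrected(chi2_old: float, delta: float) -> float:
    """Fine-tuning-penalized chi-square, from L_corrected = L / Delta."""
    return chi2_old + 2.0 * math.log(delta)

point_a = chi2_corrected(chi2_old=20.0, delta=1000.0)  # good fit, heavy tuning
point_b = chi2_corrected(chi2_old=24.0, delta=50.0)    # worse fit, mild tuning
# point_b ends up preferred: its lower Delta outweighs its worse raw chi^2
```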

  8. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the model's marginal likelihood and prior probability. The heavy computation burden hinders the implementation of BMA prediction, especially for the elaborated marginal likelihood estimator. To overcome the computation burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated TIE estimates of a conceptual model's marginal likelihood show significantly less variability than those from the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
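
    The simplest of the four estimators, the AME, averages the likelihood over draws from the prior. The sketch below applies it to a conjugate toy model where the exact marginal likelihood is known in closed form, so the estimate can be checked; it does not reproduce the paper's sparse-grid surrogates, HME/SHME (which average reciprocal likelihoods over posterior draws and are notoriously unstable), or TIE (which integrates the expected log-likelihood along a prior-to-posterior temperature path).

```python
import numpy as np

# Toy model: y ~ N(theta, 1) with prior theta ~ N(0, 1), so the exact
# marginal likelihood of one observation y is N(y | 0, 2).
def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

rng = np.random.default_rng(1)
y, n = 1.0, 200_000

exact = normal_pdf(y, 0.0, 2.0)                 # analytic evidence
theta = rng.normal(0.0, 1.0, n)                 # samples from the prior
ame = normal_pdf(y, theta, 1.0).mean()          # arithmetic mean estimator
```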

  9. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    Science.gov (United States)

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts; some of them are "true zeros," indicating that the drug-adverse event pair cannot occur, and these are distinguished from the other zero counts, which simply indicate that the drug-adverse event pair has not occurred yet or has not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
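
    The zero-inflated Poisson model underlying the test says that each count is a structural zero with probability pi and Poisson(lam) otherwise, so zero-heavy data are fit far better than by a plain Poisson. The sketch below shows only the pmf and log-likelihood; the paper's test statistic, EM fitting, and stratification are omitted, and the counts are invented.

```python
import math

def zip_pmf(k: int, pi: float, lam: float) -> float:
    """Zero-inflated Poisson pmf: structural zero w.p. pi, else Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

def zip_loglik(counts, pi, lam):
    return sum(math.log(zip_pmf(k, pi, lam)) for k in counts)

counts = [0, 0, 0, 1, 2, 0, 3, 0, 0, 1]         # toy report counts for one drug
# Allowing zero inflation (pi = 0.4) raises the likelihood of this
# zero-heavy sample relative to a plain Poisson (pi = 0).
```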

  10. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
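
    The generic likelihood-free recipe the abstract builds on can be shown in a few lines of rejection sampling: compress the data to one summary per parameter, simulate forward from the prior, and keep parameters whose simulated summary lands near the observed one. DELFI itself fits a density model rather than rejecting samples, so the toy below (Gaussian data, sample mean as the sufficient summary, uniform prior, all values assumed) illustrates only the basic idea.

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(1.0, 1.0, 50)              # "data" with true mu = 1, sigma = 1 known
s_obs = observed.mean()                          # compressed summary: one number per parameter

mu_prior = rng.uniform(-5.0, 5.0, 100_000)       # parameter draws from the prior
# Forward-simulate the summary directly: the mean of 50 N(mu, 1) draws
# is distributed as N(mu, 1/50).
s_sim = rng.normal(mu_prior, 1.0 / np.sqrt(50))
posterior = mu_prior[np.abs(s_sim - s_obs) < 0.05]  # accepted parameter samples
```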

  11. Implementation of linear filters for iterative penalized maximum likelihood SPECT reconstruction

    International Nuclear Information System (INIS)

    Liang, Z.

    1991-01-01

    This paper reports on six low-pass linear filters applied in frequency space implemented for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The filters implemented were the Shepp-Logan filter, the Butterworth filter, the Gaussian filter, the Hann filter, the Parzen filter, and the Lagrange filter. The low-pass filtering was applied in frequency space to projection data for the initial estimate and to the difference of projection data and reprojected data for higher order approximations. The projection data were acquired experimentally from a chest phantom consisting of non-uniform attenuating media. All the filters could effectively remove the noise and edge artifacts associated with the ML approach if the frequency cutoff was properly chosen. The improved performance of the Parzen and Lagrange filters relative to the others was observed. The best image, by viewing its profiles in terms of noise-smoothing, edge-sharpening, and contrast, was the one obtained with the Parzen filter. However, the Lagrange filter has the potential to consider the characteristics of the detector response function.
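
    Frequency-space low-pass filtering of this kind multiplies the Fourier transform of a projection profile by a window and transforms back. The sketch below does this for one of the six filters, a Butterworth window, on a 1-D signal; the cutoff and order are illustrative, and nothing here reproduces the paper's SPECT reconstruction pipeline.

```python
import numpy as np

def butterworth_lowpass(signal, cutoff=0.15, order=4):
    """Low-pass filter a 1-D signal with a Butterworth window in frequency space."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n)                 # cycles per sample, 0 .. 0.5
    window = 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))
    return np.fft.irfft(np.fft.rfft(signal) * window, n)

rng = np.random.default_rng(3)
clean = np.sin(2 * np.pi * 0.02 * np.arange(256))  # toy projection profile
noisy = clean + rng.normal(scale=0.5, size=256)
smoothed = butterworth_lowpass(noisy)              # noise suppressed, signal band kept
```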

  12. Speech perception in autism spectrum disorder: An activation likelihood estimation meta-analysis.

    Science.gov (United States)

    Tryfon, Ana; Foster, Nicholas E V; Sharda, Megha; Hyde, Krista L

    2018-02-15

    Autism spectrum disorder (ASD) is often characterized by atypical language profiles and atypical auditory and speech processing, which can contribute to aberrant language and social communication skills in ASD. The neural basis of speech perception could serve as an early neurobiological marker of ASD, but mixed results across studies make it difficult to find a reliable neural characterization of speech processing in ASD. To this end, the present study examined the functional neural basis of speech perception in ASD versus typical development (TD) using an activation likelihood estimation (ALE) meta-analysis of 18 qualifying studies. The present study included separate analyses for TD and ASD, which allowed us to examine patterns of within-group brain activation as well as both common and distinct patterns of brain activation across the ASD and TD groups. Overall, ASD and TD showed mostly common brain activation of speech processing in bilateral superior temporal gyrus (STG) and left inferior frontal gyrus (IFG). However, the results revealed trends for some distinct activation in the TD group, which showed additional activation in higher-order brain areas including left superior frontal gyrus (SFG), left medial frontal gyrus (MFG), and right IFG. These results provide a more reliable neural characterization of speech processing in ASD relative to previous single neuroimaging studies and motivate future work to investigate how these brain signatures relate to behavioral measures of speech processing in ASD. Copyright © 2017 Elsevier B.V. All rights reserved.
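
    The core ALE computation behind such a meta-analysis — model each reported activation focus as a Gaussian probability blob, form a per-study modeled-activation map, and combine maps as a probabilistic union — can be sketched in 1D. The real method works on 3D brain coordinates; the foci below are made up:

```python
import numpy as np

x = np.linspace(0, 100, 501)   # 1D stand-in for a voxel grid
dx = x[1] - x[0]

def modeled_activation(foci, fwhm=12.0):
    """Per-study map: probabilistic union of Gaussian blobs, one per focus."""
    sigma = fwhm / 2.3548
    p = dx / (sigma * np.sqrt(2 * np.pi)) * np.exp(
        -0.5 * ((x[:, None] - np.asarray(foci)) / sigma) ** 2
    )
    return 1.0 - np.prod(1.0 - p, axis=1)

# Made-up focus coordinates for three hypothetical studies.
studies = [[40.0, 60.0], [42.0], [58.0, 61.0]]
ale = 1.0 - np.prod([1.0 - modeled_activation(f) for f in studies], axis=0)
peak = x[np.argmax(ale)]   # convergence is strongest where studies agree
```

    The resulting ALE map is then thresholded against a null distribution of random foci to identify regions of reliable convergence.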

  13. Maximal information analysis: I - various Wayne State plots and the most common likelihood principle

    International Nuclear Information System (INIS)

    Bonvicini, G.

    2005-01-01

    Statistical analysis using all moments of the likelihood L(y|α) (y being the data and α being the fit parameters) is presented. The relevant plots for various data-fitting situations are presented. The goodness-of-fit (GOF) parameter (currently the χ²) is redefined as the isoprobability level in a multidimensional space. Many useful properties of statistical analysis are summarized in a new statistical principle, which states that the most common likelihood, and not the tallest, is the best possible likelihood when comparing experiments or hypotheses.

  14. Simplified likelihood for the re-interpretation of public CMS results

    CERN Document Server

    The CMS Collaboration

    2017-01-01

    In this note, a procedure for the construction of simplified likelihoods for the re-interpretation of the results of CMS searches for new physics is presented. The procedure relies on the use of a reduced set of information on the background models used in these searches which can readily be provided by the CMS collaboration. A toy example is used to demonstrate the procedure and its accuracy in reproducing the full likelihood for setting limits in models for physics beyond the standard model. Finally, two representative searches from the CMS collaboration are used to demonstrate the validity of the simplified likelihood approach under realistic conditions.
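
    The idea of a simplified likelihood — a Poisson count per search region combined with a Gaussian constraint encoding the reduced background information — can be sketched for a single bin. The yields below are invented for illustration; a real CMS simplified likelihood covers many correlated bins:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

# Hypothetical single-bin inputs: observed count, background estimate with
# its 1-sigma uncertainty, and signal yield per unit signal strength mu.
n_obs, b_hat, sigma_b, s = 25, 20.0, 3.0, 10.0

def nll(mu, b):
    """Negative log of Poisson(n | mu*s + b) times Gaussian(b | b_hat, sigma_b)."""
    return -poisson.logpmf(n_obs, mu * s + b) - norm.logpdf(b, b_hat, sigma_b)

def profiled_nll(mu):
    """Profile out the background nuisance parameter b at fixed mu."""
    return minimize_scalar(lambda b: nll(mu, b), bounds=(1e-6, 60.0),
                           method="bounded").fun

best = minimize_scalar(profiled_nll, bounds=(0.0, 5.0), method="bounded")

def q(mu):
    """Profile likelihood ratio test statistic relative to the best fit."""
    return 2.0 * (profiled_nll(mu) - best.fun)
```

    A limit on `mu` would be read off where `q(mu)` crosses the threshold appropriate to the chosen confidence level, which is how a re-interpretation uses the published background information without the full likelihood.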

  15. Task-based detectability in CT image reconstruction by filtered backprojection and penalized likelihood estimation

    Energy Technology Data Exchange (ETDEWEB)

    Gang, Grace J. [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (Canada); Stayman, J. Webster; Zbijewski, Wojciech [Department of Biomedical Engineering, Johns Hopkins University, Baltimore Maryland 21205 (United States); Siewerdsen, Jeffrey H., E-mail: jeff.siewerdsen@jhu.edu [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9, Canada and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States)

    2014-08-15

    Purpose: Nonstationarity is an important aspect of imaging performance in CT and cone-beam CT (CBCT), especially for systems employing iterative reconstruction. This work presents a theoretical framework for both filtered-backprojection (FBP) and penalized-likelihood (PL) reconstruction that includes explicit descriptions of nonstationary noise, spatial resolution, and task-based detectability index. Potential utility of the model was demonstrated in the optimal selection of regularization parameters in PL reconstruction. Methods: Analytical models for local modulation transfer function (MTF) and noise-power spectrum (NPS) were investigated for both FBP and PL reconstruction, including explicit dependence on the object and spatial location. For FBP, a cascaded systems analysis framework was adapted to account for nonstationarity by separately calculating fluence and system gains for each ray passing through any given voxel. For PL, the point-spread function and covariance were derived using the implicit function theorem and first-order Taylor expansion according to Fessler [“Mean and variance of implicitly defined biased estimators (such as penalized maximum likelihood): Applications to tomography,” IEEE Trans. Image Process. 5(3), 493–506 (1996)]. Detectability index was calculated for a variety of simple tasks. The model for PL was used in selecting the regularization strength parameter to optimize task-based performance, with both a constant and a spatially varying regularization map. Results: Theoretical models of FBP and PL were validated in 2D simulated fan-beam data and found to yield accurate predictions of local MTF and NPS as a function of the object and the spatial location. The NPS for both FBP and PL exhibit similar anisotropic nature depending on the pathlength (and therefore, the object and spatial location within the object) traversed by each ray, with the PL NPS experiencing greater smoothing along directions with higher noise. The MTF of FBP
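
    The task-based detectability index used above can be illustrated with the nonprewhitening-observer form, which combines the local MTF, local NPS, and a task function. The spectra below are 1D toys (the paper works with local 2D/3D estimates):

```python
import numpy as np

def detectability_npw(mtf, nps, w_task, df):
    """Nonprewhitening-observer detectability index from MTF, NPS, and task.

    d'^2 = [sum((MTF*W)^2) df]^2 / [sum(NPS * (MTF*W)^2) df]
    """
    num = (np.sum((mtf * w_task) ** 2) * df) ** 2
    den = np.sum(nps * (mtf * w_task) ** 2) * df
    return np.sqrt(num / den)

f = np.linspace(0.01, 1.0, 100)        # spatial frequency (arbitrary units)
df = f[1] - f[0]
mtf = np.exp(-2.0 * f)                 # toy resolution loss
w_task = np.exp(-(f / 0.2) ** 2)       # low-frequency (large-object) task
nps_flat = np.full_like(f, 1e-3)       # white noise
nps_lowf = 1e-3 * (0.2 / f)            # noise concentrated at low frequency

d_flat = detectability_npw(mtf, nps_flat, w_task, df)
d_lowf = detectability_npw(mtf, nps_lowf, w_task, df)
# Noise that overlaps the task's frequencies hurts detectability more,
# which is why nonstationary, anisotropic NPS matters for optimization.
```

    Optimizing the regularization strength amounts to maximizing such an index at the location and task of interest.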

  16. Undernutrition among adults in India: the significance of individual-level and contextual factors impacting on the likelihood of underweight across sub-populations.

    Science.gov (United States)

    Siddiqui, Md Zakaria; Donato, Ronald

    2017-01-01

    To investigate the extent to which individual-level as well as macro-level contextual factors influence the likelihood of underweight across adult sub-populations in India. Population-based cross-sectional survey included in India's National Family Health Survey conducted in 2005-06; we disaggregated the sample into eight sub-populations. Multistage nationally representative household survey covering 99 % of India's population. The survey covered 124 385 females aged 15-49 years and 74 369 males aged 15-54 years. A social gradient in underweight exists in India. Even after allowing for wealth status, differences in the predicted probability of underweight persisted based upon rurality, age/maturity and gender. We found that individual-level education lowered the likelihood of underweight for males, but found no statistical association for females. Paradoxically, rural young (15-24 years) females from more educated villages had a higher likelihood of underweight relative to those in less educated villages; for rural mature (>24 years) females the opposite was the case. Christians had a significantly lower likelihood of underweight relative to other socio-religious groups (OR=0·53-0·80). Higher state-level inequality increased the likelihood of underweight across most population groups, while neighbourhood inequality exhibited a similar relationship for the rural young population subgroups only. Individual states/neighbourhoods accounted for 5-9 % of the variation in the prediction of underweight. We found that rural young females represent a particularly vulnerable sub-population. Economic growth alone is unlikely to reduce the burden of malnutrition in India; accordingly, policy makers need to address the broader social determinants that contribute to higher underweight prevalence in specific demographic subgroups.
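
    Odds ratios of the kind reported above come from logistic models of underweight on individual and contextual covariates. A minimal sketch on synthetic data (the covariates and coefficients are assumed for illustration, not the study's), fitting by Newton-Raphson:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
educated = rng.integers(0, 2, n)             # individual-level covariate
rural = rng.integers(0, 2, n)                # contextual covariate
logit = -0.5 - 0.6 * educated + 0.4 * rural  # assumed true coefficients
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # underweight indicator

X = np.column_stack([np.ones(n), educated, rural])

# Fit logistic regression by Newton-Raphson (IRLS).
beta = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1.0 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio_education = np.exp(beta[1])   # < 1: education lowers the odds
```

    Multilevel versions of this model, with state and neighbourhood random effects, give the variance shares (5-9 %) the study attributes to context.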

  17. PALM: a paralleled and integrated framework for phylogenetic inference with automatic likelihood model selectors.

    Directory of Open Access Journals (Sweden)

    Shu-Hwa Chen

    Full Text Available BACKGROUND: Selecting an appropriate substitution model and deriving a tree topology for a given sequence set are essential in phylogenetic analysis. However, these time-consuming, computationally intensive tasks rely on knowledge of substitution model theories and related expertise to run through all possible combinations of several separate programs. To ensure a thorough and efficient analysis and avert tedious manipulation of various programs, this work presents an intuitive framework, the phylogenetic reconstruction with automatic likelihood model selectors (PALM), with updated algorithms and a best-fit model selection mechanism for seamless phylogenetic analysis. METHODOLOGY: As an integrated framework of ClustalW, PhyML, MODELTEST, ProtTest, and several in-house programs, PALM evaluates the fitness of 56 substitution models for nucleotide sequences and 112 substitution models for protein sequences with scores in various criteria. The input for PALM can be either sequences in FASTA format or a sequence alignment file in PHYLIP format. To accelerate the computation of maximum likelihood and bootstrapping, this work integrates MPICH2/PhyML, PalmMonitor and the Palm job controller across several machines with multiple processors and adopts the task-parallelism approach. Moreover, an intuitive and interactive web component, PalmTree, is developed for displaying and operating the output tree, with options for tree rooting, branch swapping, viewing branch length values and bootstrapping scores, and removing nodes to restart analysis iteratively. SIGNIFICANCE: The workflow of PALM is straightforward and coherent. Via a succinct, user-friendly interface, researchers unfamiliar with phylogenetic analysis can easily use this server to submit sequences, retrieve the output, and re-submit a job based on a previous result if some sequences are to be deleted or added for phylogenetic reconstruction. PALM results in an inference of
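
    The best-fit model selection step that PALM automates can be sketched as an information-criterion ranking: each candidate substitution model's maximized log-likelihood is penalized by its number of free parameters. The lnL values and parameter counts below are invented for illustration:

```python
# Hypothetical maximized log-likelihoods and free-parameter counts for three
# common nucleotide substitution models (values are made up).
candidates = {
    "JC69":  {"lnL": -3512.4, "k": 1},
    "HKY85": {"lnL": -3468.9, "k": 5},
    "GTR+G": {"lnL": -3450.2, "k": 10},
}

def aic(lnL, k):
    """Akaike information criterion: lower is better."""
    return 2 * k - 2 * lnL

ranked = sorted(candidates, key=lambda m: aic(**candidates[m]))
best = ranked[0]   # the model whose extra parameters pay for themselves
```

    Real selectors such as MODELTEST run this comparison over all 56 nucleotide (or 112 protein) models, often with BIC or likelihood-ratio tests alongside AIC.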

  18. Enhancing their likelihood for a positive future: the perspective of inner-city youth.

    Science.gov (United States)

    Ginsburg, Kenneth R; Alexander, Penny M; Hunt, Jean; Sullivan, Maisha; Zhao, Huaqing; Cnaan, Avital

    2002-06-01

    Inner-city youth must overcome many environmental challenges as they strive for success; their outcome is influenced by the interplay of protective forces and risk factors. Our aim was to learn directly from youth what solutions they believe would most influence their likelihood of achieving a positive future. In-school 8th-, 9th-, and 12th-graders in north Philadelphia generated, prioritized, and explained their own solutions through a 4-stage hierarchical process facilitated by AmeriCorps workers. In Stage 1, 60 randomly selected students participated in 8 focus groups to develop the study question. In Stage 2, youth in Nominal Group Technique sessions generated and prioritized solutions. In Stage 3, a survey for each grade that included their top prioritized ideas was distributed, and youth rated each idea on a Likert scale (5 = "Definitely would make me more likely to have a positive future" to 1 = "Would definitely not"). One thousand twenty-two 9th-graders (69% of in-school youth at 5 high schools) returned usable surveys; 93% of responders were 14 to 16 years old, 44% were male, 54% were black, and 32% were Latino. Four hundred seventeen 8th-graders and 322 12th-graders returned usable surveys. In Stage 4, youth in 10 focus groups added meaning and context to the ideas. The highest-rated items in all grades were solutions that promoted education or increased job opportunities. Ninth-graders ranked helping youth get into college first by the Marginal Homogeneity Test. The creation of more jobs was ranked second. Third rank was shared by more job training, keeping youth from dropping out of school, and better books for schools. The next tier of items focused mostly on opportunities for youth to spend their free time productively and to have interactions with adults. Many items calling for the reduction of risk behaviors or disruptive surroundings were rated lower. The Kruskal-Wallis test found little variation in rating of the ideas by gender, race, or
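
    The subgroup comparison reported above (little variation in ratings by gender or race) uses the Kruskal-Wallis test on ordinal Likert ratings. A sketch on synthetic ratings, not the study's data:

```python
import numpy as np
from scipy.stats import kruskal

# Synthetic 1-5 Likert ratings of one solution, split by a demographic group.
rng = np.random.default_rng(3)
ratings_group_a = rng.integers(1, 6, 200)   # e.g. male respondents
ratings_group_b = rng.integers(1, 6, 220)   # e.g. female respondents

# Kruskal-Wallis is rank-based, so it suits ordinal ratings with ties.
stat, p = kruskal(ratings_group_a, ratings_group_b)
# When both groups rate the item similarly, p is typically large,
# mirroring the paper's finding of little variation across subgroups.
```

    The rank-based test avoids treating the 1-5 scale as interval data, which is why it is preferred over ANOVA here.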

  19. Likelihood Estimation of the Systemic Poison-Induced Morbidity in an Adult North Eastern Romanian Population

    Directory of Open Access Journals (Sweden)

    Cătălina Lionte

    2016-12-01

    Full Text Available Purpose: Acute exposure to a systemic poison represents an important segment of medical emergencies. We aimed to estimate the likelihood of systemic poison-induced morbidity in a population admitted to a tertiary referral center in North East Romania, based on the determinant factors. Methodology: This was a prospective observational cohort study on adult poisoned patients. Demographic, clinical and laboratory characteristics were recorded in all patients. We analyzed three groups of patients, based on the associated morbidity during hospitalization. We identified significant differences between groups and predictors with significant effects on morbidity using multiple multinomial logistic regressions. ROC analysis proved that a combination of tests could improve the diagnostic accuracy of poison-related morbidity. Main findings: Of the 180 patients included, aged 44.7 ± 17.2 years, 51.1% males, 49.4% had no poison-related morbidity, 28.9% developed a mild morbidity, and 21.7% had a severe morbidity, followed by death in 16 patients (8.9%). Multiple complications and deaths were recorded in patients aged 53.4 ± 17.6 years (p < .001), with a lower Glasgow Coma Scale (GCS) score upon admission and a significantly higher heart rate (101 ± 32 beats/min, p = .011). Routine laboratory tests were significantly higher in patients with a recorded morbidity. Multiple logistic regression analysis demonstrated that a GCS < 8, a high white blood cell count (WBC), alanine aminotransferase (ALAT), myoglobin, glycemia and brain natriuretic peptide (BNP) are strongly predictive for in-hospital severe morbidity. Originality: This is the first Romanian prospective study on adult poisoned patients, which identifies the factors responsible for in-hospital morbidity using logistic regression analyses, with resulting receiver operating characteristic (ROC) curves. Conclusion: In acute intoxication with systemic poisons, we identified several clinical and laboratory variables
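
    The ROC analysis described above — checking whether combining predictors such as GCS and WBC improves discrimination of severe morbidity — can be sketched with a rank-based AUC. All data below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def auc(score, label):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(score)
    rank = np.empty(len(score))
    rank[order] = np.arange(1, len(score) + 1)
    pos = np.asarray(label, dtype=bool)
    n1, n0 = pos.sum(), (~pos).sum()
    return (rank[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Synthetic cohort: GCS is lower and WBC higher in severe cases (illustration).
rng = np.random.default_rng(4)
severe = rng.integers(0, 2, 300).astype(bool)
gcs = np.where(severe, rng.normal(7, 3, 300), rng.normal(13, 2, 300))
wbc = np.where(severe, rng.normal(14, 4, 300), rng.normal(9, 3, 300))

auc_gcs = auc(-gcs, severe)                     # negate: low GCS predicts severity
auc_combined = auc(-gcs + 0.5 * wbc, severe)    # ad hoc combination of two tests
```

    In practice the combined score would come from the fitted logistic model rather than a hand-picked weighting, and its ROC curve would be compared with those of the individual predictors.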

  20. Debris Likelihood, based on GhostNet, NASA Aqua MODIS, and GOES Imager, EXPERIMENTAL

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Debris Likelihood Index (Estimated) is calculated from GhostNet, NASA Aqua MODIS Chl a and NOAA GOES Imager SST data. THIS IS AN EXPERIMENTAL PRODUCT: intended...