A Predictive Likelihood Approach to Bayesian Averaging
Directory of Open Access Journals (Sweden)
Tomáš Jeřábek
2015-01-01
Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four different models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated using historical data for the domestic economy and for the foreign economy, represented by the countries of the Eurozone. Because the forecast accuracy of the models differs, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme serves as a simple benchmark combination. The results show that the optimally combined densities are comparable to those of the best individual models.
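The predictive-likelihood weighting scheme named above can be sketched as follows: each model's weight is proportional to the exponential of its summed log predictive densities over an evaluation window. This is a minimal illustration with made-up numbers, not the paper's implementation; the model names and density values are hypothetical.

```python
import math

def predictive_likelihood_weights(log_pred_densities):
    """Combine models with weight_i proportional to exp(sum of log predictive
    densities of model i), normalized to a probability distribution.

    log_pred_densities: dict mapping model name -> list of log p(y_t | past, model).
    """
    scores = {m: sum(lps) for m, lps in log_pred_densities.items()}
    mmax = max(scores.values())  # subtract the max for numerical stability
    unnorm = {m: math.exp(s - mmax) for m, s in scores.items()}
    total = sum(unnorm.values())
    return {m: w / total for m, w in unnorm.items()}

# Hypothetical window in which the BVAR slightly outperforms the DSGE model
weights = predictive_likelihood_weights({
    "BVAR": [-1.0, -1.1, -0.9],
    "DSGE": [-1.2, -1.3, -1.1],
})
```

A model that fits the recent data better thus receives a larger share of the combined density, while the softmax-style normalization keeps the weights summing to one.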
A Specific Network Link and Path Likelihood Prediction Tool
National Research Council Canada - National Science Library
Moy, Gary
1996-01-01
.... Providing a specific network link and path likelihood prediction tool gives strategic military commanders additional intelligence information and enables them to manage their limited resources more efficiently...
A generative model for predicting terrorist incidents
Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger
2017-05-01
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents from the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
Prediction of Safety Incidents
National Aeronautics and Space Administration — Safety incidents, including injuries, property damage and mission failures, cost NASA and contractors thousands of dollars in direct and indirect costs. This project...
Predicting incident size from limited information
International Nuclear Information System (INIS)
Englehardt, J.D.
1995-01-01
Predicting the size of low-probability, high-consequence natural disasters, industrial accidents, and pollutant releases is often difficult due to limitations in the availability of data on rare events and future circumstances. When incident data are available, they may be difficult to fit with a lognormal distribution. Two Bayesian probability distributions for inferring future incident-size probabilities from limited, indirect, and subjective information are proposed in this paper. The distributions are derived from Pareto distributions that are shown to fit data on different incident types and are justified theoretically. The derived distributions incorporate both inherent variability and uncertainty due to information limitations. Results were analyzed to determine the amount of data needed to predict incident-size probabilities in various situations. Information requirements for incident-size prediction using the methods were low, particularly when the population distribution had a thick tail. Use of the distributions to predict accumulated oil-spill consequences was demonstrated
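The Pareto tail fitting mentioned above can be illustrated with the standard maximum-likelihood (Hill-type) estimator of the shape parameter, followed by an exceedance probability for a given incident size. This is a generic sketch, not the paper's derived Bayesian distributions; the spill sizes below are invented.

```python
import math

def pareto_mle_shape(data, x_min):
    """MLE of the Pareto shape alpha for observations >= x_min.

    A thick tail (small alpha) means large incidents are relatively likely,
    which is when the paper finds information requirements to be lowest.
    """
    xs = [x for x in data if x >= x_min]
    return len(xs) / sum(math.log(x / x_min) for x in xs)

def pareto_exceedance_prob(size, x_min, alpha):
    """P(X > size) under the fitted Pareto tail: (x_min / size) ** alpha."""
    return (x_min / size) ** alpha

spills = [1.0, 1.5, 2.2, 3.7, 8.0, 15.0]  # hypothetical incident sizes
alpha = pareto_mle_shape(spills, x_min=1.0)
p_big = pareto_exceedance_prob(100.0, 1.0, alpha)
```

The exceedance probability decays as a power law rather than exponentially, which is what makes the Pareto family suitable for rare, high-consequence events.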
Moral Identity Predicts Doping Likelihood via Moral Disengagement and Anticipated Guilt.
Kavussanu, Maria; Ring, Christopher
2017-08-01
In this study, we integrated elements of social cognitive theory of moral thought and action and the social cognitive model of moral identity to better understand doping likelihood in athletes. Participants (N = 398) recruited from a variety of team sports completed measures of moral identity, moral disengagement, anticipated guilt, and doping likelihood. Moral identity predicted doping likelihood indirectly via moral disengagement and anticipated guilt. Anticipated guilt about potential doping mediated the relationship between moral disengagement and doping likelihood. Our findings provide novel evidence to suggest that athletes who feel that being a moral person is central to their self-concept are less likely to use banned substances due to their lower tendency to morally disengage and the more intense feelings of guilt they expect to experience for using banned substances.
Predicting rotator cuff tears using data mining and Bayesian likelihood ratios.
Directory of Open Access Journals (Sweden)
Hsueh-Yi Lu
Full Text Available Rotator cuff tear is a common cause of shoulder disease. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation, followed by confirmatory MRI between 2007 and 2011, were identified. MRI was used as the reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment result, which consisted of 16 attributes. This study employed two data mining methods (an artificial neural network [ANN] and a decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into "tear" and "no tear" groups. Likelihood ratios and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Our proposed data mining procedures outperformed the classic statistical method: the correct classification rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models than in logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease, enhancing diagnostic decision making for rotator cuff tears.
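The likelihood-ratio arithmetic behind Fagan's nomogram is compact enough to show directly: convert the pretest probability to odds, multiply by the positive or negative likelihood ratio, and convert back. The sensitivity, specificity, and pretest probability below are illustrative, not the study's values.

```python
def post_test_probability(pretest_prob, sensitivity, specificity, result_positive):
    """Bayes' theorem via likelihood ratios, as read off Fagan's nomogram.

    LR+ = sens / (1 - spec); LR- = (1 - sens) / spec.
    """
    lr = (sensitivity / (1.0 - specificity) if result_positive
          else (1.0 - sensitivity) / specificity)
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Hypothetical classifier: a "tear" prediction raises the probability,
# a "no tear" prediction lowers it.
p_pos = post_test_probability(0.5, sensitivity=0.9, specificity=0.8, result_positive=True)
p_neg = post_test_probability(0.5, sensitivity=0.9, specificity=0.8, result_positive=False)
```

With a 50% pretest probability, a positive result with these operating characteristics (LR+ = 4.5) yields a post-test probability of about 0.82, while a negative result (LR- = 0.125) drops it to about 0.11.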
Tsou, Tsung-Shan
2018-02-01
Intuitively, one needs only patients with two positive screening test results to compare positive predictive values, and only those with two negative screening test results to contrast negative predictive values. Nevertheless, existing methods rely on a multinomial model that includes superfluous parameters unnecessary for these specific comparisons, a practice that results in complex statistical formulas. We introduce a novel likelihood approach that fits this intuition by including a minimum number of parameters of interest in paired designs. We demonstrate that our robust score test statistic is identical to a newly proposed weighted generalized score test statistic. Simulations and real data analysis are used for illustration.
Supervised maximum-likelihood weighting of composite protein networks for complex prediction
Directory of Open Access Journals (Sweden)
Yong Chern Han
2012-12-01
Full Text Available Abstract Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to
Fatty liver incidence and predictive variables
International Nuclear Information System (INIS)
Tsuneto, Akira; Seto, Shinji; Maemura, Koji; Hida, Ayumi; Sera, Nobuko; Imaizumi, Misa; Ichimaru, Shinichiro; Nakashima, Eiji; Akahoshi, Masazumi
2010-01-01
Although fatty liver predicts ischemic heart disease, the incidence and predictors of fatty liver need examination. The objective of this study was to determine fatty liver incidence and predictive variables. Using abdominal ultrasonography, we biennially followed, through 2007 (mean follow-up, 11.6±4.6 years), 1635 Nagasaki atomic bomb survivors (606 men) without fatty liver at baseline (November 1990 through October 1992). We examined potential predictive variables with the Cox proportional hazard model and longitudinal trends with the Wilcoxon rank-sum test. In all, 323 (124 men) new fatty liver cases were diagnosed. The incidence was 19.9/1000 person-years (22.3 for men, 18.6 for women) and peaked in the sixth decade of life. After controlling for age, sex, and smoking and drinking habits, obesity (relative risk (RR), 2.93; 95% confidence interval (CI), 2.33-3.69, P<0.001), low high-density lipoprotein-cholesterol (RR, 1.87; 95% CI, 1.42-2.47; P<0.001), hypertriglyceridemia (RR, 2.49; 95% CI, 1.96-3.15; P<0.001), glucose intolerance (RR, 1.51; 95% CI, 1.09-2.10; P=0.013) and hypertension (RR, 1.63; 95% CI, 1.30-2.04; P<0.001) were predictive of fatty liver. In multivariate analysis including all variables, obesity (RR, 2.55; 95% CI, 1.93-3.38; P<0.001), hypertriglyceridemia (RR, 1.92; 95% CI, 1.41-2.62; P<0.001) and hypertension (RR, 1.31; 95% CI, 1.01-1.71; P=0.046) remained predictive. In fatty liver cases, body mass index and serum triglycerides, but not systolic or diastolic blood pressure, increased significantly and steadily up to the time of the diagnosis. Obesity, hypertriglyceridemia and, to a lesser extent, hypertension might serve as predictive variables for fatty liver. (author)
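The incidence figure quoted above is an incidence density: new cases divided by accumulated person-time at risk, scaled to a convenient denominator. A minimal sketch with invented cohort numbers (not the study's person-year total, which depends on censoring at diagnosis):

```python
def incidence_density(new_cases, person_time, per=1000.0):
    """Incidence rate expressed as cases per `per` units of person-time."""
    return per * new_cases / person_time

# Hypothetical cohort: 120 new cases over 6,000 accumulated person-years
rate = incidence_density(120, 6000.0)  # 20.0 per 1,000 person-years
```

Because each subject contributes follow-up time only until diagnosis, death, or loss to follow-up, the person-time denominator is smaller than subjects multiplied by mean calendar follow-up.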
Bayesian Inference using Neural Net Likelihood Models for Protein Secondary Structure Prediction
Directory of Open Access Journals (Sweden)
Seong-Gon Kim
2011-06-01
Full Text Available Several techniques, such as Neural Networks, Genetic Algorithms, Decision Trees and other statistical or heuristic methods, have been used in the past to approach the complex non-linear task of predicting the Alpha-helices, Beta-sheets and Turns of a protein's secondary structure. This project introduces a new machine learning method that uses offline-trained Multilayered Perceptrons (MLPs) as the likelihood models within a Bayesian Inference framework to predict the secondary structure of proteins. Varying window sizes are used to extract neighboring amino acid information, which is passed back and forth between the Neural Net models and the Bayesian Inference process until the posterior secondary structure probabilities converge.
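The core Bayesian step in such a framework is a posterior update over the three structure classes: P(class | window) is proportional to P(window | class) times P(class). In the paper's scheme the class-conditional likelihoods come from the trained MLPs; in this sketch both the likelihood values and the priors are made up.

```python
def posterior_over_classes(likelihoods, priors):
    """Bayes' rule over secondary-structure classes:
    P(class | window) ∝ P(window | class) * P(class), normalized over classes.
    """
    unnorm = {c: likelihoods[c] * priors[c] for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

post = posterior_over_classes(
    likelihoods={"helix": 0.6, "sheet": 0.3, "turn": 0.1},  # hypothetical MLP outputs
    priors={"helix": 0.35, "sheet": 0.25, "turn": 0.40},
)
```

Iterating this update as the posteriors from neighboring windows feed back into the likelihood models is what drives the convergence described in the abstract.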
PREDICTION OF THE LIKELIHOOD OF HOUSEHOLDS FOOD SECURITY IN THE LAKE VICTORIA REGION OF KENYA
Directory of Open Access Journals (Sweden)
Peter Nyamuhanga Mwita
2011-06-01
Full Text Available This paper considers the modeling and prediction of household food security status using a sample of households in the Lake Victoria region of Kenya. A priori expected food security factors and their measurements are given. A binary logistic regression model was fitted to the thirteen a priori expected factors. Analysis of the marginal effects revealed that acting on the seven significant determinants: farmland size, per capita aggregate production, household size, gender of household head, use of fertilizer, use of pesticide/herbicide and education of household head, increases the likelihood of a household being food secure. Finally, interpretations of the predicted conditional probabilities, following improvement of the significant determinants, are given.
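In a binary logit model, the marginal effect of a continuous predictor k at a given point is beta_k · p · (1 − p), where p is the fitted probability there. A minimal sketch; the intercept, coefficients, and determinant names attached to them are invented for illustration.

```python
import math

def logistic_prob(intercept, coefs, x):
    """P(food secure | x) under a binary logit: sigma(b0 + b · x)."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def marginal_effect(beta_k, p):
    """Marginal effect of predictor k evaluated at probability p."""
    return beta_k * p * (1.0 - p)

# Hypothetical coefficients for two of the named determinants
b0, coefs = -1.0, [0.8, 0.5]  # e.g. farmland size, fertilizer use (made up)
p = logistic_prob(b0, coefs, [1.0, 1.0])
me_farmland = marginal_effect(coefs[0], p)
```

Because the effect scales with p(1 − p), the same coefficient moves the probability most for households near p = 0.5 and least for those already very likely or very unlikely to be food secure.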
Phalangeal bone mineral density predicts incident fractures
DEFF Research Database (Denmark)
Friis-Holmberg, Teresa; Brixen, Kim; Rubin, Katrine Hass
2012-01-01
This prospective study investigates the use of phalangeal bone mineral density (BMD) in predicting fractures in a cohort (15,542) who underwent a BMD scan. In both women and men, a decrease in BMD was associated with an increased risk of fracture when adjusted for age and prevalent fractures. PURPOSE: The aim of this study was to evaluate the ability of a compact and portable scanner using radiographic absorptiometry (RA) to predict major osteoporotic fractures. METHODS: This prospective study included a cohort of 15,542 men and women aged 18-95 years, who underwent a BMD scan in the Danish Health Examination Survey 2007-2008. BMD at the middle phalanges of the second, third and fourth digits of the non-dominant hand was measured using RA (Alara MetriScan®). These data were merged with information on incident fractures retrieved from the Danish National Patient Registry comprising the International...
Curtis, Gary P.; Lu, Dan; Ye, Ming
2015-01-01
While Bayesian model averaging (BMA) has been widely used in groundwater modeling, it is infrequently applied to groundwater reactive transport modeling because of multiple sources of uncertainty in the coupled hydrogeochemical processes and because of the long execution time of each model run. To resolve these problems, this study analyzed different levels of uncertainty in a hierarchical way, and used the maximum likelihood version of BMA, i.e., MLBMA, to improve the computational efficiency. This study demonstrates the applicability of MLBMA to groundwater reactive transport modeling in a synthetic case in which twenty-seven reactive transport models were designed to predict the reactive transport of hexavalent uranium (U(VI)) based on observations at a former uranium mill site near Naturita, CO. These reactive transport models contain three uncertain model components, i.e., parameterization of hydraulic conductivity, configuration of model boundary, and surface complexation reactions that simulate U(VI) adsorption. These uncertain model components were aggregated into the alternative models by integrating a hierarchical structure into MLBMA. The modeling results of the individual models and MLBMA were analyzed to investigate their predictive performance. The predictive logscore results show that MLBMA generally outperforms the best model, suggesting that using MLBMA is a sound strategy to achieve more robust model predictions relative to a single model. MLBMA works best when the alternative models are structurally distinct and have diverse model predictions. When correlation in model structure exists, two strategies were used to improve predictive performance by retaining structurally distinct models or assigning smaller prior model probabilities to correlated models. Since the synthetic models were designed using data from the Naturita site, the results of this study are expected to provide guidance for real-world modeling. Limitations of applying MLBMA to the
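A minimal sketch of how MLBMA-style posterior model weights can be formed from information-criterion values: each model's probability is proportional to exp(−ΔIC/2) times its prior, so structurally correlated models can be down-weighted simply by assigning them smaller priors, as the study above does. The criterion values and model names here are made up, and the actual KIC computation involves the estimated parameter covariance.

```python
import math

def mlbma_weights(ic, priors=None):
    """Posterior model probabilities from information-criterion values.

    p(M_k | D) ∝ exp(-0.5 * (IC_k - min IC)) * p(M_k), normalized over models.
    """
    names = list(ic)
    if priors is None:
        priors = {m: 1.0 / len(names) for m in names}
    imin = min(ic.values())  # shift by the minimum for numerical stability
    unnorm = {m: math.exp(-0.5 * (ic[m] - imin)) * priors[m] for m in names}
    total = sum(unnorm.values())
    return {m: w / total for m, w in unnorm.items()}

w = mlbma_weights({"model_A": 100.0, "model_B": 102.0, "model_C": 110.0})
```

The BMA prediction is then the weight-averaged prediction across models, which is why averaging helps most when the alternative models are structurally distinct and disagree.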
Rosenquist, Peter B; Dunn, Aaron; Rapp, Stephen; Gaba, Aline; McCall, W Vaughn
2006-03-01
To examine the relationship between stated intention to choose electroconvulsive therapy (ECT) as a future treatment option and measures of function and quality of life, mood, and cognition in the month after this therapy. Understanding the factors influencing patient choice of ECT is a source of insight into the interplay between measures of response and the perceived value of this treatment to patients, lending perspective to patient-centered quality improvement efforts. In a prospective sample of 77 depressed patients given ECT, we surveyed recipients at 1 month about their expressed likelihood of choosing ECT given a future episode and examined predictors of their responses. Thirty-four subjects were classified as "likely" to choose a course of ECT, whereas 33 patients were "unlikely." A model including Hamilton baseline and change scores as well as baseline scores in instrumental activities of daily living significantly predicted likeliness after controlling for age and sex (R = 0.34); quality-of-life variables and measures of change in cognition were not significant in the model. In our sample, choosing ECT as a future treatment option was more likely for those who were more depressed before treatment, had more impaired instrumental activities at the outset of treatment, and experienced a more robust improvement in depressive symptoms. This variance was not explained by treatment-associated improvements in quality of life, function, or deficits in cognitive status.
Predicting the Likelihood of Going to Graduate School: The Importance of Locus of Control
Nordstrom, Cynthia R.; Segrist, Dan J.
2009-01-01
Although many undergraduates apply to graduate school, only a fraction will be admitted. A question arises as to what factors relate to the likelihood of pursuing graduate studies. The current research examined this question by surveying students in a Careers in Psychology course. We hypothesized that GPA, a more internal locus of control…
A new, accurate predictive model for incident hypertension
DEFF Research Database (Denmark)
Völzke, Henry; Fung, Glenn; Ittermann, Till
2013-01-01
Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....
Prediction of high incidence of dengue in the Philippines.
Buczak, Anna L; Baugher, Benjamin; Babin, Steven M; Ramac-Thomas, Liane C; Guven, Erhan; Elbert, Yevgeniy; Koshute, Phillip T; Velasco, John Mark S; Roque, Vito G; Tayag, Enrique A; Yoon, In-Kyu; Lewis, Sheri H
2014-04-01
Accurate prediction of dengue incidence levels weeks in advance of an outbreak may reduce the morbidity and mortality associated with this neglected disease. Therefore, models were developed to predict high and low dengue incidence in order to provide timely forewarnings in the Philippines. Model inputs were chosen based on studies indicating variables that may impact dengue incidence. The method first uses Fuzzy Association Rule Mining techniques to extract association rules from these historical epidemiological, environmental, and socio-economic data, as well as climate data indicating future weather patterns. Selection criteria were used to choose a subset of these rules for a classifier, thereby generating a Prediction Model. The models predicted high or low incidence of dengue in a Philippines province four weeks in advance. The threshold between high and low was determined relative to historical incidence data. Model accuracy is described by Positive Predictive Value (PPV), Negative Predictive Value (NPV), Sensitivity, and Specificity computed on test data not previously used to develop the model. Selecting a model using the F0.5 measure, which gives PPV more importance than Sensitivity, gave these results: PPV = 0.780, NPV = 0.938, Sensitivity = 0.547, Specificity = 0.978. Using the F3 measure, which gives Sensitivity more importance than PPV, the selected model had PPV = 0.778, NPV = 0.948, Sensitivity = 0.627, Specificity = 0.974. The decision as to which model has greater utility depends on how the predictions will be used in a particular situation. This method builds prediction models for future dengue incidence in the Philippines and is capable of being modified for use in different situations; for diseases other than dengue; and for regions beyond the Philippines. The Philippines dengue prediction models predicted high or low incidence of dengue four weeks in advance of an outbreak with high accuracy, as measured by PPV
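The model-selection measures quoted above come from a confusion matrix: PPV, NPV, sensitivity, and specificity are simple ratios, and the F-beta measure trades PPV against sensitivity, with beta < 1 favoring PPV and beta > 1 favoring sensitivity. The confusion-matrix counts below are hypothetical, chosen only to land near the paper's reported PPV and sensitivity.

```python
def confusion_rates(tp, fp, tn, fn):
    """PPV, NPV, sensitivity, specificity from confusion-matrix counts."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return ppv, npv, sens, spec

def f_beta(ppv, sensitivity, beta):
    """F_beta = (1 + beta^2) * PPV * sens / (beta^2 * PPV + sens)."""
    b2 = beta * beta
    return (1.0 + b2) * ppv * sensitivity / (b2 * ppv + sensitivity)

ppv, npv, sens, spec = confusion_rates(tp=82, fp=23, tn=440, fn=68)
f05 = f_beta(ppv, sens, 0.5)  # emphasizes PPV, as in the paper's first selection
f3 = f_beta(ppv, sens, 3.0)   # emphasizes sensitivity, as in the second
```

Selecting on F0.5 versus F3 thus picks models that err toward fewer false alarms versus fewer missed outbreaks, which is exactly the utility trade-off the abstract describes.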
Numerical Prediction of Green Water Incidents
DEFF Research Database (Denmark)
Nielsen, K. B.; Mayer, Stefan
2004-01-01
Green water loads on moored or sailing ships occur when an incoming wave significantly exceeds the freeboard and water runs onto the deck. In this paper, a Navier-Stokes solver with a free surface capturing scheme (i.e. the VOF model; Hirt and Nichols, 1981) is used to numerically model green water loads on a moored FPSO exposed to head sea waves. Two cases are investigated: first, green water on a fixed vessel has been analysed, where the resulting water height on deck and the impact pressure on a deck-mounted structure have been computed. These results have been compared to experimental data obtained by Greco (2001) and show very favourable agreement. Second, a full green water incident, including vessel motions, has been modelled. In these computations, the vertical motion has been modelled by the use of transfer functions for heave and pitch, but the rotational contribution from the pitch motion has...
Christiansen, Bo
2015-04-01
Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
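One concrete consequence of the errors-in-variables view is the classical attenuation result: regressing y on a noisily measured x biases the OLS slope toward zero by the reliability ratio lambda = var(x_true) / (var(x_true) + var(noise)). This standard textbook correction is a sketch of the phenomenon the abstract discusses, not the paper's marginal-likelihood treatment; the variances below are assumed known.

```python
def attenuation_corrected_slope(ols_slope, var_true_x, var_noise_x):
    """Undo classical errors-in-variables attenuation of an OLS slope.

    lambda = var(x_true) / (var(x_true) + var(noise)); true slope ≈ OLS / lambda.
    """
    lam = var_true_x / (var_true_x + var_noise_x)
    return ols_slope / lam

# With equal signal and noise variance, the OLS slope is halved, so the
# corrected slope is twice the fitted one.
b = attenuation_corrected_slope(0.5, var_true_x=1.0, var_noise_x=1.0)  # -> 1.0
```

The correction requires knowing (or estimating) the noise variance, which is exactly the kind of nuisance-parameter choice the paper handles by marginalization.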
Incidence and predicting factors of falls of older inpatients
Directory of Open Access Journals (Sweden)
Hellen Cristina de Almeida Abreu
2015-01-01
Full Text Available OBJECTIVE To estimate the incidence and predicting factors associated with falls among older inpatients. METHODS Prospective cohort study conducted in clinical units of three hospitals in Cuiaba, MT, Midwestern Brazil, from March to August 2013. In this study, 221 inpatients aged 60 or over were followed until hospital discharge, death, or fall. The method of incidence density was used to calculate incidence rates. Bivariate analysis was performed by Chi-square test, and multivariable analysis was performed by Cox regression. RESULTS The incidence of falls was 12.6 per 1,000 patient-days. Predicting factors for falls during hospitalization were: low educational level (RR = 2.48; 95%CI 1.17;5.25, polypharmacy (RR = 4.42; 95%CI 1.77;11.05, visual impairment (RR = 2.06; 95%CI 1.01;4.23, gait and balance impairment (RR = 2.95; 95%CI 1.22;7.14, urinary incontinence (RR = 5.67; 95%CI 2.58;12.44 and use of laxatives (RR = 4.21; 95%CI 1.15;15.39 and antipsychotics (RR = 4.10; 95%CI 1.38;12.13. CONCLUSIONS The incidence of falls among older inpatients is high. Predicting factors found for falls were low education level, polypharmacy, visual impairment, gait and balance impairment, urinary incontinence and use of laxatives and antipsychotics. Measures to prevent falls in hospitals are needed to reduce the incidence of this event.
Directory of Open Access Journals (Sweden)
Miao Luo
2015-01-01
Conclusions: The established clinical nomogram provides high accuracy in predicting the individual risk of OSA. This tool may help physicians better make decisions on PSG arrangement for the patients referred to sleep centers.
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Predicting the likelihood of suicide attempts for rural outpatients with schizophrenia.
Lee, Jwo-Leun; Ma, Wei-Fen; Yen, Wen-Jiuan; Huang, Xuan-Yi; Chiang, Li-Chi
2012-10-01
To explore suicide predictors in rural outpatients with schizophrenia. Background. Suicide is a major cause of mortality in patients with schizophrenia. Evidence indicates that patients in rural areas are at high risk for inadequate health care services. However, information is limited on suicide risk in outpatients with schizophrenia in rural areas. Cross-sectional survey. Data were collected on individuals enrolled in the 2007 Taiwan National Health Insurance program as diagnosed with schizophrenia, ≥ 18 years, and living in a rural county. Eligible individuals (n=1655) were assessed by 12 community-based nurses at 12 public health centres. Participants' personal information was retrieved from National Health Insurance records using a personal data sheet, and treatment experiences were obtained by interviewing patients with a 10-item risk-assessment inventory. Data were collected over 18 months (2007-2008) and analysed by descriptive statistics and regression analyses. Risk of suicide attempt in the previous year had four significant predictors: number of self-harm incidents during the previous year, violent incidents towards others during the previous year, number of follow-ups by mental health clinics and number of involuntary hospitalisations during the previous year (R(2) = 0.337, adjusted R(2) = 0.334, F=133.19, p=0.000). Health care providers should assess rural outpatients with schizophrenia for suicidal thoughts by asking simple questions to evaluate for a history of self-harm and violence and by comparing this information with health system data on follow-ups by mental health clinics and involuntary hospitalisations. Community-based health providers may use these results to prioritise assessments when they have a high case load of patients with schizophrenia. Community-based nurses need to be trained to recognise these four predictors to increase their sensitivity to suicidality among patients with schizophrenia.
Prediction Model for Gastric Cancer Incidence in Korean Population.
Eom, Bang Wool; Joo, Jungnam; Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho
2015-01-01
Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.
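Harrell's C-statistic used above measures discrimination in censored survival data: among comparable subject pairs, the fraction in which the subject who fails earlier also carries the higher predicted risk. A minimal pairwise sketch (an O(n²) loop, fine for illustration but not for a million-subject cohort); the times, event flags, and risk scores are made up.

```python
def harrells_c(times, events, risk_scores):
    """Harrell's C for right-censored data.

    A pair (i, j) is comparable only if the earlier of the two times is an
    observed event (events entry truthy), not a censoring time.
    """
    concordant = tied = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = (i, j) if times[i] < times[j] else (j, i)  # a fails/censors first
            if not events[a]:
                continue  # earlier time censored: pair not comparable
            usable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1
            elif risk_scores[a] == risk_scores[b]:
                tied += 1
    return (concordant + 0.5 * tied) / usable

# Perfectly ranked hypothetical data: earlier failures have higher risk scores
c = harrells_c(times=[2, 5, 7, 9], events=[1, 1, 0, 1],
               risk_scores=[0.9, 0.6, 0.5, 0.2])
```

A C of 0.5 is no better than chance and 1.0 is perfect ranking, which puts the reported 0.764 and 0.706 in the "useful discrimination" range.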
Southwell, Brian G; Slater, Jonathan S; Rothman, Alexander J; Friedenberg, Laura M; Allison, Tiffany R; Nelson, Christina L
2010-11-01
Engaging social networks to encourage preventive health behavior offers a supplement to conventional mass media campaigns and yet we do not fully understand the conditions that facilitate or hamper such interpersonal diffusion. One set of factors that should affect the diffusion of health campaign information involves a person's community. Variables describing geographic communities should predict the likelihood of residents accepting campaign invitations to pass along information to friends, family, and others. We investigate two aspects of a community--the availability of community ties and residential stability--as potential influences on diffusion of publicly-funded breast cancer screening in the United States in 2008-2009. In a survey study of 1515 participants living in 91 zip codes across the State of Minnesota, USA, we focus on the extent to which women refer others when given the opportunity to nominate family, friends, and peers to receive free mammograms. We predicted nomination tendency for a particular zip code would be a function of available community ties, measured as religious congregation density in that zip code, and also expected the predictive power of available ties would be greatest in communities with relatively high residential stability (meaning lower turnover in home residence). Results support our hypotheses. Congregation density positively predicted nomination tendency both in bivariate analysis and in Tobit regression models, and was most predictive in zip codes above the median in residential stability. We conclude that having a local infrastructure of social ties available in a community predicts the diffusion of available health care services in that community. Copyright © 2010 Elsevier Ltd. All rights reserved.
Dynamic prediction of cumulative incidence functions by direct binomial regression.
Grand, Mia K; de Witte, Theo J M; Putter, Hein
2018-03-25
In recent years there have been a series of advances in the field of dynamic prediction. Among those is the development of methods for dynamic prediction of the cumulative incidence function in a competing risks setting. These models enable the predictions to be updated as time progresses and more information becomes available; for example, when a patient returns for a follow-up visit after completing a year of treatment, the risks of death and adverse events may have changed since treatment initiation. One approach to modelling the cumulative incidence function in competing risks is direct binomial regression, where right censoring of the event times is handled by inverse probability of censoring weights. We extend the approach by combining it with landmarking to enable dynamic prediction of the cumulative incidence function. The proposed models are very flexible, as they allow the covariates to have complex time-varying effects, and we illustrate how to investigate possible time-varying structures using Wald tests. The models are fitted using generalized estimating equations. The method is applied to bone marrow transplant data and the performance is investigated in a simulation study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
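The inverse probability of censoring weights mentioned above are typically built from a Kaplan-Meier estimate of the *censoring* distribution: each observed event at time t is up-weighted by 1/G(t-), where G is the probability of still being uncensored just before t. A minimal sketch on toy data (the paper embeds such weights in estimating equations for the cumulative incidence function; this only shows the weight construction):

```python
# Sketch: inverse probability of censoring weights (IPCW) from a
# Kaplan-Meier estimate of the censoring distribution. Toy data.

def censoring_survival(times, events):
    """Kaplan-Meier curve for censoring, treating censoring (event == 0)
    as the 'event' of interest. Returns (time, G(t)) pairs in time order."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    g = 1.0
    curve = []
    for i in order:
        if events[i] == 0:                  # a censoring occurred here
            g *= (at_risk - 1) / at_risk
        curve.append((times[i], g))
        at_risk -= 1
    return curve

def G_at(curve, t):
    """Left-continuous lookup G(t-): probability of remaining uncensored."""
    g = 1.0
    for s, val in curve:
        if s < t:
            g = val
        else:
            break
    return g

times  = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1,   0,   1,   0,   1]            # 0 = censored
curve = censoring_survival(times, events)
# IPCW weight for an event observed at time t is 1 / G(t-).
weights = [1.0 / G_at(curve, t) for t, e in zip(times, events) if e == 1]
print([round(w, 3) for w in weights])
```

Later events get larger weights because they stand in for subjects lost to censoring along the way.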
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the data.
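The idea of approximating a likelihood ratio with a classifier rests on a simple identity: a calibrated classifier score s(x) = p(H1|x) (with equal priors) recovers the likelihood ratio as s/(1-s). A sketch of that identity, using two 1-D Gaussians where the optimal score is known in closed form; this is an illustration of the general trick, not code from the talk:

```python
import math

# Sketch of the "likelihood-ratio trick": a calibrated classifier score
# s(x) = P(H1 | x) gives the likelihood ratio via s / (1 - s).
# Demonstrated with two unit-variance 1-D Gaussians.

def gauss_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_score(x, mu0, mu1):
    """Bayes-optimal P(H1 | x) for equal priors: a sigmoid of the log ratio."""
    log_r = (mu1 - mu0) * x + 0.5 * (mu0 ** 2 - mu1 ** 2)
    return 1.0 / (1.0 + math.exp(-log_r))

mu0, mu1 = 0.0, 1.0
for x in [-1.0, 0.2, 1.5]:
    s = optimal_score(x, mu0, mu1)
    ratio_from_score = s / (1.0 - s)
    true_ratio = gauss_pdf(x, mu1) / gauss_pdf(x, mu0)
    assert abs(ratio_from_score - true_ratio) < 1e-9
print("score-based ratio matches the density ratio")
```

In practice the score comes from a trained classifier rather than a closed form, and one conditions it on the physics parameters (masses, couplings) to get a parameterized ratio.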
Owen, Art B
2001-01-01
Empirical likelihood provides inferences whose validity does not depend on specifying a parametric model for the data. Because it uses a likelihood, the method has certain inherent advantages over resampling methods: it uses the data to determine the shape of the confidence regions, and it makes it easy to combine data from multiple sources. It also facilitates incorporating side information, and it simplifies accounting for censored, truncated, or biased sampling. One of the first books published on the subject, Empirical Likelihood offers an in-depth treatment of this method for constructing confidence regions and testing hypotheses. The author applies empirical likelihood to a range of problems, from those as simple as setting a confidence region for a univariate mean under IID sampling, to problems defined through smooth functions of means, regression models, generalized linear models, estimating equations, or kernel smooths, and to sampling with non-identically distributed data. Abundant figures offer vi...
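The simplest case named above, a confidence region for a univariate mean, has a well-known computational form: maximize the product of observation weights subject to the weighted mean equalling the hypothesized value. The profile weights are w_i = 1/(n(1 + λ(x_i - μ0))), with λ solving a single monotone equation. A minimal sketch, assuming a small sample and solving for λ by bisection:

```python
import math

# Sketch: empirical likelihood for a univariate mean. Weights are
# w_i = 1 / (n * (1 + lam * (x_i - mu0))), with lam the root of
# sum_i (x_i - mu0) / (1 + lam * (x_i - mu0)) = 0.

def el_weights(xs, mu0, tol=1e-12):
    n = len(xs)
    d = [x - mu0 for x in xs]
    # lam must keep every 1 + lam * d_i positive; bracket accordingly
    # (assumes mu0 lies strictly inside the range of the data).
    lo = max(-1.0 / di for di in d if di > 0) + 1e-9
    hi = min(-1.0 / di for di in d if di < 0) - 1e-9
    def g(lam):   # monotone decreasing in lam
        return sum(di / (1.0 + lam * di) for di in d)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [1.0 / (n * (1.0 + lam * di)) for di in d]

xs = [1.0, 2.0, 4.0, 7.0]
w = el_weights(xs, mu0=3.0)
# The constraints hold by construction:
assert abs(sum(w) - 1.0) < 1e-6
assert abs(sum(wi * xi for wi, xi in zip(w, xs)) - 3.0) < 1e-6
# -2 log EL ratio, compared against chi-square(1) for a confidence region:
log_elr = -2.0 * sum(math.log(len(xs) * wi) for wi in w)
print(round(log_elr, 4))
```

Sweeping μ0 and keeping the values where -2 log ELR stays below the chi-square(1) quantile traces out the confidence region.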
A new, accurate predictive model for incident hypertension.
Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K
2013-11-01
Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations (area under the receiver operating characteristic curve, AUC 0.76). The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.
Heinrich, Verena; Kamphans, Tom; Mundlos, Stefan; Robinson, Peter N; Krawitz, Peter M
2017-01-01
Next generation sequencing technology considerably changed the way we screen for pathogenic mutations in rare Mendelian disorders. However, the identification of the disease-causing mutation amongst thousands of variants of partly unknown relevance is still challenging, and efficient techniques that reduce the genomic search space play a decisive role. Often segregation or linkage analysis is used to prioritize candidates; however, these approaches require correct information about the degree of relationship among the sequenced samples. For quality assurance, an automated control of pedigree structures and sample assignment is therefore highly desirable in order to detect label mix-ups that might otherwise corrupt downstream analysis. We developed an algorithm based on likelihood ratios that discriminates between different classes of relationship for an arbitrary number of genotyped samples. By identifying the most likely class we are able to reconstruct entire pedigrees iteratively, even for highly consanguineous families. We tested our approach on exome data of different sequencing studies and achieved high precision for all pedigree predictions. By analyzing the precision for varying degrees of relatedness or inbreeding we could show that a prediction is robust down to magnitudes of a few hundred loci. A Java standalone application that computes the relationships between multiple samples, as well as an R script that visualizes the pedigree information, is available for download and as a web service at www.gene-talk.de. Contact: heinrich@molgen.mpg.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Bilska-Wolak, Anna O.; Floyd, Carey E., Jr.; Lo, Joseph Y.
2003-05-01
Potential malignancy of a mammographic lesion can be assessed using the mathematically optimal likelihood ratio (LR) from signal detection theory. We developed a LR classifier for prediction of breast biopsy outcome of mammographic masses from BI-RADS findings. We used cases from Duke University Medical Center (645 total, 232 malignant) and University of Pennsylvania (496, 200). The LR was trained and tested alternately on both subsets. Leave-one-out sampling was used when training and testing were performed on the same data set. When tested on the Duke set, the LR achieved a Receiver Operating Characteristic (ROC) area of 0.91 +/- 0.01, regardless of whether the Duke or the Pennsylvania set was used for training. The LR achieved an ROC area of 0.85 +/- 0.02 for the Pennsylvania set, again regardless of which set was used for training. When using actual case data for training, the LR's procedure is equivalent to case-based reasoning, and can explain the classifier's decisions in terms of similarity to other cases. These preliminary results suggest that the LR is a robust classifier for prediction of biopsy outcome using biopsy cases from different medical centers.
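A likelihood ratio classifier over discrete findings, in the spirit of the BI-RADS-based LR above, scores each case by the ratio of class-conditional frequencies; ROC area can then be computed as the Mann-Whitney statistic over malignant/benign score pairs. The feature values and counts below are invented for illustration (the actual BI-RADS descriptors and frequencies are in the paper, not here):

```python
from collections import Counter

# Sketch: likelihood-ratio classifier over discrete findings, with ROC
# area computed as the Mann-Whitney statistic. Toy feature values.

def lr_scores(train_pos, train_neg, cases, smooth=1.0):
    """LR(x) = P(x | malignant) / P(x | benign), Laplace-smoothed."""
    cp, cn = Counter(train_pos), Counter(train_neg)
    vocab = set(cp) | set(cn) | set(cases)
    k = len(vocab)
    def lr(x):
        p = (cp[x] + smooth) / (len(train_pos) + smooth * k)
        q = (cn[x] + smooth) / (len(train_neg) + smooth * k)
        return p / q
    return [lr(x) for x in cases]

def roc_area(pos_scores, neg_scores):
    """Probability a malignant case outscores a benign one (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

malignant = ["spiculated", "spiculated", "irregular", "round"]
benign    = ["round", "round", "oval", "irregular"]
pos = lr_scores(malignant, benign, malignant)
neg = lr_scores(malignant, benign, benign)
print(round(roc_area(pos, neg), 3))
```

Because each score is a ratio of observed frequencies, a decision can be explained by pointing at the matching training cases, which is the case-based-reasoning reading mentioned in the abstract.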
Clinical-genetic model predicts incident impulse control disorders in Parkinson’s disease
Kraemmer, Julia; Smith, Kara; Weintraub, Daniel; Guillemot, Vincent; Nalls, Mike A; Cormier-Dequaire, Florence; Moszer, Ivan; Brice, Alexis; Singleton, Andrew B; Corvol, Jean-Christophe
2016-01-01
Objectives Impulse control disorders (ICD) are commonly associated with dopamine replacement therapy (DRT) in patients with Parkinson’s disease (PD). Our aims were to estimate ICD heritability and to predict ICD by a candidate genetic multivariable panel in patients with PD. Methods Data from de novo patients with PD, drug-naïve and free of ICD behaviour at baseline, were obtained from the Parkinson’s Progression Markers Initiative cohort. Incident ICD behaviour was defined as positive score on the Questionnaire for Impulsive-Compulsive Disorders in PD. ICD heritability was estimated by restricted maximum likelihood analysis on whole exome sequencing data. 13 candidate variants were selected from the DRD2, DRD3, DAT1, COMT, DDC, GRIN2B, ADRA2C, SERT, TPH2, HTR2A, OPRK1 and OPRM1 genes. ICD prediction was evaluated by the area under the curve (AUC) of receiver operating characteristic (ROC) curves. Results Among 276 patients with PD included in the analysis, 86% started DRT, 40% were on dopamine agonists (DA), 19% reported incident ICD behaviour during follow-up. We found heritability of this symptom to be 57%. Adding genotypes from the 13 candidate variants significantly increased ICD predictability (AUC=76%, 95% CI (70% to 83%)) compared to prediction based on clinical variables only (AUC=65%, 95% CI (58% to 73%), p=0.002). The clinical-genetic prediction model reached highest accuracy in patients initiating DA therapy (AUC=87%, 95% CI (80% to 93%)). OPRK1, HTR2A and DDC genotypes were the strongest genetic predictive factors. Conclusions Our results show that adding a candidate genetic panel increases ICD predictability, suggesting potential for developing clinical-genetic models to identify patients with PD at increased risk of ICD development and guide DRT management. PMID:27076492
Blood Epigenetic Age may Predict Cancer Incidence and Mortality.
Zheng, Yinan; Joyce, Brian T; Colicino, Elena; Liu, Lei; Zhang, Wei; Dai, Qi; Shrubsole, Martha J; Kibbe, Warren A; Gao, Tao; Zhang, Zhou; Jafari, Nadereh; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A; Hou, Lifang
2016-03-01
Biological measures of aging are important for understanding the health of an aging population, with epigenetics particularly promising. Previous studies found that tumor tissue is epigenetically older than its donors are chronologically. We examined whether blood Δage (the discrepancy between epigenetic and chronological ages) can predict cancer incidence or mortality, thus assessing its potential as a cancer biomarker. In a prospective cohort, Δage and its rate of change over time were calculated in 834 blood leukocyte samples collected from 442 participants free of cancer at blood draw. About 3-5 years before cancer onset or death, Δage was associated with cancer risks in a dose-responsive manner (P = 0.02) and a one-year increase in Δage was associated with cancer incidence (HR: 1.06, 95% CI: 1.02-1.10) and mortality (HR: 1.17, 95% CI: 1.07-1.28). Participants with smaller Δage and decelerated epigenetic aging over time had the lowest risks of cancer incidence (P = 0.003) and mortality (P = 0.02). Δage was associated with cancer incidence in a 'J-shaped' manner for subjects examined pre-2003, and with cancer mortality in a time-varying manner. We conclude that blood epigenetic age may mirror epigenetic abnormalities related to cancer development, potentially serving as a minimally invasive biomarker for cancer early detection.
Analysis of significant factors for dengue fever incidence prediction.
Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak
2016-04-16
Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting.
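The multivariate Poisson regression credited above models log-expected counts as a linear function of covariates and is usually fitted by iteratively reweighted least squares (Newton's method on the Poisson log-likelihood). A minimal pure-Python sketch on synthetic counts; the covariates here are placeholders, not the study's mosquito or season variables:

```python
import math

# Sketch: Poisson regression fitted by Newton/IRLS on toy count data.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * bb for a, bb in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def poisson_irls(X, y, iters=200):
    """Fit log E[y] = X @ beta by Newton steps on the Poisson log-likelihood."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        mu = [math.exp(sum(xi[j] * beta[j] for j in range(p))) for xi in X]
        # Score vector and Fisher information of the Poisson log-likelihood.
        grad = [sum((y[i] - mu[i]) * X[i][j] for i in range(len(y))) for j in range(p)]
        hess = [[sum(mu[i] * X[i][j] * X[i][k] for i in range(len(y)))
                 for k in range(p)] for j in range(p)]
        beta = [b + s for b, s in zip(beta, solve(hess, grad))]
    return beta

# Counts whose expectations follow log mu = 1.0 + 0.5 * x (rounded, noise-free).
X = [[1.0, x] for x in [0.0, 1.0, 2.0, 3.0, 4.0]]
y = [round(math.exp(1.0 + 0.5 * xi[1])) for xi in X]
beta = poisson_irls(X, y)
print([round(b, 2) for b in beta])
```

The fitted coefficients land close to the generating values (1.0, 0.5); in the study's setting the covariate columns would hold season indicators and lagged infection rates.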
Leung, Gabriel M; Woo, Pauline P S; McGhee, Sarah M; Cheung, Annie N Y; Fan, Susan; Mang, Oscar; Thach, Thuan Q; Ngan, Hextan Y S
2006-08-01
To examine the secular effects of opportunistic screening for cervical cancer in a rich, developed community where most other such populations have long adopted organised screening. The analysis was based on 15 140 cases of invasive cervical cancer from 1972 to 2001. The effects of chronological age, time period, and birth cohort were decomposed using both maximum likelihood and Bayesian methods. The overall age-adjusted incidence decreased from 24.9 in 1972-74 to 9.5 per 100,000 in 1999-2001, in a log-linear fashion, yielding an average annual reduction of 4.0%, with two notable departures from the trend: (1) a 1920s cohort curve representing an age-period interaction masquerading as a cohort change, which denotes the first availability of Pap testing during the 1960s, concentrated among women in their 40s; (2) a hook around the calendar years 1982-83, when cervical cytology became a standard screening test for pregnant women. Hong Kong's cervical cancer rates have declined since Pap tests first became available in the 1960s, most probably because of increasing population coverage over time and in successive generations, first in a haphazard fashion and later punctuated by the systematic introduction of routine cytology as part of antenatal care in the 1980s.
Energy Technology Data Exchange (ETDEWEB)
Wall, M.J.W.
1992-07-01
The notion of "probability" is generalized to that of "likelihood," and a natural logical structure is shown to exist for any physical theory which predicts likelihoods. Two physically based axioms are given for this logical structure to form an orthomodular poset, with an order-determining set of states. The results strengthen the basis of the quantum logic approach to axiomatic quantum theory. 25 refs.
Blood Epigenetic Age may Predict Cancer Incidence and Mortality
Directory of Open Access Journals (Sweden)
Yinan Zheng
2016-03-01
Full Text Available Biological measures of aging are important for understanding the health of an aging population, with epigenetics particularly promising. Previous studies found that tumor tissue is epigenetically older than its donors are chronologically. We examined whether blood Δage (the discrepancy between epigenetic and chronological ages) can predict cancer incidence or mortality, thus assessing its potential as a cancer biomarker. In a prospective cohort, Δage and its rate of change over time were calculated in 834 blood leukocyte samples collected from 442 participants free of cancer at blood draw. About 3–5 years before cancer onset or death, Δage was associated with cancer risks in a dose-responsive manner (P = 0.02) and a one-year increase in Δage was associated with cancer incidence (HR: 1.06, 95% CI: 1.02–1.10) and mortality (HR: 1.17, 95% CI: 1.07–1.28). Participants with smaller Δage and decelerated epigenetic aging over time had the lowest risks of cancer incidence (P = 0.003) and mortality (P = 0.02). Δage was associated with cancer incidence in a ‘J-shaped’ manner for subjects examined pre-2003, and with cancer mortality in a time-varying manner. We conclude that blood epigenetic age may mirror epigenetic abnormalities related to cancer development, potentially serving as a minimally invasive biomarker for cancer early detection.
Predicting Cumulative Incidence Probability: Marginal and Cause-Specific Modelling
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
2005-01-01
cumulative incidence probability; cause-specific hazards; subdistribution hazard; binomial modelling
Predicting Cumulative Incidence Probability by Direct Binomial Regression
DEFF Research Database (Denmark)
Scheike, Thomas H.; Zhang, Mei-Jie
Binomial modelling; cumulative incidence probability; cause-specific hazards; subdistribution hazard
Gariel, C; Cogniat, B; Desgranges, F-P; Chassard, D; Bouvet, L
2018-03-01
Medication errors are not uncommon in hospitalized patients. Paediatric patients may have increased risk for medication errors related to complexity of weight-based dosing calculations or problems with drug preparation and dilution. This study aimed to determine the incidence of medication errors in paediatric anaesthesia in a university paediatric hospital, and to identify their characteristics and potential predictive factors. This prospective incident monitoring study was conducted between November 2015 and January 2016 in an exclusively paediatric surgical centre. Among the children included, the overall incidence of medication error was 2.6%. Drugs most commonly involved in medication errors were opioids and antibiotics. Incorrect dose was the most frequently reported type of error (n=27, 67.5%), with dilution error involved in 7/27 (26%) cases of incorrect dose. Duration of procedure >120 min was the only factor independently associated with medication error [adjusted odds ratio: 4 (95% confidence interval: 2-8); P=0.0001]. Medication errors are not uncommon in paediatric anaesthesia. Identification of the mechanisms related to medication errors might allow preventive measures that can be assessed in further studies. Copyright © 2017 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.
Conditional predictive inference for online surveillance of spatial disease incidence
Corberán-Vallet, Ana; Lawson, Andrew B.
2012-01-01
This paper deals with the development of statistical methodology for timely detection of incident disease clusters in space and time. The increasing availability of data on both the time and the location of events enables the construction of multivariate surveillance techniques, which may enhance the ability to detect localized clusters of disease relative to the surveillance of the overall count of disease cases across the entire study region. We introduce the surveillance conditional predictive ordinate as a general Bayesian model-based surveillance technique that allows us to detect small areas of increased disease incidence when spatial data are available. To address the problem of multiple comparisons, we incorporate a common probability that each small area signals an alarm when no change in the risk pattern of disease takes place into the analysis. We investigate the performance of the proposed surveillance technique within the framework of Bayesian hierarchical Poisson models using a simulation study. Finally, we present a case study of salmonellosis in South Carolina. PMID:21898522
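The surveillance idea above boils down to asking how surprising a new small-area count is under the model's posterior predictive distribution, and raising an alarm when the tail probability is small. A conjugate Gamma-Poisson model (whose posterior predictive is negative binomial) stands in here for the paper's Bayesian hierarchical Poisson model; prior values and counts are invented:

```python
import math

# Sketch: alarm on a small-area disease count when its predictive tail
# probability is low, under a conjugate Gamma-Poisson model.

def log_nbinom_pmf(y, r, p):
    """log PMF of the negative binomial, i.e. the Poisson-Gamma
    posterior predictive with size r and success probability p."""
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(p) + y * math.log(1.0 - p))

def tail_prob(y_obs, past_counts, a=1.0, b=1.0):
    """P(Y >= y_obs | past) under the posterior Gamma(a + sum, b + n) rate."""
    r = a + sum(past_counts)
    beta = b + len(past_counts)
    p = beta / (beta + 1.0)
    cdf = sum(math.exp(log_nbinom_pmf(y, r, p)) for y in range(y_obs))
    return 1.0 - cdf

past = [2, 3, 1, 2, 2]                 # weekly counts in one small area
print(round(tail_prob(3, past), 3))    # an unremarkable count
print(round(tail_prob(12, past), 6))   # a potential cluster: tiny tail probability
```

The paper's surveillance conditional predictive ordinate additionally handles spatial structure and multiple-comparison control across areas; the tail-probability alarm above is only the core mechanism.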
Incidence and predictability of amiodarone-induced thyrotoxicosis and hypothyroidism.
Hofmann, Andrea; Nawara, Clemens; Ofluoglu, Sedat; Holzmannhofer, Johannes; Strohmer, Bernhard; Pirich, Christian
2008-01-01
To determine the incidence and predictability of amiodarone-induced thyrotoxicosis (AIT) and hypothyroidism (AIH) in patients with cardiomyopathy. A total of 72 patients (mean age 69 +/- 11 years) living in an area previously endemic for thyroid disease but with currently sufficient iodine intake were enrolled in this prospective study. All participants were treated with amiodarone for the first time. The course of thyroid function in patients with normal thyroid morphology and in those with goiter was monitored over a median follow-up period of eight months in 71 (98.6%) patients. Of 72 participants, 18 (25.0%) had a morphologically normal thyroid gland as evidenced by sonography. The prevalence of thyroid dysfunction before initiation of amiodarone was 37.6% (27 of 72), with almost equal distribution between hypothyroidism and hyperthyroidism (14 and 13 patients). After treatment with amiodarone, thyroid dysfunction was diagnosed in 56.8% (25 of 44) of the patients without preexisting dysfunction. Of these 25 patients, nine (36.0%) developed either subclinical or overt AIH and 16 (64.0%) developed either subclinical or overt AIT. Although 61.1% (44 of 72) had normal thyroid function before initiation of amiodarone, this number decreased to 26.7% (19 of 71) after treatment with amiodarone. Cases of AIT and AIH occurred in patients with and without preexisting thyroid disorders. Because of the high incidence of amiodarone-induced thyroid dysfunction, regular testing of thyroid function is mandatory during and following amiodarone treatment.
Predicting the Incidence of Human Cataract through Retinal Imaging Technology.
Horng, Chi-Ting; Sun, Han-Ying; Liu, Hsiang-Jui; Lue, Jiann-Hwa; Yeh, Shang-Min
2015-11-19
With the progress of science, technology and medicine, the proportion of elderly people in society has gradually increased over the years. Thus, the medical care and health issues of this population have drawn increasing attention. In particular, among the common medical problems of the elderly, the occurrence of cataracts has been widely observed. In this study, we developed retinal imaging technology by establishing a human eye module with ray tracing. Periodic hole arrays with different degrees were constructed on the anterior surface of the lens to emulate the eyesight decline caused by cataracts. Then, we successfully predicted the incidence of cataracts among people with myopia ranging from -3.0 D to -9.0 D. Results show that periodic hole arrays cause severe eyesight decline when they are centralized in the visual center. However, the wide distribution of these arrays on the anterior surface of the lens would not significantly affect one's eyesight.
Predicting the Incidence of Human Cataract through Retinal Imaging Technology
Directory of Open Access Journals (Sweden)
Chi-Ting Horng
2015-11-01
Full Text Available With the progress of science, technology and medicine, the proportion of elderly people in society has gradually increased over the years. Thus, the medical care and health issues of this population have drawn increasing attention. In particular, among the common medical problems of the elderly, the occurrence of cataracts has been widely observed. In this study, we developed retinal imaging technology by establishing a human eye module with ray tracing. Periodic hole arrays with different degrees were constructed on the anterior surface of the lens to emulate the eyesight decline caused by cataracts. Then, we successfully predicted the incidence of cataracts among people with myopia ranging from −3.0 D to −9.0 D. Results show that periodic hole arrays cause severe eyesight decline when they are centralized in the visual center. However, the wide distribution of these arrays on the anterior surface of the lens would not significantly affect one’s eyesight.
Pirracchio, Romain; Yue, John K; Manley, Geoffrey T; van der Laan, Mark J; Hubbard, Alan E
2018-01-01
Standard statistical practice used for determining the relative importance of competing causes of disease typically relies on ad hoc methods, often byproducts of machine learning procedures (stepwise regression, random forest, etc.). A causal inference framework and data-adaptive methods may help to tailor parameters to match the clinical question and free one from arbitrary modeling assumptions. Our focus is on implementations of such semiparametric methods for a variable importance measure (VIM). We propose a fully automated procedure for VIM based on collaborative targeted maximum likelihood estimation (cTMLE), a method that optimizes the estimate of an association in the presence of potentially numerous competing causes. We applied the approach to data collected from traumatic brain injury patients, specifically a prospective, observational study including three US Level-1 trauma centers. The primary outcome was a disability score (Glasgow Outcome Scale-Extended, GOSE) collected three months post-injury. We identified clinically important predictors among a set of risk factors using a variable importance analysis based on targeted maximum likelihood estimators (TMLE) and on cTMLE. Via a parametric bootstrap, we demonstrate that the latter procedure has the potential for robust automated estimation of variable importance measures based upon machine-learning algorithms. The cTMLE estimator was associated with substantially less positivity bias as compared to TMLE and larger coverage of the 95% CI. This study confirms the power of an automated cTMLE procedure that can target model selection via machine learning to estimate VIMs in complicated, high-dimensional data.
Paperno, Denis; Marelli, Marco; Tentori, Katya; Baroni, Marco
2014-11-01
This paper draws a connection between statistical word association measures used in linguistics and confirmation measures from epistemology. Having theoretically established the connection, we replicate, in the new context of the judgments of word co-occurrence, an intriguing finding from the psychology of reasoning, namely that confirmation values affect intuitions about likelihood. We show that the effect, despite being based in this case on very subtle statistical insights about thousands of words, is stable across three different experimental settings. Our theoretical and empirical results suggest that factors affecting traditional reasoning tasks are also at play when linguistic knowledge is probed, and they provide further evidence for the importance of confirmation in a new domain. Copyright © 2014 Elsevier Inc. All rights reserved.
Adherence to a Mediterranean diet and prediction of incident stroke.
Tsivgoulis, Georgios; Psaltopoulou, Theodora; Wadley, Virginia G; Alexandrov, Andrei V; Howard, George; Unverzagt, Frederick W; Moy, Claudia; Howard, Virginia J; Kissela, Brett; Judd, Suzanne E
2015-03-01
There are limited data on the potential association of adherence to Mediterranean diet (MeD) with incident stroke. We sought to assess the longitudinal association between greater adherence to MeD and risk of incident stroke. We prospectively evaluated a population-based cohort of 30 239 individuals enrolled in REasons for Geographic and Racial Differences in Stroke (REGARDS) study, after excluding participants with stroke history, missing demographic data or food frequency questionnaires, and unavailable follow-up information. Adherence to MeD was categorized using MeD score. Incident stroke was adjudicated by expert panel review of medical records during a mean follow-up period of 6.5 years. Incident stroke was identified in 565 participants (2.8%; 497 and 68 cases of ischemic stroke [IS] and hemorrhagic stroke, respectively) of 20 197 individuals fulfilling the inclusion criteria. High adherence to MeD (MeD score, 5-9) was associated with lower risk of incident IS in unadjusted analyses (hazard ratio, 0.83; 95% confidence interval, 0.70-1.00; P=0.046). The former association retained its significance (hazard ratio, 0.79; 95% confidence interval, 0.65-0.96; P=0.016) after adjustment for demographics, vascular risk factors, blood pressure levels, and antihypertensive medications. When MeD was evaluated as a continuous variable, a 1-point increase in MeD score was independently associated with a 5% reduction in the risk of incident IS (95% confidence interval, 0-11%). We documented no association of adherence to MeD with incident hemorrhagic stroke. There was no interaction of race (P=0.37) on the association of adherence to MeD with incident IS. High adherence to MeD seems to be associated with a lower risk of incident IS independent of potential confounders. Adherence to MeD is not related to the risk of incident hemorrhagic stroke. © 2015 American Heart Association, Inc.
Predicting the incidence of human campylobacteriosis in Finland with time series analysis.
Sumi, Ayako; Hemilä, Harri; Mise, Keiji; Kobayashi, Nobumichi
2009-08-01
Human campylobacteriosis is a common bacterial cause of gastrointestinal infections. In this study, we tested whether spectral analysis based on the maximum entropy method (MEM) is useful in predicting the incidence of campylobacteriosis in five provinces in Finland, which has been accumulating good quality incidence data under the surveillance program for water- and food-borne infections. On the basis of the spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data in the years 2000-2005. The optimum least squares fitting (LSF) curve calculated by using the periodic modes reproduced the underlying variation of the incidence data. We extrapolated the LSF curve to the years 2006 and 2007 and predicted the incidence of campylobacteriosis. Our study suggests that MEM spectral analysis allows us to model temporal variations of the disease incidence with multiple periodic modes much more effectively than using the Fourier model, which has been previously used for modeling seasonally varying incidence data.
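The LSF-and-extrapolation step the abstract describes can be sketched as follows; the periods are assumed known here (the study derives them from MEM spectra), and the function names and synthetic series are illustrative, not from the study:

```python
import numpy as np

def fit_periodic_modes(t, y, periods):
    """Least-squares fit of y(t) as a constant plus sinusoids with the given
    periods -- a simplified stand-in for the LSF step; in the study the
    periods come from MEM spectral analysis rather than being assumed."""
    cols = [np.ones_like(t)]
    for p in periods:
        cols.append(np.sin(2 * np.pi * t / p))
        cols.append(np.cos(2 * np.pi * t / p))
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Return a callable that evaluates the fitted curve at new times
    return lambda tt: np.column_stack(
        [np.ones_like(tt)]
        + [f(2 * np.pi * tt / p) for p in periods for f in (np.sin, np.cos)]
    ) @ coef

# Synthetic monthly incidence with an annual cycle
t = np.arange(72, dtype=float)
y = 10 + 4 * np.sin(2 * np.pi * t / 12)
model = fit_periodic_modes(t, y, periods=[12])
future = np.arange(72, 96, dtype=float)  # extrapolate two years ahead
pred = model(future)
```

Extrapolating the fitted curve beyond the observation window is exactly the prediction step applied to 2006-2007 in the study.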
Real Time Big Data Analytics for Predicting Terrorist Incidents
Toure, Ibrahim
2017-01-01
Terrorism is a complex and evolving phenomenon. In the past few decades, we have witnessed an increase in the number of terrorist incidents in the world. The security and stability of many countries are threatened by terrorist groups. Perpetrators now use sophisticated weapons and the attacks are more and more lethal. Currently, terrorist incidents…
Schoups, G.; Vrugt, J.A.
2010-01-01
Estimation of parameter and predictive uncertainty of hydrologic models has traditionally relied on several simplifying assumptions. Residual errors are often assumed to be independent and to be adequately described by a Gaussian probability distribution with a mean of zero and a constant variance.
Directory of Open Access Journals (Sweden)
Luan Yihui
2009-09-01
Full Text Available Abstract Background Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
Cabreira, Verónica; Pinto, Carla; Pinheiro, Manuela; Lopes, Paula; Peixoto, Ana; Santos, Catarina; Veiga, Isabel; Rocha, Patrícia; Pinto, Pedro; Henrique, Rui; Teixeira, Manuel R
2017-01-01
Lynch syndrome (LS) accounts for up to 4 % of all colorectal cancers (CRC). Detection of a pathogenic germline mutation in one of the mismatch repair genes is the definitive criterion for LS diagnosis, but it is time-consuming and expensive. Immunohistochemistry is the most sensitive prescreening test and its predictive value is very high for loss of expression of MSH2, MSH6, and (isolated) PMS2, but not for MLH1. We evaluated whether LS predictive models have a role in improving the molecular testing algorithm in this specific setting by studying 38 individuals referred for molecular testing who were subsequently shown to have loss of MLH1 immunoexpression in their tumors. For each proband we calculated a risk score, which represents the probability that the patient with CRC carries a pathogenic MLH1 germline mutation, using the PREMM1,2,6 and MMRpro predictive models. Of the 38 individuals, 18.4 % had a pathogenic MLH1 germline mutation. MMRpro performed better for the purpose of this study, presenting an AUC of 0.83 (95 % CI 0.67-0.9; P < 0.001) compared with an AUC of 0.68 (95 % CI 0.51-0.82, P = 0.09) for PREMM1,2,6. Considering a threshold of 5 %, MMRpro would eliminate unnecessary germline mutation analysis in a significant proportion of cases while keeping very high sensitivity. We conclude that MMRpro is useful to correctly predict who should be screened for a germline MLH1 gene mutation and propose an algorithm to improve the cost-effectiveness of LS diagnosis.
Identifying Predictive Factors for Incident Reports in Patients Receiving Radiation Therapy
Energy Technology Data Exchange (ETDEWEB)
Elnahal, Shereef M., E-mail: selnaha1@jhmi.edu [Department of Radiation Oncology and Molecular Radiation Sciences, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland (United States); Blackford, Amanda [Department of Oncology Biostatistics, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland (United States); Smith, Koren; Souranis, Annette N.; Briner, Valerie; McNutt, Todd R.; DeWeese, Theodore L.; Wright, Jean L.; Terezakis, Stephanie A. [Department of Radiation Oncology and Molecular Radiation Sciences, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, Maryland (United States)
2016-04-01
Purpose: To describe radiation therapy cases during which voluntary incident reporting occurred; and identify patient- or treatment-specific factors that place patients at higher risk for incidents. Methods and Materials: We used our institution's incident learning system to build a database of patients with incident reports filed between January 2011 and December 2013. Patient- and treatment-specific data were reviewed for all patients with reported incidents, which were classified by step in the process and root cause. A control group of patients without events was generated for comparison. Summary statistics, likelihood ratios, and mixed-effect logistic regression models were used for group comparisons. Results: The incident and control groups comprised 794 and 499 patients, respectively. Common root causes included documentation errors (26.5%), communication (22.5%), technical treatment planning (37.5%), and technical treatment delivery (13.5%). Incidents were more frequently reported in minors (age <18 years) than in adult patients (37.7% vs 0.4%, P<.001). Patients with head and neck (16% vs 8%, P<.001) and breast (20% vs 15%, P=.03) primaries more frequently had incidents, whereas brain (18% vs 24%, P=.008) primaries were less frequent. Larger tumors (17% vs 10% had T4 lesions, P=.02), and cases on protocol (9% vs 5%, P=.005) or with intensity modulated radiation therapy/image guided intensity modulated radiation therapy (52% vs 43%, P=.001) were more likely to have incidents. Conclusions: We found several treatment- and patient-specific variables associated with incidents. These factors should be considered by treatment teams at the time of peer review to identify patients at higher risk. Larger datasets are required to recommend changes in care process standards, to minimize safety risks.
Extended likelihood inference in reliability
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.
1978-10-01
Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist.
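For the gamma-exponential prediction problems mentioned (next failure-time under a gamma prior on the failure rate), the standard Bayesian posterior-predictive survival probability has a closed form; the sketch below shows that standard form, not the paper's extended-likelihood intervals themselves, and the numbers are hypothetical:

```python
def predictive_survival(t, n, total_time, a, b):
    """P(next failure time > t) for an exponential model with a Gamma(a, b)
    prior (shape a, rate b) on the failure rate. The posterior is
    Gamma(a + n, b + total_time); marginalizing out the rate gives a Lomax
    (Pareto type II) predictive distribution with this survival function."""
    a_post = a + n            # posterior shape
    b_post = b + total_time   # posterior rate
    return (b_post / (b_post + t)) ** a_post

# Hypothetical data: 5 failures in 100 hours, weak Gamma(1, 10) prior
p = predictive_survival(t=20.0, n=5, total_time=100.0, a=1.0, b=10.0)
```

The same posterior parameters drive predictions of the number of failures in a time window (negative binomial) and of the waiting time to a given failure count.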
Sanders, J.B.; Bremmer, M.A.; Deeg, D.J.H.; Beekman, A.T.F.
2012-01-01
Objectives To investigate whether gait speed predicts incident depressive symptoms and whether depressive symptoms predict incident gait speed impairment; to ascertain the presence of shared risk factors for these associations. Design The Longitudinal Aging Study Amsterdam, a prospective cohort
Emmanuel, K.; Quinn, E.; Niu, J.; Guermazi, A.; Roemer, F.; Wirth, W.; Eckstein, F.; Felson, D.
2017-01-01
Objective: To test the hypothesis that quantitative measures of meniscus extrusion predict incident radiographic knee osteoarthritis (KOA), prior to the advent of radiographic disease. Methods: 206 knees with incident radiographic KOA (Kellgren Lawrence Grade (KLG) 0 or 1 at baseline, developing KLG 2 or greater with a definite osteophyte and joint space narrowing (JSN) grade ≥1 by year 4) were matched to 232 control knees not developing incident KOA. Manual segmentation of the central five slices of the medial and lateral meniscus was performed on coronal 3T DESS MRI and quantitative meniscus position was determined. Cases and controls were compared using conditional logistic regression adjusting for age, sex, BMI, race and clinical site. Sensitivity analyses of early (year [Y] 1/2) and late (Y3/4) incidence was performed. Results: Mean medial extrusion distance was significantly greater for incident compared to non-incident knees (1.56 mean ± 1.12 mm SD vs 1.29 ± 0.99 mm; +21%, P meniscus (25.8 ± 15.8% vs 22.0 ± 13.5%; +17%, P meniscus in incident medial KOA, or for the tibial plateau coverage between incident and non-incident knees. Restricting the analysis to medial incident KOA at Y1/2 differences were attenuated, but reached significance for extrusion distance, whereas no significant differences were observed at incident KOA in Y3/4. Conclusion: Greater medial meniscus extrusion predicts incident radiographic KOA. Early onset KOA showed greater differences for meniscus position between incident and non-incident knees than late onset KOA. PMID: 26318658
Parker, Christina N; Finlayson, Kathleen J; Edwards, Helen E
2017-10-01
Venous leg ulcers are characterized by a long healing process and repeated cycles of ulceration. A secondary analysis of data from multisite longitudinal studies was conducted to identify risk factors for delayed healing and recurrence of venous leg ulcers for development of risk assessment tools, and a single-site prospective study was performed to assess the new tools' interrater reliability (IRR). The development of the risk assessment tools was based on results from previous multivariate analyses combined with further risk factors documented in the literature from systematic reviews, randomized controlled trials, and cohort studies with regard to delayed healing and recurrence. The delayed healing tool contained 10 items, including patient demographics, living status, use of high-compression therapy, ulcer area, wound bed tissue type, and percent reduction in ulcer area after 2 weeks. The recurrence tool included 8 items, including history of deep vein thrombosis, duration of previous ulcer, history of previous ulcers, body mass index, living alone, leg elevation, walking, and compression. Using consensus procedures, content validity was established by an advisory group of 21 expert multidisciplinary clinicians and researchers. To determine intraclass correlation (ICC) and IRR, 3 raters assessed 26 patients with an open ulcer and 22 with a healed ulcer. IRR analysis indicated statistically significant agreement for the delayed healing tool (ICC 0.84; 95% confidence interval [CI], 0.70-0.92; P venous leg ulcers. Studies to examine the items with low ICC scores and to determine the predictive validity of these tools are warranted.
Anwar, Mohammad Y; Lewnard, Joseph A; Parikh, Sunil; Pitzer, Virginia E
2016-11-22
Malaria remains endemic in Afghanistan. National control and prevention strategies would be greatly enhanced through a better ability to forecast future trends in disease incidence. It is, therefore, of interest to develop a predictive tool for malaria patterns based on the current passive and affordable surveillance system in this resource-limited region. This study employs data from the Ministry of Public Health monthly reports from January 2005 to September 2015. Malaria incidence in Afghanistan was forecasted using autoregressive integrated moving average (ARIMA) models in order to build a predictive tool for malaria surveillance. Environmental and climate data were incorporated to assess whether they improve predictive power of models. Two models were identified, each appropriate for different time horizons. For near-term forecasts, malaria incidence can be predicted based on the number of cases in the four previous months and 12 months prior (Model 1); for longer-term prediction, malaria incidence can be predicted using the rates 1 and 12 months prior (Model 2). Next, climate and environmental variables were incorporated to assess whether the predictive power of the proposed models could be improved. Enhanced vegetation index was found to have increased the predictive accuracy of longer-term forecasts. Results indicate ARIMA models can be applied to forecast malaria patterns in Afghanistan, complementing current surveillance systems. The models provide a means to better understand malaria dynamics in a resource-limited context with minimal data input, yielding forecasts that can be used for public health planning at the national level.
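A minimal stand-in for the lag structure of Model 2 (rates 1 and 12 months prior) can be written as an ordinary least-squares autoregression; a real analysis would use a dedicated ARIMA implementation, and the data below are synthetic:

```python
import numpy as np

def fit_lagged_ar(y, lags):
    """Fit an autoregression on the given lags by ordinary least squares --
    a simplified sketch of the AR structure in the abstract's Model 2
    (lags 1 and 12 months), not a full ARIMA fit."""
    m = max(lags)
    X = np.column_stack([np.ones(len(y) - m)] +
                        [y[m - k:len(y) - k] for k in lags])
    coef, *_ = np.linalg.lstsq(X, y[m:], rcond=None)
    return coef  # [intercept, coefficient per lag]

def forecast_one(y, lags, coef):
    """One-step-ahead forecast from the fitted lag coefficients."""
    return coef[0] + sum(c * y[len(y) - k] for c, k in zip(coef[1:], lags))

# Synthetic monthly incidence with a strong 12-month seasonal cycle
rng = np.random.default_rng(0)
t = np.arange(240)
y = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, len(t))
coef = fit_lagged_ar(y, lags=[1, 12])
yhat = forecast_one(y, [1, 12], coef)
```

The 12-month lag captures seasonality, which is why the fitted forecast for the next month lands near the seasonal mean of the synthetic series.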
DEFF Research Database (Denmark)
Kruse, C; Goemaere, S; De Buyser, S
2018-01-01
There is an increasing awareness of sarcopenia in older people. We applied machine learning principles to predict mortality and incident immobility in older Belgian men through sarcopenia and frailty characteristics. Mortality could be predicted with good accuracy. Serum 25-hydroxyvitamin D... and bone mineral density scores were the most important predictors. INTRODUCTION: Machine learning principles were used to predict 5-year mortality and 3-year incident severe immobility in a population of older men by frailty and sarcopenia characteristics. METHODS: Using prospective data from 1997 on 264... the most important predictors of immobility. Sarcopenia assessed by lean mass estimates was relevant to mortality prediction but not immobility prediction. CONCLUSIONS: Using advanced statistical models and a machine learning approach 5-year mortality can be predicted with good accuracy using a Bayesian...
Patterns, incidence and predictive factors for pain after interventional radiology
International Nuclear Information System (INIS)
England, A.; Tam, C.L.; Thacker, D.E.; Walker, A.L.; Parkinson, A.S.; DeMello, W.; Bradley, A.J.; Tuck, J.S.; Laasch, H.-U.; Butterfield, J.S.; Ashleigh, R.J.; England, R.E.; Martin, D.F.
2005-01-01
AIM: To evaluate prospectively the pattern, severity and predictive factors of pain after interventional radiological procedures. MATERIALS AND METHODS: All patients undergoing non-arterial radiological interventional procedures were assessed using a visual-analogue scale (VAS) for pain before and at regular intervals for 24 h after their procedure. RESULTS: One hundred and fifty patients (87 men, mean age 62 years, range 18-92 years) were entered into the study. Significant increases in VAS score occurred 8 h after percutaneous biliary procedures (+47.7 mm, SD 14.9 mm; p=0.001), 6 h after central venous access and gastrostomy insertion (+23.7 mm, SD 19.5 mm; p=0.001 and +28.4 mm, SD 9.7 mm; p=0.007, respectively) and 4 h after oesophageal stenting (+27.8 mm, SD 20.2 mm, p=0.001). Non-significant increases in VAS pain score were observed after duodenal and colonic stenting (duodenal: +5.13 mm, SD 7.47 mm; p=0.055, colonic: +23.3 mm, SD 13.10 mm, p=0.250) at a mean of 5 h (range 4-6 h). Patients reported a significant reduction in pain score for nephrostomy insertion (-28.4 mm, SD 7.11 mm, p=0.001). Post-procedural analgesia was required in 99 patients (69.2%), 40 (28.0%) requiring opiates. Maximum post-procedural VAS pain score was significantly higher in patients who had no pre-procedural analgesia (p=0.003). CONCLUSION: Post-procedural pain is common and the pattern and severity of pain between procedures is variable. Pain control after interventional procedures is often inadequate, and improvements in pain management are required.
Adams, Megan A; Hosmer, Amy E; Wamsteker, Erik J; Anderson, Michelle A; Elta, Grace H; Kubiliun, Nisa M; Kwon, Richard S; Piraka, Cyrus R; Scheiman, James M; Waljee, Akbar K; Hussain, Hero K; Elmunzer, B Joseph
2015-07-01
Existing guidelines aim to stratify the likelihood of choledocholithiasis to guide the use of ERCP versus a lower-risk diagnostic study such as EUS, MRCP, or intraoperative cholangiography. To assess the performance of existing guidelines in predicting choledocholithiasis and to determine whether trends in laboratory parameters improve diagnostic accuracy. Retrospective cohort study. Tertiary-care hospital. Hospitalized patients presenting with suspected choledocholithiasis over a 6-year period. Assessment of the American Society for Gastrointestinal Endoscopy (ASGE) guidelines, its component variables, and laboratory trends in predicting choledocholithiasis. The presence of choledocholithiasis confirmed by EUS, MRCP, or ERCP. A total of 179 (35.9%) of the 498 eligible patients met ASGE high-probability criteria for choledocholithiasis on initial presentation. Of those, 99 patients (56.3%) had a stone/sludge on subsequent confirmatory test. Of patients not meeting high-probability criteria on presentation, 111 (34.8%) had a stone/sludge. The overall accuracy of the guidelines in detecting choledocholithiasis was 62.1% (47.4% sensitivity, 73% specificity) based on data available at presentation. The accuracy was unchanged when incorporating the second set of liver chemistries obtained after admission (63.2%), suggesting that laboratory trends do not improve performance. Retrospective study, inconsistent timing of the second set of biochemical markers. In our cohort of patients, existing choledocholithiasis guidelines lacked diagnostic accuracy, likely resulting in overuse of ERCP. Incorporation of laboratory trends did not improve performance. Additional research focused on risk stratification is necessary to meet the goal of eliminating unnecessary diagnostic ERCP. Copyright © 2015 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Nixon, Reginald D. V.; Ellis, Alicia A.; Nehmy, Thomas J.; Ball, Shelley-Anne
2010-01-01
Three screening methods to predict posttraumatic stress disorder (PTSD) and depression symptoms in children following single-incident trauma were tested. Children and adolescents (N = 90; aged 7-17 years) were assessed within 4 weeks of an injury that led to hospital treatment and followed up 3 and 6 months later. Screening methods were adapted…
A spatial model to predict the incidence of neural tube defects
Directory of Open Access Journals (Sweden)
Li Lianfa
2012-11-01
Full Text Available Abstract Background Environmental exposure may play an important role in the incidence of neural tube defects (NTD), a class of birth defects. Their influence on NTD is likely non-linear; few studies have considered spatial autocorrelation of residuals in the estimation of NTD risk. We aimed to develop a spatial model based on a generalized additive model (GAM) plus cokriging to examine and model the expected incidences of NTD and make inferences about the incidence risk. Methods We developed a spatial model to predict the expected incidences of NTD at village level in Heshun County, Shanxi Province, China, a region with a high incidence of NTD. GAM was used to establish linear and non-linear relationships between local covariates and the expected NTD incidences. We examined the following village-level covariates in the model: projected coordinates, soil types, lithological classes, distance to watershed, rivers, faults and major roads, annual average fertilizer use, fruit and vegetable production, gross domestic product, and the number of doctors. The residuals from GAM were assumed to be spatially auto-correlative and cokriged with regional residuals to improve the prediction. Our approach was compared with three other models: universal kriging, generalized linear regression, and GAM. Cross-validation was conducted for validation. Results Our model predicted the expected incidences of NTD well, with a good CV R2 of 0.80. Important predictive factors included fertilizer use, the locations of the centroid of each village, the shortest distance to rivers and faults, and lithological classes, with significant spatial autocorrelation of residuals. Our model out-performed the other three methods by 16% or more in terms of R2. Conclusions The variance explained by our model was approximately 80%. This modeling approach is useful for NTD epidemiological studies and intervention planning.
Symptoms of delirium predict incident delirium in older long-term care residents.
Cole, Martin G; McCusker, Jane; Voyer, Philippe; Monette, Johanne; Champoux, Nathalie; Ciampi, Antonio; Vu, Minh; Dyachenko, Alina; Belzile, Eric
2013-06-01
Detection of long-term care (LTC) residents at risk of delirium may lead to prevention of this disorder. The primary objective of this study was to determine if the presence of one or more Confusion Assessment Method (CAM) core symptoms of delirium at baseline assessment predicts incident delirium. Secondary objectives were to determine if the number or the type of symptoms predict incident delirium. The study was a secondary analysis of data collected for a prospective study of delirium among older residents of seven LTC facilities in Montreal and Quebec City, Canada. The Mini-Mental State Exam (MMSE), CAM, Delirium Index (DI), Hierarchic Dementia Scale, Barthel Index, and Cornell Scale for Depression were completed at baseline. The MMSE, CAM, and DI were repeated weekly for six months. Multivariate Cox regression models were used to determine if baseline symptoms predict incident delirium. Of 273 residents, 40 (14.7%) developed incident delirium. Mean (SD) time to onset of delirium was 10.8 (7.4) weeks. When one or more CAM core symptoms were present at baseline, the Hazard Ratio (HR) for incident delirium was 3.5 (95% CI = 1.4, 8.9). The HRs for number of symptoms present ranged from 2.9 (95% CI = 1.0, 8.3) for one symptom to 3.8 (95% CI = 1.3, 11.0) for three symptoms. The HR for one type of symptom, fluctuation, was 2.2 (95% CI = 1.2, 4.2). The presence of CAM core symptoms at baseline assessment predicts incident delirium in older LTC residents. These findings have potentially important implications for clinical practice and research in LTC settings.
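As a rough illustration of the hazard-ratio quantity reported above, a crude (person-time) rate ratio can be computed directly; the study's HRs come from multivariate Cox models with adjustment, and the numbers here are invented for illustration only:

```python
def crude_hazard_ratio(events_a, time_a, events_b, time_b):
    """Crude (unadjusted) hazard ratio as a ratio of incidence rates per
    unit of person-time -- a rough stand-in for the Cox regression HRs in
    the abstract, which additionally adjust for covariates."""
    rate_a = events_a / time_a  # e.g. events per person-week, exposed group
    rate_b = events_b / time_b  # events per person-week, unexposed group
    return rate_a / rate_b

# Hypothetical counts: residents with a baseline CAM symptom vs without,
# incident delirium events over total person-weeks of follow-up
hr = crude_hazard_ratio(events_a=30, time_a=1000.0, events_b=10, time_b=1200.0)
```

A crude ratio like this is the intuition behind the reported HR of 3.5: the symptomatic group accrues events at several times the rate of the asymptomatic group per unit of follow-up time.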
Prediction of cancer incidence in Tyrol/Austria for year of diagnosis 2020.
Oberaigner, Willi; Geiger-Gritsch, Sabine
2014-10-01
Prediction of the number of incident cancer cases is very relevant for health planning purposes and allocation of resources. The shift towards older age groups in central European populations in the next decades is likely to contribute to an increase in cancer incidence for many cancer sites. In Tyrol, cancer incidence data have been registered with a high level of completeness for more than 20 years. We therefore aimed to compute well-founded predictions of cancer incidence for Tyrol for the year 2020 for all frequent cancer sites and for all cancer sites combined. After defining a prediction base range for every cancer site, we extrapolated the age-specific time trends in the prediction base range following a linear model for increasing and a log-linear model for decreasing time trends. The extrapolated time trends were evaluated for the year 2020 applying population figures supplied by Statistics Austria. Compared with the number of annual incident cases for the year 2009 for all cancer sites combined except non-melanoma skin cancer, we predicted an increase of 235 (15 %) and 362 (21 %) for females and males, respectively. For both sexes, more than 90 % of the increase is attributable to the shift toward older age groups in the next decade. The biggest increase in absolute numbers is seen for females in breast cancer (92, 21 %), lung cancer (64, 52 %), colorectal cancer (40, 24 %), melanoma (38, 30 %) and the haematopoietic system (37, 35 %) and for males in prostate cancer (105, 25 %), colorectal cancer (91, 45 %), the haematopoietic system (71, 55 %), bladder cancer (69, 100 %) and melanoma (64, 52 %). The increase in the number of incident cancer cases of 15 % in females and 21 % in males in the next decade is very relevant for planning purposes. However, external factors cause uncertainty in the prediction of some cancer sites (mainly prostate cancer and colorectal cancer) and the prediction intervals are still broad. Therefore
Earthquake likelihood model testing
Schorlemmer, D.; Gerstenberger, M.C.; Wiemer, S.; Jackson, D.D.; Rhoades, D.A.
2007-01-01
INTRODUCTION: The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
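The bin-wise score underlying such likelihood tests is typically a joint Poisson log-likelihood of the observed event counts under each model's forecast rates; the sketch below assumes that standard form, with toy numbers rather than real forecasts:

```python
from math import lgamma, log

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed bin counts given a forecast
    of expected earthquake rates per space-magnitude bin -- the basic score
    that likelihood-based forecast comparisons build on."""
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        # log P(N = n | rate lam) for an independent Poisson bin
        ll += -lam + n * log(lam) - lgamma(n + 1)
    return ll

# Two toy models scored against the same observed catalog of bin counts
obs = [0, 1, 0, 2]
model_a = [0.1, 0.9, 0.2, 1.8]   # rates close to what happened
model_b = [1.0, 0.1, 1.0, 0.1]   # mismatched rates
better = poisson_log_likelihood(model_a, obs) > poisson_log_likelihood(model_b, obs)
```

Comparing two models on the same catalog reduces to comparing these sums, which is the essence of the pairwise relative-consistency evaluation described above.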
Electrocardiogram (ECG) for the Prediction of Incident Atrial Fibrillation: An Overview.
Aizawa, Yoshifusa; Watanabe, Hiroshi; Okumura, Ken
2017-12-01
Electrocardiograms (ECGs) have been employed to medically evaluate participants in population-based studies, and ECG-derived predictors have been reported for incident atrial fibrillation (AF). Here, we reviewed the status of ECG in predicting new-onset AF. We surveyed population-based studies and revealed ECG variables to be risk factors for incident AF. When available, the predictive values of each ECG risk marker were calculated. Both the atrium-related and ventricle-related ECG variables were risk factors for incident AF, with significant hazard ratios (HRs) even after multivariate adjustments. The risk factors included P-wave indices (maximum P-wave duration, its dispersion or variation, and P-wave morphology) and premature atrial contractions (PACs) or runs. In addition, left ventricular hypertrophy (LVH), ST-T abnormalities, intraventricular conduction delay, QTc interval, and premature ventricular contractions (PVCs) or runs were a risk of incident AF. An HR of greater than 2.0 was observed in the upper 5th percentile of the P-wave durations, P-wave durations greater than 130 ms, P-wave morphology, PACs (PVCs) or runs, LVH, QTc, and left anterior fascicular blocks. The sensitivity, specificity, and positive and negative predictive values were 3.6-53.8%, 61.7-97.9%, 2.9-61.7%, and 77.4-97.7%, respectively. ECG variables are risk factors for incident AF. The correlation between the ECG-derived AF predictors, especially P-wave indices, and underlying diseases, and the effects of the reversal of the ECG-derived predictors on incident AF by treatment of comorbidities, require further study.
Social determinants of health predict state incidence of HIV and AIDS: a short report.
Zeglin, Robert J; Stein, J Paul
2015-01-01
There are approximately 1.2 million people living with HIV/AIDS (PLWHA) in the USA. Each year, there are roughly 50,000 new HIV diagnoses. The World Health Organization Commission on Social Determinants of Health (CSDH) identified several social determinants of health and health inequity (SDH) including childcare, education, employment, gender equality, health insurance, housing, and income. The CSDH also noted the significant impact the SDH can have on advocacy for social change, social interventions to reduce HIV prevalence, and health monitoring. The current analysis evaluated the predictive ability of five SDH for HIV and AIDS incidence at the state level. The SDH used in the analysis were education, employment, housing, income, and insurance; other SDH were not included because reliable and appropriate state-level data were not available. The results of multiple regression analyses indicate that the use of these five SDH creates statistically significant models predicting HIV incidence (adjusted R(2) = .54) and AIDS incidence (adjusted R(2) = .37) and accounts for a sizable portion of the variance for each. Stepwise variable selection reduced the necessary SDH to two: (1) education and (2) housing. These models are also statistically significant and account for a notable portion of variance in HIV incidence (adjusted R(2) = .55) and AIDS incidence (adjusted R(2) = .40). These outcomes demonstrate that state-level SDH, particularly education and housing, offer significant explanatory power regarding HIV and AIDS incidence rates. Congruent with the recommendations of the CSDH, the results of the current analysis suggest that state-sponsored policy and social interventions should consider and target SDH, especially education and housing, in attempts to reduce HIV and AIDS incidence rates.
Financial and Health Literacy Predict Incident Alzheimer's Disease Dementia and Pathology.
Yu, Lei; Wilson, Robert S; Schneider, Julie A; Bennett, David A; Boyle, Patricia A
2017-01-01
Domain specific literacy is a multidimensional construct that requires multiple resources including cognitive and non-cognitive factors. We test the hypothesis that domain specific literacy is associated with Alzheimer's disease (AD) dementia and AD pathology after controlling for cognition. Participants were community-based older persons who completed a baseline literacy assessment, underwent annual clinical evaluations for up to 8 years, and agreed to organ donation after death. Financial and health literacy was measured using 32 questions and cognition was measured using 19 tests. Annual diagnosis of AD dementia followed standard criteria. AD pathology was examined postmortem by quantifying plaques and tangles. Cox models examined the association of literacy with incident AD dementia. Performance of model prediction for incident AD dementia was assessed using indices for integrated discrimination improvement and continuous net reclassification improvement. Linear regression models examined the independent association of literacy with AD pathology in autopsied participants. All 805 participants were free of dementia at baseline and 102 (12.7%) developed AD dementia during the follow-up. Lower literacy was associated with higher risk for incident AD dementia, and the model with the literacy measure had better predictive performance than the one with demographics and cognition only. Lower literacy also was associated with higher burden of AD pathology after controlling for cognition (β = 0.07, p = 0.035). Literacy predicts incident AD dementia and AD pathology in community-dwelling older persons, and the association is independent of traditional measures of cognition.
Quantitative trunk sway and prediction of incident falls in older adults.
Mahoney, Jeannette R; Oh-Park, Mooyeon; Ayers, Emmeline; Verghese, Joe
2017-10-01
Poor balance and balance impairments are major predictors of falls. The purpose of the current study was to determine the clinical validity of baseline quantitative static trunk sway measurements in predicting incident falls in a cohort of 287 community-dwelling non-demented older Americans (mean age 76.14 ± 6.82 years; 54% female). Trunk sway was measured using the SwayStar™ device, and quantified as angular displacement in degrees in the anterior-posterior (pitch) and medio-lateral (roll) planes. Over a one-year follow-up period, 66 elders (23%) reported incident falls. Anterior-posterior angular displacement was a strong predictor of incident falls in older adults in Cox proportional hazards models (adjusted hazard ratio (aHR) = 1.59; p = 0.033, adjusted for age, gender, education, RBANS total score, medical comorbidities, geriatric depression scale score, sensory impairments, gait speed, and history of falls in the past year), whereas angular displacement in the medio-lateral plane was not predictive of falls (aHR = 1.35; p = 0.276). Our results reveal the significance of quantitative trunk sway, specifically anterior-posterior angular displacement, in predicting incident falls in older adults. Copyright © 2017 Elsevier B.V. All rights reserved.
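The adjusted hazard ratios above come from Cox proportional hazards models. A minimal unadjusted sketch with invented data (a real analysis would add the covariates listed in the abstract and use a survival library rather than hand-rolled optimization):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented toy data: standardized anterior-posterior sway, months of
# follow-up, and whether an incident fall occurred (1) or was censored (0).
x     = np.array([1.8, 0.2, -0.5, -0.4, -1.0, 0.7, -0.3, 1.5])
time  = np.array([3.0, 12.0, 12.0, 5.0, 12.0, 8.0, 12.0, 2.0])
event = np.array([1,   0,    0,    1,   0,    1,   0,    1])

def neg_log_partial_likelihood(beta):
    # Cox partial likelihood: at each event time, the faller's relative
    # hazard exp(beta*x) over the sum across subjects still at risk.
    nll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]
        nll += np.log(np.exp(beta * x[at_risk]).sum()) - beta * x[i]
    return nll

beta_hat = minimize_scalar(neg_log_partial_likelihood).x
print(f"estimated hazard ratio per SD of sway: {np.exp(beta_hat):.2f}")
```

The partial likelihood is convex in beta, so a one-dimensional minimizer suffices here; the hazard ratio is simply exp(beta).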
McCarthy, Linda C; Newcombe, Paul J; Whittaker, John C; Wurzelmann, John I; Fries, Michael A; Burnham, Nancy R; Cai, Gengqian; Stinnett, Sandra W; Trivedi, Trupti M; Xu, Chun-Fang
2012-09-01
To develop comprehensive predictive models for choroidal neovascularization (CNV) and geographic atrophy (GA) incidence within 3 years that can be applied realistically to clinical practice. Retrospective evaluation of data from a longitudinal study to develop and validate predictive models of CNV and GA. The predictive performance of clinical, environmental, demographic, and genetic risk factors was explored in regression models, using data from both eyes of 2011 subjects from the Age-Related Eye Disease Study (AREDS). The performance of predictive models was compared using 10-fold cross-validated receiver operating characteristic curves in the training data, followed by comparisons in an independent validation dataset (1410 AREDS subjects). Bayesian trial simulations were used to compare the usefulness of predictive models to screen patients for inclusion in prevention clinical trials. Logistic regression models that included clinical, demographic, and environmental factors had better predictive performance for 3-year CNV and GA incidence (area under the receiver operating characteristic curve of 0.87 and 0.89, respectively), compared with simple clinical criteria (AREDS simplified severity scale). Although genetic markers were associated significantly with 3-year CNV (CFH: Y402H; ARMS2: A69S) and GA incidence (CFH: Y402H), the inclusion of genetic factors in the models provided only marginal improvements in predictive performance. The logistic regression models combine good predictive performance with greater flexibility to optimize clinical trial design compared with simple clinical models (AREDS simplified severity scale). The benefit of including genetic factors to screen patients for recruitment to CNV prevention studies is marginal and is dependent on individual clinical trial economics. Copyright © 2012 Elsevier Inc. All rights reserved.
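The areas under the receiver operating characteristic curve reported above (0.87 and 0.89) can be computed directly from predicted risks via the rank (Mann-Whitney) interpretation of the AUROC. A small sketch with invented labels and scores:

```python
import numpy as np

def auroc(y_true, score):
    """AUROC via its rank interpretation: the probability that a randomly
    chosen case scores higher than a randomly chosen control, ties = 1/2."""
    y_true, score = np.asarray(y_true), np.asarray(score)
    pos, neg = score[y_true == 1], score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Invented 3-year CNV outcomes and predicted risks from a logistic model
y = [0, 0, 0, 1, 0, 1, 1, 0, 1]
p = [0.05, 0.20, 0.30, 0.85, 0.10, 0.60, 0.90, 0.40, 0.35]
print(round(auroc(y, p), 3))
```

Cross-validated versions, as used in the study, apply the same statistic to out-of-fold predictions to guard against optimism in the training data.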
Effects of passengers on bus driver celeration behavior and incident prediction.
Af Wåhlberg, A E
2007-01-01
Driver celeration (speed change) behavior of bus drivers has previously been found to predict their traffic incident involvement, but it has also been ascertained that the level of celeration is influenced by the number of passengers carried as well as other traffic density variables. This means that the individual level of celeration is not as well estimated as could be the case. Another hypothesized influence of the number of passengers is that of differential quality of measurements, where high passenger density circumstances are supposed to yield better estimates of the individual driver component of celeration behavior. Comparisons were made between different variants of celeration as a predictor of the traffic incidents of bus drivers. The number of bus passengers was held constant, and cases identified by their number of passengers per kilometer during measurement were excluded (in 12 samples of repeated measurements). After holding passengers constant, the correlations between celeration behavior and incident record increased very slightly. Also, the selective prediction of the incident record of those drivers who had had many passengers when measured increased the correlations even more. Traffic density variables like the number of passengers have little direct influence on the predictive power of celeration behavior, despite their impact upon absolute celeration level. Selective prediction, on the other hand, increased correlations substantially. This unusual effect was probably due to how the individual propensity for high or low celeration driving was affected by the number of stops made and general traffic density; differences between drivers in this respect were probably enhanced by the denser traffic, thus creating a better estimate of the theoretical celeration behavior parameter C. The new concept of selective prediction was discussed in terms of making estimates of the systematic differences in quality of the individual driver data.
Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M
2017-09-01
Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was often unclear. Only 2 models presented the recommended regression equations. There was significant heterogeneity in the discriminative ability of models with respect to age. In summary, there are incident HF risk prediction models with sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and the resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.
Applications of Machine learning in Prediction of Breast Cancer Incidence and Mortality
International Nuclear Information System (INIS)
Helal, N.; Sarwat, E.
2012-01-01
Breast cancer is one of the leading causes of cancer deaths for the female population in both developed and developing countries. In this work we have used the baseline descriptive data about the incidence (new cancer cases) of in situ breast cancer among Wisconsin females. The documented data were from the most recent 12-year period for which data are available. Wisconsin cancer incidence and mortality (deaths due to cancer) that occurred were also considered in this work. Artificial neural networks (ANNs) have been successfully applied to the prediction of the number of new cancer cases and mortality. Using artificial intelligence (AI) in this study, the numbers of new cancer cases and deaths that may occur are predicted.
Warnke, Ingeborg; Gamma, Alex; Buadze, Maria; Schleifer, Roman; Canela, Carlos; Strebel, Bernd; Tényi, Tamás; Rössler, Wulf; Rüsch, Nicolas; Liebrenz, Michael
2018-01-01
Psychiatry as a medical discipline is becoming increasingly important due to the high and increasing worldwide burden associated with mental disorders. Surprisingly, however, there is a lack of young academics choosing psychiatry as a career. Previous evidence on medical students’ perspectives is abundant but has methodological shortcomings. Therefore, by attempting to avoid previous shortcomings, we aimed to contribute to a better understanding of the predictors of the following three outcome variables: current medical students’ attitudes toward psychiatry, interest in psychiatry, and estimated likelihood of working in psychiatry. The sample consisted of N = 1,356 medical students at 45 medical schools in Germany and Austria as well as regions of Switzerland and Hungary with a German language curriculum. We used snowball sampling via Facebook with a link to an online questionnaire as the recruitment procedure. Snowball sampling is based on referrals made among people. This questionnaire included a German version of the Attitudes Toward Psychiatry Scale (ATP-30-G) and further variables related to outcomes and potential predictors in terms of sociodemography (e.g., gender) or medical training (e.g., curriculum-related experience with psychiatry). Data were analyzed by linear mixed models and further regression models. On average, students had a positive attitude to and high general interest in, but low professional preference for, psychiatry. A neutral attitude to psychiatry was partly related to the discipline itself, psychiatrists, or psychiatric patients. Female gender and previous experience with psychiatry, particularly curriculum-related and personal experience, were important predictors of all outcomes. Students in the first years of medical training were more interested in pursuing psychiatry as a career. Furthermore, the country of the medical school was related to the outcomes. However, statistical models explained only a small proportion of variance.
Directory of Open Access Journals (Sweden)
David A McAllister
Full Text Available Emphysema on CT is common in older smokers. We hypothesised that emphysema on CT predicts acute episodes of care for chronic lower respiratory disease among older smokers. Participants in a lung cancer screening study aged ≥ 60 years were recruited into a prospective cohort study in 2001-02. Two radiologists independently visually assessed the severity of emphysema as absent, mild, moderate or severe. Percent emphysema was defined as the proportion of voxels ≤ -910 Hounsfield Units. Participants completed a median of 5 visits over a median of 6 years of follow-up. The primary outcome was hospitalization, emergency room or urgent office visit for chronic lower respiratory disease. Spirometry was performed following ATS/ERS guidelines. Airflow obstruction was defined as FEV1/FVC ratio <0.70 and FEV1 <80% predicted. Of 521 participants, 4% had moderate or severe emphysema, which was associated with acute episodes of care (rate ratio 1.89; 95% CI: 1.01-3.52), adjusting for age, sex and race/ethnicity, as was percent emphysema, with similar associations for hospitalisation. Emphysema on visual assessment also predicted incident airflow obstruction (HR 5.14; 95% CI: 2.19-21.1). Visually assessed emphysema and percent emphysema on CT predicted acute episodes of care for chronic lower respiratory disease, with the former also predicting incident airflow obstruction among older smokers.
International Nuclear Information System (INIS)
Reinaldo Gonzalez; Scott R. Reeves; Eric Eslinger
2007-01-01
Accurate, high-resolution, three-dimensional (3D) reservoir characterization can provide substantial benefits for effective oilfield management. By doing so, the predictive reliability of reservoir flow models, which are routinely used as the basis for significant investment decisions designed to recover millions of barrels of oil, can be substantially improved. This is particularly true when Secondary Oil Recovery (SOR) or Enhanced Oil Recovery (EOR) operations are planned. If injectants such as water, hydrocarbon gases, steam, CO2, etc. are to be used, an understanding of fluid migration paths can mean the difference between economic success and failure. SOR/EOR projects will increasingly take place in heterogeneous reservoirs where interwell complexity is high and difficult to understand. Although reasonable reservoir characterization information often exists at the wellbore, the only economical way to sample the interwell region is with seismic methods, so today's standard practice for developing a 3D reservoir description is to resort to seismic inversion techniques. However, the application of these methods brings other technical drawbacks that can render them inefficient. The industry therefore needs improved reservoir characterization approaches that are quicker, more accurate, and less expensive than today's standard methods. To achieve this objective, the Department of Energy (DOE) has been promoting studies with the goal of evaluating whether robust relationships between data at vastly different scales of measurement could be established using advanced pattern recognition (soft computing) methods. Advanced Resources International (ARI) has performed two of these projects with encouraging results showing the feasibility of establishing critical relationships between data at different measurement scales to create high-resolution reservoir characterization. In this third study performed by ARI and also funded by the DOE, a model
Lix, Lisa M.; Yan, Lin; Hinds, Aynslie M.; Leslie, William D.
2017-01-01
International Classification of Diseases (ICD) codes have been used to ascertain individuals who are obese. There has been limited research about the predictive value of ICD-coded obesity for major chronic conditions at the population level. We tested the utility of ICD-coded obesity versus measured obesity for predicting incident major osteoporotic fracture (MOF), after adjusting for covariates (i.e., age and sex). In this historical cohort study (2001–2015), we selected 61,854 individuals aged 50 years and older from the Manitoba Bone Mineral Density Database, Canada. Body mass index (BMI) ≥30 kg/m2 was used to define measured obesity. Hospital and physician ICD codes were used to ascertain ICD-coded obesity and incident MOF. Average cohort age was 66.3 years and 90.3% were female. The sensitivity, specificity and positive predictive value for ICD-coded obesity using measured obesity as the reference were 0.11 (95% confidence interval [CI]: 0.10, 0.11), 0.99 (95% CI: 0.99, 0.99) and 0.79 (95% CI: 0.77, 0.81), respectively. ICD-coded obesity (adjusted hazard ratio [HR] 0.83; 95% CI: 0.70, 0.99) and measured obesity (adjusted HR 0.83; 95% CI: 0.78, 0.88) were associated with decreased MOF risk. Although the area under the receiver operating characteristic curve (AUROC) estimates for incident MOF were not significantly different for ICD-coded obesity versus measured obesity (0.648 for ICD-coded obesity versus 0.650 for measured obesity; P = 0.056 for AUROC difference), the category-free net reclassification index for ICD-coded obesity versus measured obesity was -0.08 (95% CI: -0.11, -0.06) for predicting incident MOF. ICD-coded obesity predicted incident MOF, though it had low sensitivity and reclassified MOF risk slightly less well than measured obesity. PMID:29216254
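The category-free net reclassification index quoted above (-0.08) compares per-subject risk movements between two competing predictors. A sketch with invented risks, following the usual continuous-NRI definition rather than the authors' code:

```python
import numpy as np

def continuous_nri(event, p_old, p_new):
    """Category-free net reclassification improvement: any increase in
    predicted risk counts as upward reclassification, any decrease as
    downward; events should move up, non-events should move down."""
    event, p_old, p_new = map(np.asarray, (event, p_old, p_new))
    up, down = p_new > p_old, p_new < p_old
    ev, ne = event == 1, event == 0
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[ne].mean() - up[ne].mean()
    return nri_events + nri_nonevents

# Invented fracture outcomes and risks from two rival predictors
event = [1, 1, 0, 0, 0, 1, 0, 0]
p_old = [0.30, 0.25, 0.10, 0.15, 0.20, 0.40, 0.05, 0.12]
p_new = [0.35, 0.20, 0.08, 0.18, 0.15, 0.45, 0.04, 0.10]
print(round(continuous_nri(event, p_old, p_new), 3))
```

A negative NRI, as reported for ICD-coded versus measured obesity, means the first predictor reclassifies risk in the wrong direction on net relative to the second.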
The phylogenetic likelihood library.
Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A
2015-03-01
We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter and branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. Using two likelihood-based phylogenetic codes as examples, we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
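The phylogenetic likelihood function that the PLL optimizes reduces, in the simplest two-taxon case, to a closed form. A toy sketch under the Jukes-Cantor model with invented sequences (the PLL itself handles full trees, general substitution models, and the numerical scaling mentioned above):

```python
import numpy as np

def jc69_pair_log_likelihood(seq1, seq2, t):
    """Log-likelihood of two aligned DNA sequences separated by branch
    length t (expected substitutions per site) under Jukes-Cantor:
    P(same base) = 1/4 + 3/4 * e^(-4t/3),
    P(a specific different base) = 1/4 - 1/4 * e^(-4t/3),
    with a uniform 1/4 stationary distribution at the root."""
    e = np.exp(-4.0 * t / 3.0)
    p_same, p_diff = 0.25 + 0.75 * e, 0.25 - 0.25 * e
    ll = 0.0
    for a, b in zip(seq1, seq2):
        ll += np.log(0.25 * (p_same if a == b else p_diff))
    return ll

s1, s2 = "ACGTACGTAA", "ACGTACCTAA"   # one mismatch in ten sites
# Crude grid search for the maximum-likelihood branch length
ts = np.linspace(0.01, 1.0, 200)
t_hat = ts[np.argmax([jc69_pair_log_likelihood(s1, s2, t) for t in ts])]
print(f"ML branch length ~ {t_hat:.2f}")
```

Libraries like the PLL generalize exactly this per-site product of transition probabilities to arbitrary trees via Felsenstein's pruning algorithm, which is where the optimization effort pays off.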
Does cannabis use predict the first incidence of mood and anxiety disorders in the adult population?
van Laar, Margriet; van Dorsselaer, Saskia; Monshouwer, Karin; de Graaf, Ron
2007-08-01
To investigate whether cannabis use predicted the first incidence of mood and anxiety disorders in adults during a 3-year follow-up period. Data were derived from the Netherlands Mental Health Survey and Incidence Study (NEMESIS), a prospective study in the adult population of 18-64 years. The analysis was carried out on 3881 people who had no life-time mood disorders and on 3854 people who had no life-time anxiety disorders at baseline. Life-time cannabis use and DSM-III-R mood and anxiety disorders, assessed with the Composite International Diagnostic Interview (CIDI). After adjustment for strong confounders, any use of cannabis at baseline predicted a modest increase in the risk of a first major depression (odds ratio 1.62; 95% confidence interval 1.06-2.48) and a stronger increase in the risk of a first bipolar disorder (odds ratio 4.98; 95% confidence interval 1.80-13.81). The risk of 'any mood disorder' was elevated for weekly and almost daily users but not for less frequent use patterns. However, dose-response relationships were less clear for major depression and bipolar disorder separately. None of the associations between cannabis use and anxiety disorders remained significant after adjustment for confounders. The associations between cannabis use and the first incidence of depression and bipolar disorder, which remained significant after adjustment for strong confounders, warrant research into the underlying mechanisms.
Wavelength prediction of laser incident on amorphous silicon detector by neural network
International Nuclear Information System (INIS)
Esmaeili Sani, V.; Moussavi-Zarandi, A.; Kafaee, M.
2011-01-01
In this paper we present a method based on artificial neural networks (ANN) and the use of only one amorphous semiconductor detector to predict the wavelength of incident laser. Amorphous semiconductors and especially amorphous hydrogenated silicon, a-Si:H, are now widely used in many electronic devices, such as solar cells, many types of position sensitive detectors and X-ray imagers for medical applications. In order to study the electrical properties and detection characteristics of thin films of a-Si:H, n-i-p structures have been simulated by SILVACO software. The basic electronic properties of most of the materials used are known, but device modeling depends on a large number of parameters that are not all well known. In addition, the relationship between the shape of the induced anode current and the wavelength of the incident laser leads to complicated calculations. Soft data-based computational methods can model multidimensional non-linear processes and represent the complex input-output relation between the form of the output signal and the wavelength of incident laser.
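The ANN mapping from pulse shape to wavelength can be sketched with a one-hidden-layer network trained by gradient descent. Everything below is synthetic: the pulse-shape features are invented stand-ins, not SILVACO outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the detector response: three invented pulse-shape
# features that vary smoothly with the laser wavelength (in nm).
wavelength = rng.uniform(400, 900, 300)
X = np.column_stack([np.sin(wavelength / 150),
                     wavelength / 900,
                     np.cos(wavelength / 200)])
X += 0.01 * rng.normal(size=X.shape)          # measurement noise
y = (wavelength - 400) / 500                  # target scaled to [0, 1]

# One-hidden-layer network trained by plain batch gradient descent.
W1 = 0.5 * rng.normal(size=(3, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    pred = (h @ W2).ravel() + b2[0]
    err = (pred - y)[:, None] / len(y)        # gradient of 0.5*MSE w.r.t. pred
    gW2 = h.T @ err
    gb2 = err.sum()
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred_nm = 400 + 500 * ((np.tanh(X @ W1 + b1) @ W2).ravel() + b2[0])
rmse = float(np.sqrt(np.mean((pred_nm - wavelength) ** 2)))
print(f"training RMSE: {rmse:.1f} nm")
```

The point of the sketch is the structure, a learned nonlinear map from signal features to wavelength; a real application would train on simulated or measured anode-current features and validate on held-out pulses.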
Wavelength prediction of laser incident on amorphous silicon detector by neural network
Energy Technology Data Exchange (ETDEWEB)
Esmaeili Sani, V., E-mail: vaheed_esmaeely80@yahoo.com [Amirkabir University of Technology, Faculty of Physics, P.O. Box 4155-4494, Tehran (Iran, Islamic Republic of); Moussavi-Zarandi, A.; Kafaee, M. [Amirkabir University of Technology, Faculty of Physics, P.O. Box 4155-4494, Tehran (Iran, Islamic Republic of)
2011-10-21
Directory of Open Access Journals (Sweden)
Jin Cao
2017-01-01
Full Text Available Background: Hyperuricemia (HUA) contributes to gout and many other diseases. Many hyperuricemia-related risk factors have been discovered, which provided the possibility for building a hyperuricemia prediction model. In this study we aimed to explore the incidence of hyperuricemia and develop hyperuricemia prediction models based on routine biomarkers for both males and females in urban Han Chinese adults. Methods: A cohort of 58,542 members of the urban population (34,980 males and 23,562 females) aged 20–80 years old, free of hyperuricemia at baseline examination, was followed up for a median 2.5 years. The Cox proportional hazards regression model was used to develop gender-specific prediction models. Harrell's C-statistic was used to evaluate the discrimination ability of the models, and 10-fold cross-validation was used to validate the models. Results: In 7139 subjects (5585 males and 1554 females), hyperuricemia occurred during a median of 2.5 years of follow-up, leading to a total incidence density of 49.63/1000 person-years (64.62/1000 person-years for males and 27.12/1000 person-years for females). The predictors of hyperuricemia were age, body mass index (BMI), systolic blood pressure, and serum uric acid for males, and BMI, systolic blood pressure, serum uric acid, and triglycerides for females. The models' C-statistics were 0.783 (95% confidence interval (CI): 0.779–0.786) for males and 0.784 (95% CI: 0.778–0.789) for females. After 10-fold cross-validation, the C-statistics were still steady, at 0.782 for males and 0.783 for females. Conclusions: In this study, gender-specific prediction models for hyperuricemia in urban Han Chinese adults were developed and performed well.
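Harrell's C-statistic, used above to evaluate the models, measures concordance between predicted risk and the observed ordering of events. A small sketch with invented survival data (a naive O(n²) loop; real implementations handle ties in time and large cohorts more carefully):

```python
import numpy as np

def harrells_c(time, event, risk):
    """Harrell's C: among usable pairs (the subject with the earlier time
    had an event), the fraction where the higher predicted risk fails
    first, counting tied risks as one half."""
    time, event, risk = map(np.asarray, (time, event, risk))
    conc = disc = ties = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:   # i fails before j
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] < risk[j]:
                    disc += 1
                else:
                    ties += 1
    return (conc + 0.5 * ties) / (conc + disc + ties)

# Invented linear predictors from a hypothetical gender-specific Cox model
time  = [2.5, 1.0, 2.5, 0.5, 2.0, 1.5]    # years to hyperuricemia/censoring
event = [0,   1,   0,   1,   1,   0]
risk  = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7]
print(round(harrells_c(time, event, risk), 3))
```

A value of 0.78, as reported, means roughly four out of five usable pairs are ordered correctly by the model's risk score.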
Maximum likelihood scaling (MALS)
Hoefsloot, Huub C. J.; Verouden, Maikel P. H.; Westerhuis, Johan A.; Smilde, Age K.
2006-01-01
A filtering procedure is introduced for multivariate data that does not suffer from noise amplification by scaling. A maximum likelihood principal component analysis (MLPCA) step is used as a filter that partly removes noise. This filtering can be used prior to any subsequent scaling and
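The filtering idea can be illustrated with ordinary PCA, which is the special case of maximum likelihood PCA when all measurement errors share the same variance; this sketch is illustrative, not the MALS algorithm itself:

```python
import numpy as np

def pca_filter(X, n_components):
    """Reconstruct X from its leading principal components only,
    discarding the noise-dominated minor components. With uniform
    measurement-error variance, MLPCA reduces to this ordinary PCA step."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]            # leading right singular vectors
    return mu + Xc @ V.T @ V         # rank-k reconstruction
```

Any subsequent scaling is then applied to the filtered data, so the scaling no longer amplifies the noise that was removed.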
DEFF Research Database (Denmark)
Orsted, David D; Nordestgaard, Børge G; Jensen, Gorm B
2012-01-01
It is largely unknown whether prostate-specific antigen (PSA) level at first date of testing predicts long-term risk of prostate cancer (PCa) incidence and mortality in the general population.
Likelihood estimators for multivariate extremes
Huser, Raphaël
2015-11-17
The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
[Apnoea in infants with bronchiolitis: Incidence and risk factors for a prediction model].
Ramos-Fernández, José Miguel; Moreno-Pérez, David; Gutiérrez-Bedmar, Mario; Ramírez-Álvarez, María; Martínez García, Yasmina; Artacho-González, Lourdes; Urda-Cardona, Antonio
2017-05-04
The presence of apnoea in acute bronchiolitis (AB) varies between 1.2% and 28.8%, depending on the series, and is one of its most feared complications. The aim of this study was to determine the incidence of apnoea in hospitalised patients diagnosed with AB, and to define the associated risk factors in order to construct a prediction model. A retrospective observational study was conducted on patients admitted to a tertiary hospital in the last 5 years with a diagnosis of AB according to the classic criteria. Data were collected on the frequency of apnoea and related clinical variables to find risk factors in a binary logistic regression model for the prediction of apnoea. A ROC curve was developed with the model. Apnoea was recorded during the admission of 53 (4.4%) patients out of a total of 1,197 cases found. The risk factors included in the equation were: female sex (OR: 0.6, 95% CI: 0.27-1.37), Caesarean delivery (OR: 3.44, 95% CI: 1.5-7.7), postmenstrual age ≤43 weeks (OR: 6.62, 95% CI: 2.38-18.7), fever (OR: 0.33, 95% CI: 0.09-1.97), low birth weight (OR: 5.93, 95% CI: 2.23-7.67), apnoea observed by caregivers before admission (OR: 5.93, 95% CI: 2.64-13.3), and severe bacterial infection (OR: 3.98, 95% CI: 1.68-9.46). The optimal sensitivity and specificity of the model in the ROC curve were 0.842 and 0.846, respectively (P<.001). The incidence of apnoea during admission was 4.4 per 100 AB admissions per year. The estimated prediction model equation may help the clinician to classify patients with an increased risk of apnoea during admission due to AB. Copyright © 2017. Published by Elsevier España, S.L.U.
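The area under the ROC curve used to evaluate such a logistic model equals the probability that a randomly chosen apnoea case receives a higher predicted risk than a randomly chosen non-case (the Mann-Whitney formulation). A minimal sketch in plain Python (not the authors' code):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 * (p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of cases from non-cases yields an AUC of 1.0
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))
```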
PREDICTION OF THE INCIDENCE OF PROSTATE CANCER IN THE URAL ECONOMIC REGION OF THE RUSSIAN FEDERATION
Directory of Open Access Journals (Sweden)
K. A. Ilyin
2016-01-01
Full Text Available Objective. To quantify and forecast the dynamics of registered cases of prostate cancer (PC) in the Ural economic region. Material and methods. The study used official statistics on the incidence of prostate cancer in the Russian Federation for the period from 2004 to 2013, inclusive. For the predictive calculation we used an upgraded Hurst method, which is also called the method of normalized range (R/S). All calculations and the resulting graphs were made with specialized software. Results. Based on the available statistical data for the specified period, we constructed graphs of the number of registered cases of prostate cancer for each subject of the Ural economic region and for Russia as a whole. After 2013 the graphs were built on the basis of the calculated forecast data. The forecast assumes that the factors contributing to the identification of patients with prostate cancer in the study area remain constant. The results indicate that the statistics of the indicator are inhomogeneous across the economic subdivisions of the study area. Overall, an increase in the incidence of prostate cancer is expected in the territory of the Ural economic region. The incidence rate in Russia is characterized by stable growth, which is also expected in the future (a projection until 2018). Conclusions. In recent years, the development of medical technology has expanded the arsenal of diagnostic and therapeutic options in prostate cancer, leading to alternative choices of activities in the preparation of an individual treatment plan for a patient with newly diagnosed disease. The increase in the share of costs in this section of oncology is due both to the increase in the absolute number of detected cases of the disease and to changes in the quality of care. In this regard, the observed and projected increase in the recorded incidence of prostate cancer naturally raises the question of the continued availability of
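The normalized range (R/S) method rescales the range of cumulative deviations by the standard deviation within windows of growing size; the slope of log(R/S) versus log(window size) estimates the Hurst exponent. A simplified sketch (the `rs_hurst` helper is illustrative, not the upgraded method used in the paper):

```python
import numpy as np

def rs_hurst(series, window_sizes):
    """Estimate the Hurst exponent: average R/S over non-overlapping
    windows of each size, then fit log(R/S) against log(size)."""
    x = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        ratios = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())      # cumulative deviations
            r = z.max() - z.min()            # range of the deviate series
            s = w.std()
            if s > 0:
                ratios.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(ratios)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope
```

For uncorrelated noise the estimate hovers near 0.5, while persistent (trending) series such as steadily growing incidence counts push it toward 1, which is what makes the method usable for extrapolation.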
Maximum Likelihood Fusion Model
2014-08-09
Symposium of Robotics Research. Sienna, Italy: Springer, 2003. [12] D. Hall and J. Llinas, “An introduction to multisensor data fusion,” Proceedings of... a data fusion approach for combining Gaussian metric models of an environment constructed by multiple agents that operate outside of a global... Keywords: data fusion, hypothesis testing, maximum likelihood estimation, mobile robot navigation.
Likelihood Inflating Sampling Algorithm
Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.
2016-01-01
Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...
Quantitative prediction of shrimp disease incidence via the profiles of gut eukaryotic microbiota.
Xiong, Jinbo; Yu, Weina; Dai, Wenfang; Zhang, Jinjie; Qiu, Qiongfen; Ou, Changrong
2018-04-01
One common notion is emerging that gut eukaryotes are commensal or beneficial, rather than detrimental. To date, however, surprisingly few studies have attempted to discern the factors that govern the assembly of gut eukaryotes, despite growing interest in the relationship between dysbiosis of the gut microbiota and disease. Herein, we first explored how the gut eukaryotic microbiotas were assembled over shrimp postlarval to adult stages and over a disease progression. The gut eukaryotic communities changed markedly as healthy shrimp aged, and converged toward an adult-microbiota configuration. However, this adult-like stability was distorted by disease exacerbation. A null model showed that the deterministic processes that governed the gut eukaryotic assembly tended to become more important over healthy shrimp development, whereas this trend was inverted as the disease progressed. After ruling out the baseline change of gut eukaryotes over shrimp age, we identified disease-discriminatory taxa (the species level afforded the highest prediction accuracy) that are characteristic of shrimp health status. The profiles of these taxa contributed an overall 92.4% accuracy in predicting shrimp health status. Notably, this model can accurately diagnose the onset of shrimp disease. Interspecies interaction analysis depicted how the disease-discriminatory taxa interact with one another in sustaining shrimp health. Taken together, our findings offer novel insights into the underlying ecological processes that govern the assembly of gut eukaryotes over shrimp postlarval to adult stages and a disease progression. Intriguingly, the established model can quantitatively and accurately predict the incidence of shrimp disease.
Incidence and predictive factors of irritable bowel syndrome after acute diverticulitis in Korea.
Jung, Sungmo; Lee, Hyuk; Chung, Hyunsoo; Park, Jun Chul; Shin, Sung Kwan; Lee, Sang Kil; Lee, Yong Chan
2014-11-01
Evidence indicates that irritable bowel syndrome can occur after gastroenteritis. However, little is known about its incidence after diverticulitis. This study was designed to identify the incidence and risk factors of irritable bowel syndrome after diverticulitis in Korea. A survey regarding irritable bowel syndrome was performed in patients hospitalized for acute diverticulitis (cases) and patients hospitalized for non-gastrointestinal disorders (controls) between January 2007 and June 2012. Patients meeting the criteria for irritable bowel syndrome before hospitalization or with a history of bowel resection were excluded from the analysis. The response rate of telephone interviews was 28.1% (139 of 494) in cases and 73.3% (220 of 300) in controls. After exclusion, 102 patients in the cases and 205 patients in the controls were analyzed. At a median follow-up of 31 months, irritable bowel syndrome had developed in 13 patients (12.8%) in the cases and 11 patients (5.4%) in the controls, a statistically significant difference (p = 0.02). No clinical difference was seen between the two groups. No clinical factor was significant for the development of irritable bowel syndrome after diverticulitis, and no independent factor was associated with its development. Among the 13 patients who developed post-diverticulitis irritable bowel syndrome, the diarrhea-predominant type (53.9%) was most common. A higher incidence of irritable bowel syndrome after diverticulitis was evident in this study. However, no clinical feature predicting its development after diverticulitis was found. Further large-scale analysis will be needed to generalize this result.
International Nuclear Information System (INIS)
Kim, Jin Hyoung; Shin, Ji Hoon; Yoon, Hyun Ki; Chae, Eun Young; Myung, Seung Jae; Ko, Gi Young; Gwon, Dong Il; Sung, Kyu Bo
2009-01-01
To evaluate the incidence, predictive factors, and clinical outcomes of angiographically negative acute arterial upper and lower gastrointestinal (GI) bleeding. From 2001 to 2008, 143 consecutive patients who underwent angiography for acute arterial upper or lower GI bleeding were examined. The angiographies revealed a negative bleeding focus in 75 of 143 (52%) patients. The incidence of an angiographically negative outcome was significantly higher in patients with a stable hemodynamic status (p < 0.001) and in patients with lower GI bleeding (p = 0.032). A follow-up of the 75 patients (range: 0-72 months, mean: 8 ± 14 months) revealed that 60 of the 75 (80%) patients with a negative bleeding focus underwent conservative management only, and acute bleeding was controlled without rebleeding. Three of the 75 (4%) patients underwent exploratory surgery due to prolonged bleeding; however, no bleeding focus was detected. Rebleeding occurred in 12 of 75 (16%) patients. Of these, six patients experienced massive rebleeding and died of disseminated intravascular coagulation within four to nine hours after the rebleeding episode. Four of the remaining patients underwent a repeat angiography, and the last two underwent a surgical intervention to control the bleeding. Angiographically negative results are relatively common in patients with acute GI bleeding, especially in patients with a stable hemodynamic status or lower GI bleeding. Most patients with a negative bleeding focus experience spontaneous resolution of their condition.
Hanks, E.M.; Hooten, M.B.; Baker, F.A.
2011-01-01
Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steven T; Merrill, Ray M; Bikman, Benjamin T; Gibby, Wendell A
2015-01-01
Central adipose tissue is appreciated as a risk factor for cardiometabolic disorders. The purpose of this study was to determine the efficacy of a volumetric 3D analysis of central adipose tissue in predicting disease. Full-body computerized tomography (CT) scans were obtained from 1225 subjects (518 female, 707 male), aged 18-88 years. Percent central body fat (%cBF) was determined by quantifying the adipose tissue volume from the dome of the liver to the pubic symphysis. Calcium score was determined from the calcium content of the coronary arteries. Relationships between %cBF, BMI, and several cardiometabolic disorders were assessed, controlling for age, sex, and race. %cBF was significantly higher in those with type 2 diabetes and hypertension, but not in those with stroke or hypercholesterolemia. Simple anthropometric determination of BMI correlated with diabetes and hypertension as well as central body fat did. Calcium scoring significantly correlated with all measurements of cardiovascular health, including hypertension, hypercholesterolemia, and heart disease. Central body fat and BMI equally and highly predict the incidence of hypertension and type 2 diabetes.
He, Fei; Hu, Zhi-jian; Zhang, Wen-chang; Cai, Lin; Cai, Guo-xi; Aoyagi, Kiyoshi
2017-01-01
It remains challenging to forecast local, seasonal outbreaks of influenza. The goal of this study was to construct a computational model for predicting influenza incidence. We built two computational models including an Autoregressive Distributed Lag (ARDL) model and a hybrid model integrating ARDL with a Generalized Regression Neural Network (GRNN), to assess meteorological factors associated with temporal trends in influenza incidence. The modelling and forecasting performance of these two ...
Li, Jian; Gu, Jun-zhong; Mao, Sheng-hua; Xiao, Wen-jia; Jin, Hui-ming; Zheng, Ya-xu; Wang, Yong-ming; Hu, Jia-yu
2013-12-01
To establish a BP artificial neural network model for predicting the daily cases of infectious diarrhea in Shanghai. Data on the incidence of infectious diarrhea from 2005 to 2008 in Shanghai and on meteorological factors over the same period, including temperature, relative humidity, rainfall, atmospheric pressure, duration of sunshine and wind speed, were collected and analyzed with the MatLab R2012b software. Meteorological factors correlated with infectious diarrhea were screened by Spearman correlation analysis. Principal component analysis (PCA) was used to remove the multi-collinearities between meteorological factors. A Back-Propagation (BP) neural network was employed to establish prediction models of the daily infectious diarrhea incidence, using the artificial neural networks toolbox. The established models were evaluated through the fitting, predicting and forecasting processes. Spearman correlation analysis indicated that the incidence of infectious diarrhea had a highly positive correlation with the daily maximum temperature, minimum temperature, average temperature, minimum relative humidity and average relative humidity of the previous two days (P<0.05). The BP neural network model was established with 4 meteorological principal components, extracted by PCA, as inputs for training and prediction. The corresponding evaluation indices appeared to be 4.7811, 6.8921, 0.7918, 0.8418 and 5.8163, 7.8062, 0.7202, 0.8180, respectively. The mean error rate of the predicted values against the actual incidence in 2008 was 5.30%, and the forecasting precision reached 95.63%. Temperature and air pressure had an important impact on the incidence of infectious diarrhea. The BP neural network model had the advantages of low simulation and forecasting errors and a high forecasting hit rate, and could ideally predict and forecast the incidence of infectious diarrhea.
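The Spearman screening step used above is simply Pearson correlation applied to ranks. A minimal sketch in plain Python, with average ranks for ties (illustrative, not the MatLab code used in the study):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    assigning tied observations their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):        # average rank over the tie run
                r[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Any strictly increasing relationship gives rho = 1.0
print(spearman_rho([1, 2, 3, 4], [10, 100, 1000, 10000]))
```

Because it depends only on ranks, the statistic captures the monotone but non-linear weather-incidence relationships that motivate the neural network model.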
Takase, Hiroyuki; Sugiura, Tomonori; Kimura, Genjiro; Ohte, Nobuyuki; Dohi, Yasuaki
2015-01-01
Background: Although there is a close relationship between dietary sodium and hypertension, the concept that persons with relatively high dietary sodium are at increased risk of developing hypertension compared with those with relatively low dietary sodium has not been studied intensively in a cohort. Methods and Results: We conducted an observational study to investigate whether dietary sodium intake predicts future blood pressure and the onset of hypertension in the general population. Individual sodium intake was estimated by calculating 24-hour urinary sodium excretion from spot urine in 4523 normotensive participants who visited our hospital for a health checkup. After a baseline examination, they were followed for a median of 1143 days, with the end point being development of hypertension. During the follow-up period, hypertension developed in 1027 participants (22.7%). The risk of developing hypertension was higher in those with higher rather than lower sodium intake (hazard ratio 1.25, 95% CI 1.04 to 1.50). In multivariate Cox proportional hazards regression analysis, baseline sodium intake and the yearly change in sodium intake during the follow-up period (as continuous variables) correlated with the incidence of hypertension. Furthermore, both the yearly increase in sodium intake and baseline sodium intake showed significant correlations with the yearly increase in systolic blood pressure in multivariate regression analysis after adjustment for possible risk factors. Conclusions: Both relatively high levels of dietary sodium intake and gradual increases in dietary sodium are associated with future increases in blood pressure and the incidence of hypertension in the Japanese general population. PMID:26224048
Ayers, Emmeline; Shapiro, Miriam; Holtzer, Roee; Barzilai, Nir; Milman, Sofiya; Verghese, Joe
2017-05-01
Although depressive symptoms are widely recognized as a predictor of functional decline among older adults, little is known about the predictive utility of apathy in this population. We prospectively examined apathy symptoms as predictors of incident slow gait, frailty, and disability among non-demented, community-dwelling older adults. We examined 2 independent prospective cohort studies: the LonGenity study (N = 625, 53% women, mean age = 75.2 years) and the Central Control of Mobility in Aging (CCMA) study (N = 312, 57% women, mean age = 76.4 years). Individuals were recruited from 2008 to 2014. Apathy was assessed using 3 items from the Geriatric Depression Scale. Slow gait was defined as 1 standard deviation or more below age- and sex-adjusted mean values, frailty was defined using the Cardiovascular Health Study criteria, and disability was assessed with a well-validated disability scale. The prevalence of apathy was 20% in the LonGenity cohort and 26% in the CCMA cohort. The presence of apathy at baseline, independent of depressive symptoms (besides apathy), increased the risk of developing incident slow gait (hazard ratio [HR] = 2.10; 95% CI, 1.36-3.24; P = .001), frailty (HR = 2.86; 95% CI, 1.96-4.16; P < .001), and disability. Apathy is associated with increased risk of developing slow gait, frailty, and disability, independent of other established risk factors, in non-demented older adults. Apathy should be screened for as a potentially preventable cause of functional decline in clinical psychiatric settings. © Copyright 2017 Physicians Postgraduate Press, Inc.
Spontaneous regression of retinopathy of prematurity: incidence and predictive factors
Directory of Open Access Journals (Sweden)
Rui-Hong Ju
2013-08-01
Full Text Available AIM: To evaluate the incidence of spontaneous regression of changes in the retina and vitreous in the active stage of retinopathy of prematurity (ROP), and to identify the possible factors associated with the regression. METHODS: This was a retrospective, hospital-based study. The study consisted of 39 premature infants with mild ROP that showed spontaneous regression (Group A) and 17 with severe ROP who had been treated before naturally involuting (Group B), from August 2008 through May 2011. Data on gender, single or multiple pregnancy, gestational age, birth weight, weight gain from birth to the sixth week of life, use of oxygen in mechanical ventilation, total duration of oxygen inhalation, whether surfactant was given, need for and number of blood transfusions, 1-, 5- and 10-min Apgar scores, presence of bacterial, fungal or combined infection, hyaline membrane disease (HMD), patent ductus arteriosus (PDA), duration of stay in the neonatal intensive care unit (NICU) and duration of ROP were recorded. RESULTS: The incidence of spontaneous regression of ROP was 86.7% for stage 1, 57.1% for stage 2, and 5.9% for stage 3. Regression was detected in 100% of changes in zone III, 46.2% in zone II and 0% in zone I. The mean duration of ROP in the spontaneous regression group was 5.65±3.14 weeks, lower than that of the treated ROP group (7.34±4.33 weeks), but this difference was not statistically significant (P=0.201). GA, 1-min Apgar score, 5-min Apgar score, duration of NICU stay, postnatal age at initial screening and oxygen therapy longer than 10 days were significant predictive factors for the spontaneous regression of ROP (P<0.05). Retinal hemorrhage was the only independent predictive factor for the spontaneous regression of ROP (OR 0.030, 95% CI 0.001-0.775, P=0.035). CONCLUSION: This study showed that most stage 1 and 2 ROP, and changes in zone III, eventually regress spontaneously. Retinal hemorrhage is weakly inversely associated with spontaneous regression.
Ding, L; Li, J; Wang, C; Li, X; Su, Q; Zhang, G; Xue, F
2017-09-01
Prediction models of atrial fibrillation (AF) have been developed; however, no AF prediction model has been validated in a Chinese population. Therefore, we aimed to investigate the incidence of AF in an urban Han Chinese health check-up population, and to develop AF prediction models using behavioral, anthropometric, biochemical and electrocardiogram (ECG) markers, as well as visit-to-visit variability (VVV) in blood pressures, available in the routine health check-up. A total of 33 186 participants aged 45-85 years and free of AF at baseline were included in this cohort and followed up for incident AF at the annual routine health check-up. Cox regression models were used to develop the AF prediction models, and 10-fold cross-validation was used to test their discriminatory accuracy. We developed three prediction models to estimate the risk of incident AF: a simple model with age, sex, history of coronary heart disease (CHD) and hypertension as predictors; an ECG model with left high-amplitude waves and premature beats added; and a VVV model with age, sex, history of CHD and VVV in systolic and diastolic blood pressures as predictors. The calibration of our models ranged from 1.001 to 1.004 (P for Hosmer-Lemeshow test >0.05). The areas under the receiver operating characteristic curve were 78%, 80% and 82%, respectively, for predicting the risk of AF. In conclusion, we have identified predictors of incident AF and developed prediction models for AF with variables readily available in the routine health check-up.
Luo, Yi; Zhang, Tao; Li, Xiao-song
2016-05-01
To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (a fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fit of the predictive model. The forecasting results were compared with those of traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a fitting mean squared error (MSE) of 0.0011 and a forecasting MSE of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 for the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering performs better in forecasting the incidence of Hepatitis E.
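Fuzzy c-means, the clustering step underlying such models, alternates between updating soft memberships and membership-weighted cluster centers. A compact NumPy sketch under the usual fuzzifier m = 2 (illustrative, not the authors' implementation):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimise sum_ij u_ij^m * ||x_j - v_i||^2, with memberships u_ij
    summing to one over the clusters, by alternating updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # normalise memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)  # weighted means
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)                          # guard zero distances
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)                         # membership update
    return centers, U
```

In the fuzzy time series setting, the resulting cluster centers define the intervals of the universe of discourse, replacing the equal-width partition of traditional models.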
Incidence and predictive factors of spinal cord stimulation treatment after lumbar spine surgery
Directory of Open Access Journals (Sweden)
Vakkala M
2017-10-01
Full Text Available Merja Vakkala,1 Voitto Järvimäki,1 Hannu Kautiainen,2,3 Maija Haanpää,4,5 Seppo Alahuhta1 1Department of Anaesthesiology, Medical Research Center Oulu (MRC Oulu), Oulu University Hospital and University of Oulu, Oulu, 2Primary Health Care Unit, Kuopio University Hospital, Kuopio, 3Folkhälsan Research Center, Helsinki, 4Department of Neurosurgery, Helsinki University Hospital, 5Mutual Insurance Company Etera, Helsinki, Finland Introduction: Spinal cord stimulation (SCS) is recommended for the treatment of postsurgical chronic back and leg pain refractory to other treatments. We wanted to estimate the incidence and predictive factors of SCS treatment in our lumbar surgery cohort. Patients and methods: Three questionnaires (a self-made questionnaire, the Oswestry Low Back Pain Disability Questionnaire, and the Beck Depression Inventory) were sent to patients aged 18–65 years with no contraindications for the use of SCS who had undergone non-traumatic lumbar spine surgery in the Oulu University Hospital between June 2005 and May 2008. Patients who had a daily pain intensity of ≥5/10 with a predominant radicular component were interviewed by telephone. Results: After exclusions, 814 patients remained in this cohort. Of those, 21 patients had received SCS by the end of June 2015. Fifteen (71%) of these received benefit and continued with the treatment. Complications were rare. The number of patients who replied to the postal survey was 537 (66%). Eleven of them had undergone SCS treatment after their reply. Features predicting SCS implantation were daily or continuous pain, higher intensities of pain with predominant radicular pain, more severe pain-related functional disability, a higher prevalence of depressive symptoms, and reduced benefit from pain medication. The mean waiting time was 65 months (26–93 months). One hundred patients were interviewed by telephone. Fourteen seemed to be potential SCS candidates. From the eleven patients who
Holm, H; Nägga, K; Nilsson, E D; Ricci, F; Melander, O; Hansson, O; Bachus, E; Magnusson, M; Fedorowski, A
2017-07-01
Cerebral endothelial dysfunction occurs in a spectrum of neurodegenerative diseases. Whether biomarkers of microvascular endothelial dysfunction can predict dementia is largely unknown. We explored the longitudinal association of midregional pro-atrial natriuretic peptide (MR-proANP), C-terminal endothelin-1 (CT-proET-1) and midregional proadrenomedullin (MR-proADM) with dementia and its subtypes amongst community-dwelling older adults. A population-based cohort of 5347 individuals (men, 70%; age, 69 ± 6 years) without prevalent dementia provided plasma for determination of MR-proANP, CT-proET-1 and MR-proADM. Three hundred and seventy-three patients (7%) were diagnosed with dementia (120 Alzheimer's disease, 83 vascular, 102 mixed, and 68 other aetiology) over a period of 4.6 ± 1.3 years. Relations between baseline biomarker plasma concentrations and incident dementia were assessed using multivariable Cox regression analysis. Higher levels of MR-proANP were significantly associated with increased risk of all-cause and vascular dementia (hazard ratio [HR] per 1 SD: 1.20, 95% confidence interval [CI], 1.07-1.36, P = 0.002, and HR: 1.52, 95% CI: 1.21-1.89, respectively). The risk of dementia increased across the quartiles of MR-proANP (p for linear trend = 0.004; Q4, 145-1681 pmol/L vs. Q1, 22-77 pmol/L: HR: 1.83; 95% CI: 1.23-2.71) and was most pronounced for the vascular type (p for linear trend = 0.005: HR: 2.71; 95% CI: 1.14-6.46). Moreover, the two highest quartiles of CT-proET-1 predicted vascular dementia, with a cut-off value at 68 pmol/L (Q3-Q4, 68-432 pmol/L vs. Q1-Q2, 4-68 pmol/L; HR: 1.94; 95% CI: 1.12-3.36). Elevated levels of MR-proADM indicated no increased risk of developing dementia after adjustment for traditional risk factors. In conclusion, an elevated plasma concentration of MR-proANP is an independent predictor of all-cause and vascular dementia, and a pronounced increase in CT-proET-1 indicates a higher risk of vascular dementia.
Dikshit, Rajesh P; Yeole, B B; Nagrani, Rajini; Dhillon, P; Badwe, R; Bray, Freddie
2012-08-01
Increasing trends in the incidence of breast cancer have been observed in India, including Mumbai. These have likely stemmed from an increasing adoption of lifestyle factors more akin to those commonly observed in westernized countries. Analyses of breast cancer trends and corresponding estimation of the future burden are necessary to better plan rational cancer control programmes within the country. We used data from the population-based Mumbai Cancer Registry to study time trends in breast cancer incidence rates over 1976-2005, stratified into younger (25-49) and older (50-74) age groups. Age-period-cohort models were fitted and the net drift used as a measure of the estimated annual percentage change (EAPC). Age-period-cohort models and population projections were used to predict the age-adjusted rates and number of breast cancer cases circa 2025. Breast cancer incidence increased significantly among older women over the three decades (EAPC = 1.6%; 95% CI 1.1-2.0), while a smaller but still significant 1% annual increase was observed among younger women (EAPC = 1.0; 95% CI 0.2-1.8). Non-linear period and cohort effects were observed; a trends-based model predicted a close-to-doubling of incident cases by 2025, from a mean of 1300 cases per annum in 2001-2005 to over 2500 cases in 2021-2025. The incidence of breast cancer has increased in Mumbai during the last two to three decades, with increases greater among older women. The number of breast cancer cases is predicted to double to over 2500 cases per annum, the vast majority affecting older women. Copyright © 2012 Elsevier Ltd. All rights reserved.
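The EAPC reported in such registry analyses comes from a log-linear model of the rates: fit log(rate) = a + b·year and report 100·(e^b − 1). A minimal sketch in plain Python (the registry analysis additionally uses full age-period-cohort models):

```python
import math

def eapc(rates):
    """Estimated annual percentage change from a least-squares fit of
    log(rate) on calendar year: EAPC = 100 * (exp(slope) - 1)."""
    n = len(rates)
    ys = [math.log(r) for r in rates]
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(range(n), ys)) \
        / sum((x - mean_x) ** 2 for x in range(n))
    return 100.0 * (math.exp(slope) - 1.0)

# Rates growing by exactly 1.6% a year recover an EAPC of 1.6
print(round(eapc([25 * 1.016 ** t for t in range(30)]), 3))
```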
Soluble CD163 predicts incident chronic lung, kidney and liver disease in HIV infection
DEFF Research Database (Denmark)
Kirkegaard-Klitbo, Ditte M; Mejer, Niels; Knudsen, Troels B
2017-01-01
.46] and incident chronic kidney disease (aHR, 10.94; 95% CI: 2.32; 51.35), when compared with lowest quartiles. Further, (every 1 mg) increase in plasma sCD163 was positively correlated with incident liver disease (aHR, 1.12; 95% CI: 1.05; 1.19). The sCD163 level was not associated with incident cancer......, cardiovascular disease or diabetes mellitus. CONCLUSION: sCD163 was independently associated with incident chronic kidney disease, chronic lung disease and liver disease in treated HIV-1-infected individuals, suggesting that monocyte/macrophage activation may be involved in the pathogenesis of non...
Nine-year incident diabetes is predicted by fatty liver indices: the French D.E.S.I.R. study.
Balkau, Beverley; Lange, Celine; Vol, Sylviane; Fumeron, Frederic; Bonnet, Fabrice
2010-06-07
Fatty liver is known to be linked with insulin resistance, alcohol intake, diabetes and obesity. Biopsy and even scan-assessed fatty liver are not always feasible in clinical practice. This report evaluates the predictive ability of two recently published markers of fatty liver: the Fatty Liver Index (FLI) and the NAFLD fatty liver score (NAFLD-FLS), for 9-year incident diabetes, in the French general-population cohort Data from an Epidemiological Study on the Insulin Resistance syndrome (D.E.S.I.R.). At baseline, there were 1861 men and 1950 women, non-diabetic, aged 30 to 65 years. Over the follow-up, 203 incident diabetes cases (140 men, 63 women) were identified by diabetes treatment or fasting plasma glucose ≥ 7.0 mmol/l. The FLI includes: BMI, waist circumference, triglycerides and gamma glutamyl transferase; the NAFLD-FLS: the metabolic syndrome, diabetes, insulin, alanine aminotransferase, and aspartate aminotransferase. Logistic regression was used to determine the odds ratios for incident diabetes associated with categories of the fatty liver indices. In comparison to those in the lowest FLI category, the odds ratio for a FLI ≥ 70 was 9.33 (5.05-17.25) for men and 36.72 (17.12-78.76) for women; these were attenuated to 3.43 (1.61-7.28) and 11.05 (4.09-29.81) after adjusting for baseline glucose, insulin, hypertension, alcohol intake, physical activity, smoking and family antecedents of diabetes; odds ratios increased to 4.71 (1.68-13.16) and 22.77 (6.78-76.44) in those without an excessive alcohol intake. The NAFLD-FLS also predicted incident diabetes, but with odds ratios much lower in women and similar in men. These fatty liver indexes are simple clinical tools for evaluating the extent of liver fat and they are predictive of incident diabetes. Physicians should screen for diabetes in patients with fatty liver.
Behrendt, Silke; Bühringer, Gerhard; Höfler, Michael; Lieb, Roselind; Beesdo-Baum, Katja
2017-10-01
Comorbid internalizing mental disorders in alcohol use disorders (AUD) can be understood as putative independent risk factors for AUD or as expressions of underlying shared psychopathology vulnerabilities. However, it remains unclear whether: 1) specific latent internalizing psychopathology risk-profiles predict AUD-incidence and 2) specific latent internalizing comorbidity-profiles in AUD predict AUD-stability. To investigate baseline latent internalizing psychopathology risk profiles as predictors of subsequent AUD-incidence and -stability in adolescents and young adults. Data from the prospective-longitudinal EDSP study (baseline age 14-24 years) were used. The study design included up to three follow-up assessments over up to ten years. DSM-IV mental disorders were assessed with the DIA-X/M-CIDI. To investigate risk-profiles and their associations with AUD-outcomes, latent class analysis with auxiliary outcome variables was applied. AUD-incidence: a 4-class model (N=1683) was identified (classes: normative-male [45.9%], normative-female [44.2%], internalizing [5.3%], nicotine dependence [4.5%]). Compared to the normative-female class, all other classes were associated with a higher risk of subsequent incident alcohol dependence (p<0.05). AUD-stability: a 3-class model (N=1940) was identified with only one class (11.6%) with high probabilities for baseline AUD. This class was further characterized by elevated substance use disorder (SUD) probabilities and predicted any subsequent AUD (OR 8.5, 95% CI 5.4-13.3). An internalizing vulnerability may constitute a pathway to AUD incidence in adolescence and young adulthood. In contrast, no indication of a role of internalizing comorbidity profiles in AUD-stability was found, which may indicate a limited importance of such profiles - in contrast to SUD-related profiles - in AUD stability. Copyright © 2017 Elsevier B.V. All rights reserved.
Castro, Clara; Antunes, Luís; Lunet, Nuno; Bento, Maria José
2016-09-01
Decision making towards cancer prevention and control requires monitoring of trends in cancer incidence and accurate estimation of its burden in different settings. We aimed to estimate the number of incident cases in northern Portugal for 2015 and 2020 (all cancers except nonmelanoma skin and for the 15 most frequent tumours). Cancer cases diagnosed in 1994-2009 were collected by the North Region Cancer Registry of Portugal (RORENO) and corresponding population figures were obtained from Statistics Portugal. JoinPoint regression was used to analyse incidence trends. Population projections until 2020 were derived by RORENO. Predictions were performed using the Poisson regression models proposed by Dyba and Hakulinen. The number of incident cases is expected to increase by 18.7% in 2015 and by 37.6% in 2020, with lower increments among men than among women. For most cancers considered, the number of cases will keep rising up to 2020, although decreasing trends of age-standardized rates are expected for some tumours. Cervix was the only cancer with a decreasing number of incident cases in the entire period. Thyroid and lung cancers were among those with the steepest increases in the number of incident cases expected for 2020, especially among women. In 2020, the top five cancers are expected to account for 82 and 62% of all cases diagnosed in men and women, respectively. This study contributes to a broader understanding of cancer burden in the north of Portugal and provides the basis for keeping population-based incidence estimates up to date.
Han, Xu; Wang, Jing; Li, Yaru; Hu, Hua; Li, Xiulou; Yuan, Jing; Yao, Ping; Miao, Xiaoping; Wei, Sheng; Wang, Youjie; Liang, Yuan; Zhang, Xiaomin; Guo, Huan; Pan, An; Yang, Handong; Wu, Tangchun; He, Meian
2018-01-01
The aim of this study was to develop a new risk score system to predict 5-year incident diabetes risk among a middle-aged and older Chinese population. This prospective study included 17,690 individuals derived from the Dongfeng-Tongji cohort. Participants were recruited in 2008 and were followed until October 2013. Incident diabetes was defined as self-reported clinician-diagnosed diabetes, fasting glucose ≥7.0 mmol/l, or the use of insulin or an oral hypoglycemic agent. A total of 1390 incident diabetic cases were diagnosed during the follow-up period. β-Coefficients were derived from a Cox proportional hazards regression model and were used to calculate the risk score. The diabetes risk score includes BMI, fasting glucose, hypertension, hyperlipidemia, current smoking status, and family history of diabetes. The β-coefficients of these variables ranged from 0.139 to 1.914, and the optimal cutoff value was 1.5. The diabetes risk score was calculated by multiplying the β-coefficients of the significant variables by 10 and rounding to the nearest integer. The score ranges from 0 to 36. The area under the receiver operating characteristic curve of the score was 0.751. At the optimal cutoff value of 15, the sensitivity and specificity were 65.6 and 72.9%, respectively. Based upon these risk factors, this model had the highest discrimination compared with several commonly used diabetes prediction models. The newly established diabetes risk score with six parameters appears to be a reliable screening tool to predict 5-year risk of incident diabetes in a middle-aged and older Chinese population.
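The scoring rule described in the abstract above (multiply each Cox β-coefficient by 10 and round to the nearest integer, then sum the points of the risk factors present) can be sketched as follows. The coefficient values below are illustrative placeholders, not the published Dongfeng-Tongji model coefficients.

```python
# Sketch of the scoring rule described above: each Cox regression
# beta-coefficient is multiplied by 10 and rounded to the nearest
# integer to give that factor's points. Coefficients here are
# illustrative placeholders, not the published model.
ILLUSTRATIVE_BETAS = {
    "high_bmi": 0.28,
    "high_fasting_glucose": 1.9,
    "hypertension": 0.40,
    "hyperlipidemia": 0.30,
    "current_smoker": 0.14,
    "family_history": 0.55,
}

def points(beta):
    """Convert a beta-coefficient to integer points (beta * 10, rounded)."""
    return round(beta * 10)

def risk_score(factors, betas=ILLUSTRATIVE_BETAS):
    """Sum the points of every risk factor that is present."""
    return sum(points(betas[name]) for name, present in factors.items() if present)

score = risk_score({
    "high_bmi": True,              # 3 points
    "high_fasting_glucose": True,  # 19 points
    "hypertension": False,
    "hyperlipidemia": False,
    "current_smoker": True,        # 1 point
    "family_history": False,
})
print(score)  # 23
```

Note that with β-coefficients spanning 0.139 to 1.914 as reported, individual factors would contribute between 1 and 19 points under this rule.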
Liu, L; Luan, R S; Yin, F; Zhu, X P; Lü, Q
2016-01-01
Hand, foot and mouth disease (HFMD) is an infectious disease caused by enteroviruses, which usually occurs in children aged <5 years. In China, the HFMD situation is worsening, with an increasing number of cases nationwide. Therefore, monitoring and predicting HFMD incidence are urgently needed to make control measures more effective. In this study, we applied an autoregressive integrated moving average (ARIMA) model to forecast HFMD incidence in Sichuan province, China. HFMD infection data from January 2010 to June 2014 were used to fit the ARIMA model. The coefficient of determination (R²), normalized Bayesian Information Criterion (BIC) and mean absolute percentage error (MAPE) were used to evaluate the goodness-of-fit of the constructed models. The fitted ARIMA model was applied to forecast the incidence of HFMD from April to June 2014. The goodness-of-fit test selected the optimum general multiplicative seasonal ARIMA (1,0,1) × (0,1,0)12 model (R² = 0.692, MAPE = 15.982, BIC = 5.265), which also showed non-significant autocorrelations in the residuals of the model (P = 0.893). The forecast incidence values of the ARIMA (1,0,1) × (0,1,0)12 model from July to December 2014 were 4103-9987, which represent approximate forecasts. The ARIMA model could be applied to forecast the HFMD incidence trend and provide support for HFMD prevention and control. Further observations should be added continually to the time sequence, and the parameters of the models should be adjusted, because HFMD incidence will not be absolutely stationary in the future.
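The core of the multiplicative seasonal model above (seasonal differencing at period 12, plus a low-order autoregressive term) can be illustrated in miniature. The sketch below applies lag-12 differencing and fits a single AR(1) coefficient by least squares in pure Python on toy data; a real analysis would use a full ARIMA implementation rather than this hand-rolled simplification.

```python
# Minimal illustration of the core of a seasonal model like
# ARIMA(1,0,0)x(0,1,0)12: difference the series at lag 12 to remove
# yearly seasonality, fit an AR(1) slope to the differenced series by
# least squares, then forecast by adding the AR prediction back onto
# the value from 12 months earlier. Toy data, not the Sichuan series.

def seasonal_difference(series, period=12):
    return [series[t] - series[t - period] for t in range(period, len(series))]

def fit_ar1(diffed):
    """Least-squares AR(1) slope through the origin: y[t] ~ phi * y[t-1]."""
    num = sum(diffed[t] * diffed[t - 1] for t in range(1, len(diffed)))
    den = sum(y * y for y in diffed[:-1])
    return num / den

def forecast_next(series, phi, period=12):
    last_diff = series[-1] - series[-1 - period]
    return series[-period] + phi * last_diff

# Toy monthly series: a June spike each year plus slow yearly growth.
series = [10 + 5 * (m % 12 == 5) + 0.5 * (m // 12) for m in range(48)]
phi = fit_ar1(seasonal_difference(series))
print(round(forecast_next(series, phi), 2))  # 12.0
```

On this deterministic toy series the seasonal difference is a constant 0.5 per year, so the one-step forecast exactly continues the pattern.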
Improving predictions of swash dynamics in XBeach: The role of groupiness and incident-band runup
Roelvink, D.; McCall, Robert; Mehvar, Seyedabdolhossein; Nederhoff, Kees; Dastgheib, Ali
2018-01-01
In predicting storm impacts on sandy coasts, possibly with structures, accurate runup and overtopping simulation is an important aspect. Recent investigations (Stockdon et al., 2014; Palmsten and Splinter, 2016) show that despite accurate predictions of the morphodynamics of dissipative sandy
Abbasi, Ali; Bakker, Stephan J. L.; Corpeleijn, Eva; van der A, Daphne L.; Gansevoort, Ron T.; Gans, Rijk O. B.; Peelen, Linda M.; van der Schouw, Yvonne T.; Stolk, Ronald P.; Navis, Gerjan; Spijkerman, Annemieke M. W.; Beulens, Joline W. J.
2012-01-01
Background: Liver function tests might predict the risk of type 2 diabetes. An independent study evaluating utility of these markers compared with an existing prediction model is yet lacking. Methods and Findings: We performed a case-cohort study, including random subcohort (6.5%) from 38,379
Takiyama, Aki; Tanaka, Toshiaki; Yamamoto, Yoko; Hata, Keisuke; Ishihara, Soichiro; Nozawa, Hiroaki; Kawai, Kazushige; Kiyomatsu, Tomomichi; Nishikawa, Takeshi; Otani, Kensuke; Sasaki, Kazuhito; Watanabe, Toshiaki
2017-10-01
Few studies have evaluated the risk of postoperative colorectal neoplasms stratified by the nature of the primary colorectal cancer (CRC). In this study, we assessed this risk on the basis of the microsatellite (MS) status of the primary CRC. We retrospectively reviewed 338 patients with CRC and calculated the risk of neoplasms during postoperative surveillance colonoscopy in association with the MS status of the primary CRC. A propensity score method was applied. We identified a higher incidence of metachronous rectal neoplasms after the resection of MS-stable CRC than MS-instable CRC (adjusted HR 5.74, p=0.04). We also observed a higher incidence of colorectal tubular adenoma in patients with MSS CRC (adjusted hazard ratio 7.09). The MS status of the primary colorectal cancer influenced the risk of postoperative colorectal neoplasms. Copyright© 2017, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-01-01
This paper presents an overview of vertically integrated comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall predictive models consist of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model and a residual stress model, which can be used for predicting the mechanical properties of parts additively manufactured by directed energy deposition with blown powder, as well as by other additive manufacturing processes. Critical governing equations of each model and how the various modules are connected are illustrated. Various illustrative results, along with corresponding experimental validation, are presented to demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.
Saldivia, Sandra; Vicente, Benjamin; Marston, Louise; Melipillán, Roberto; Nazareth, Irwin; Bellón-Saameño, Juan; Xavier, Miguel; Maaroos, Heidi Ingrid; Svab, Igor; Geerlings, M-I; King, Michael
2014-03-01
The reduction of major depression incidence is a public health challenge. We aimed to develop an algorithm to estimate the risk of occurrence of major depression in patients attending primary health centers (PHC). We conducted a prospective cohort study of a random sample of 2832 patients attending PHC centers in Concepción, Chile, with evaluations at baseline, six and twelve months. Thirty-nine known risk factors for depression were measured to build a model using logistic regression. The algorithm was developed in 2133 patients not depressed at baseline and compared with risk algorithms developed in a sample of 5216 European primary care attenders. The main outcome was the incidence of major depression in the follow-up period. The cumulative incidence of depression during the 12-month follow-up in Chile was 12%. Eight variables were identified. Four corresponded to the patient (gender, age, history of depression and educational level) and four to the patient's current situation (physical and mental health, satisfaction with their situation at home and satisfaction with the relationship with their partner). The C-index, used to assess the discriminating power of the final model, was 0.746 (95% confidence interval [CI] 0.707-0.785), slightly lower than that of the equation obtained in European (0.790; 95% CI 0.767-0.813) and Spanish attenders (0.82; 95% CI 0.79-0.84). Four of the factors identified in the risk algorithm are not modifiable. The other two factors are directly associated with the primary support network (family and partner). This risk algorithm for the incidence of major depression provides a tool that can guide efforts towards the design, implementation and evaluation of the effectiveness of interventions to prevent major depression.
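The C-index reported above measures how often the model assigns a higher predicted risk to a person who developed depression than to one who did not. A minimal sketch for a binary outcome follows; the scores and outcomes are hypothetical, not data from the study.

```python
# Minimal C-index (concordance) computation for a binary outcome: the
# fraction of (case, non-case) pairs in which the case received the
# higher predicted risk; ties count one half. Hypothetical scores.
from itertools import product

def c_index(scores, outcomes):
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = concordant = 0
    for case_score, control_score in product(cases, controls):
        pairs += 1
        if case_score > control_score:
            concordant += 1
        elif case_score == control_score:
            concordant += 0.5
    return concordant / pairs

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1, 0, 1, 0, 0, 1]
print(c_index(scores, outcomes))  # 5 of 9 pairs concordant
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why the study's 0.746 indicates moderate discriminating power.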
Obtaining reliable Likelihood Ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
It is standard practice by researchers and the default option in many statistical programs to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed...... of the quasirandom draws in the simulation of the restricted likelihood. Again this is not standard in research or statistical programs. The paper therefore recommends using fully antithetic draws replicating the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood...... parameters this practice is very likely to cause misleading test results for the number of draws usually used today. The paper shows that increasing the number of draws is a very inefficient solution strategy requiring very large numbers of draws to ensure against misleading test statistics. The paper shows...
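The fully antithetic draws recommended above rest on a classic variance-reduction property: pairing each uniform draw u with 1 - u makes the two function evaluations negatively correlated for a monotone integrand, so the paired estimator has lower variance at the same cost. The toy sketch below demonstrates the principle on a simple expectation; it is not the paper's mixed-logit likelihood setting.

```python
# Antithetic sampling illustration: estimate E[exp(U)] for U~Uniform(0,1)
# (true value e - 1) with plain Monte Carlo vs. antithetic pairs, and
# compare the empirical mean squared errors over many replications.
import math
import random

def plain_mc(n, rng):
    return sum(math.exp(rng.random()) for _ in range(n)) / n

def antithetic_mc(n, rng):
    # n total function evaluations, drawn as n//2 antithetic pairs
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += math.exp(u) + math.exp(1.0 - u)
    return total / (2 * (n // 2))

rng = random.Random(42)
truth = math.e - 1.0
reps = 500
var_plain = sum((plain_mc(100, rng) - truth) ** 2 for _ in range(reps)) / reps
var_anti = sum((antithetic_mc(100, rng) - truth) ** 2 for _ in range(reps)) / reps
print(var_anti < var_plain)  # antithetic estimator has smaller error
```

For this integrand the antithetic pair variance is roughly two orders of magnitude below the plain estimator's, which is the effect the paper exploits when simulating the restricted likelihood.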
Directory of Open Access Journals (Sweden)
Jiangping Wen
Full Text Available To develop a new non-invasive risk score for predicting incident diabetes in a rural Chinese population. Data from the Handan Eye Study conducted from 2006-2013 were utilized as part of this analysis. The present study utilized data generated from 4132 participants who were ≥30 years of age. A non-invasive risk model was derived using two-thirds of the sample cohort (selected randomly) using stepwise logistic regression. The model was subsequently validated using data from individuals from the final third of the sample cohort. In addition, a simple point system for incident diabetes was generated according to the procedures described in the Framingham Study. Incident diabetes was defined as follows: (1) fasting plasma glucose (FPG) ≥ 7.0 mmol/L; or (2) hemoglobin A1c (HbA1c) ≥ 6.5%; or (3) self-reported diagnosis of diabetes or use of anti-diabetic medications during the follow-up period. The simple non-invasive risk score included age (8 points), body mass index (BMI) (3 points), waist circumference (WC) (7 points), and family history of diabetes (9 points). The score ranged from 0 to 27 and the area under the receiver operating characteristic curve (AUC) of the score was 0.686 in the validation sample. At the optimal cutoff value (which was 9), the sensitivity and specificity were 74.32% and 58.82%, respectively. Using information based upon age, BMI, WC, and family history of diabetes, we developed a simple new non-invasive risk score for predicting diabetes onset in a rural Chinese population, using information from individuals aged 30 years and older. The new risk score performed better in the prediction of incident diabetes than most of the existing risk scores developed in Western and Asian countries. This score system will aid in the identification of individuals who are at risk of developing incident diabetes in rural China.
Stratification by interferon-γ release assay level predicts risk of incident TB.
Winje, Brita Askeland; White, Richard; Syre, Heidi; Skutlaberg, Dag Harald; Oftung, Fredrik; Mengshoel, Anne Torunn; Blix, Hege Salvesen; Brantsæter, Arne Broch; Holter, Ellen Kristine; Handal, Nina; Simonsen, Gunnar Skov; Afset, Jan Egil; Bakken Kran, Anne Marte
2018-04-05
Targeted testing and treatment of latent TB infection (LTBI) are priorities on the global health agenda, but LTBI management remains challenging. We aimed to evaluate the prognostic value of the QuantiFERON TB-Gold (QFT) test for incident TB, focusing on the interferon (IFN)-γ level, when applied in routine practice in a low TB incidence setting. In this large population-based prospective cohort, we linked QFT results in Norway (1 January 2009-30 June 2014) with national registry data (Norwegian Surveillance System for Infectious Diseases, Norwegian Prescription Database, Norwegian Patient Registry and Statistics Norway) to assess the prognostic value of QFT for incident TB. Participants were followed until 30 June 2016. We used restricted cubic splines to model non-linear relationships between IFN-γ levels and TB, and applied these findings to a competing risk model. The prospective analyses included 50 389 QFT results from 44 875 individuals, of whom 257 developed TB. Overall, 22% (n=9878) of QFT results were positive. TB risk increased with the IFN-γ level until a plateau level, above which further increase was not associated with additional prognostic information. The HRs for TB were 8.8 (95% CI 4.7 to 16.5), 19.2 (95% CI 11.6 to 31.6) and 31.3 (95% CI 19.8 to 49.5) times higher across increasing IFN-γ levels from 0.35 to 4.00 IU/mL, compared with negative tests. The risk of TB thus rose with rising IFN-γ concentrations, indicating that IFN-γ levels may be used to guide targeted treatment of LTBI. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
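The restricted cubic splines used above are cubic between interior knots but constrained to be linear beyond the boundary knots, which keeps the fitted IFN-γ/risk curve stable in the sparse tails. A minimal sketch of the standard (Harrell-style) basis construction follows; the knot locations are illustrative assumptions, not the study's.

```python
# Restricted cubic spline basis: with k sorted knots t[0..k-1], each of
# the k-2 non-linear terms is a combination of truncated cubics chosen
# so the overall function is linear beyond the boundary knots. Knot
# placement below is illustrative, not taken from the study.

def rcs_basis(x, knots):
    """Return the k-2 non-linear basis terms for value x."""
    t = knots
    k = len(t)

    def p3(v):  # truncated cubic: max(v, 0)^3
        return max(v, 0.0) ** 3

    terms = []
    for j in range(k - 2):
        term = (p3(x - t[j])
                - p3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + p3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        terms.append(term)
    return terms

knots = [0.35, 1.0, 4.0, 10.0]  # illustrative IFN-gamma knots (IU/mL)
design_row = [1.0, 2.0] + rcs_basis(2.0, knots)  # intercept, linear term, splines
print(len(design_row))  # 4
```

The regression then estimates one coefficient per column of such rows; because the basis is linear past the last knot, extreme IFN-γ values cannot produce runaway cubic extrapolation.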
Higher levels of albuminuria within the normal range predict incident hypertension.
Forman, John P; Fisher, Naomi D L; Schopick, Emily L; Curhan, Gary C
2008-10-01
Higher levels of albumin excretion within the normal range are associated with cardiovascular disease in high-risk individuals. Whether incremental increases in urinary albumin excretion, even within the normal range, are associated with the development of hypertension in low-risk individuals is unknown. This study included 1065 postmenopausal women from the first Nurses' Health Study and 1114 premenopausal women from the second Nurses' Health Study who had an albumin/creatinine ratio within the normal range and did not have diabetes or hypertension. Among the older women, 271 incident cases of hypertension occurred during 4 yr of follow-up, and among the younger women, 296 incident cases of hypertension occurred during 8 yr of follow-up. Cox proportional hazards regression was used to examine prospectively the association between the albumin/creatinine ratio and incident hypertension after adjustment for age, body mass index, estimated GFR, baseline BP, physical activity, smoking, and family history of hypertension. Participants who had an albumin/creatinine ratio in the highest quartile (4.34 to 24.17 mg/g for older women and 3.68 to 23.84 mg/g for younger women) were more likely to develop hypertension than those who had an albumin/creatinine ratio in the lowest quartile (hazard ratio 1.76 [95% confidence interval 1.21 to 2.56] and hazard ratio 1.35 [95% confidence interval 0.97 to 1.91] for older and younger women, respectively). Higher albumin/creatinine ratios, even within the normal range, are independently associated with increased risk for development of hypertension among women without diabetes. The definition of normal albumin excretion should be reevaluated.
Chronic Bronchitis Before Age 50 Years Predicts Incident Airflow Limitation and Mortality Risk
Guerra, Stefano; Sherrill, Duane L.; Venker, Claire; Ceccato, Christina M.; Halonen, Marilyn; Martinez, Fernando D.
2015-01-01
Background Previous studies on the relation of chronic bronchitis to incident airflow limitation and all-cause mortality have provided conflicting results, with positive findings reported mainly by studies that included populations of young adults. We sought to determine whether having chronic cough and sputum production in the absence of airflow limitation is associated with onset of airflow limitation, all-cause mortality, and serum levels of CRP and IL-8, and whether subjects' age influences these relations. Methods We identified 1412 participants in the long-term Tucson Epidemiological Study of Airway Obstructive Disease who at enrollment (1972–73) were 21–80 years old and had FEV1/FVC≥70% and no asthma. Chronic bronchitis was defined as cough and phlegm production on most days for ≥three months in ≥two consecutive years. Incidence of airflow limitation was defined as the first follow-up survey with FEV1/FVC<70%. Serum IL-8 and CRP levels were measured in cryopreserved samples from the enrollment survey. Results After adjusting for covariates, chronic bronchitis at enrollment significantly increased the risk of incident airflow limitation and all-cause mortality among subjects <50 years old (Hazard Ratios, 95% CI: 2.2, 1.3–3.8; and 2.2, 1.3–3.8; respectively), but not among subjects ≥50 years old (0.9, 0.6–1.4; and 1.0, 0.7–1.3). Chronic bronchitis was associated with increased IL-8 and CRP serum levels only among subjects <50 years old. Conclusions Among adults <50 years old, chronic bronchitis unaccompanied by airflow limitation may represent an early marker of susceptibility to the effects of cigarette smoking on systemic inflammation and long-term risk for chronic obstructive pulmonary disease and all-cause mortality. PMID:19581277
Vasunilashorn, Sarinnapha M; Dillon, Simon T; Inouye, Sharon K; Ngo, Long H; Fong, Tamara G; Jones, Richard N; Travison, Thomas G; Schmitt, Eva M; Alsop, David C; Freedman, Steven D; Arnold, Steven E; Metzger, Eran D; Libermann, Towia A; Marcantonio, Edward R
2017-08-01
To examine associations between the inflammatory marker C-reactive protein (CRP), measured preoperatively and on postoperative day 2 (POD2), and delirium incidence, duration, and feature severity. Prospective cohort study. Two academic medical centers. Adults aged 70 and older undergoing major noncardiac surgery (N = 560). Plasma CRP was measured using enzyme-linked immunosorbent assay. Delirium was assessed from Confusion Assessment Method (CAM) interviews and chart review. Delirium duration was measured according to the number of hospital days with delirium. Delirium feature severity was defined as the sum of CAM-Severity (CAM-S) scores on all postoperative hospital days. Generalized linear models were used to examine independent associations between CRP (preoperatively and POD2 separately) and delirium incidence, duration, and feature severity; prolonged hospital length of stay (LOS, >5 days); and discharge disposition. Postoperative delirium occurred in 24% of participants, 12% had 2 or more delirium days, and the mean ± standard deviation sum CAM-S was 9.3 ± 11.4. After adjusting for age, sex, surgery type, anesthesia route, medical comorbidities, and postoperative infectious complications, participants with preoperative CRP of 3 mg/L or greater had a risk of delirium that was 1.5 times as great (95% confidence interval (CI) = 1.1-2.1) as that of those with CRP less than 3 mg/L, had 0.4 more delirium days, and had more severe delirium (sum CAM-S 3.6 points higher). Participants in the highest quartile of POD2 CRP had a greater risk of delirium (95% CI = 1.0-2.4) than those in the lowest quartile (≤127.53 mg/L), had 0.2 more delirium days, and had more severe delirium (sum CAM-S 4.5 points higher). CRP was thus associated with delirium incidence, duration, and feature severity. CRP may be useful to identify individuals who are at risk of developing delirium. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
David, Vlad Laurentiu; Ercisli, Muhammed Furkan; Rogobete, Alexandru Florin; Boia, Eugen S; Horhat, Razvan; Nitu, Razvan; Diaconu, Mircea M; Pirtea, Laurentiu; Ciuca, Ioana; Horhat, Delia; Horhat, Florin George; Licker, Monica; Popovici, Sonia Elena; Tanasescu, Sonia; Tataru, Calin
2017-06-01
Several diagnostic methods have been used to evaluate and monitor the pro-inflammatory status and the incidence of sepsis in critically ill patients. One such recent method is based on investigating genetic polymorphisms and determining the molecular and genetic links between them, as well as other sepsis-associated pathophysiologies. Identification of genetic polymorphisms in critically ill patients with sepsis could become a revolutionary method for evaluating and monitoring these patients. Similarly, the complications, as well as the high costs associated with the management of patients with sepsis, can be significantly reduced by early initiation of intensive care.
He, Fei; Hu, Zhi-Jian; Zhang, Wen-Chang; Cai, Lin; Cai, Guo-Xi; Aoyagi, Kiyoshi
2017-08-03
It remains challenging to forecast local, seasonal outbreaks of influenza. The goal of this study was to construct a computational model for predicting influenza incidence. We built two computational models, an Autoregressive Distributed Lag (ARDL) model and a hybrid model integrating ARDL with a Generalized Regression Neural Network (GRNN), to assess meteorological factors associated with temporal trends in influenza incidence. The modelling and forecasting performance of these two models were compared using observations collected between 2006 and 2015 in Nagasaki Prefecture, Japan. In both the training and forecasting stages, the hybrid model showed lower error rates, including a lower residual mean square error (RMSE) and mean absolute error (MAE), than the ARDL model. The lags of log-incidence, weekly average barometric pressure, and weekly average air temperature were 4, 1, and 3 weeks, respectively, in the ARDL model. The ARDL-GRNN hybrid model can serve as a tool to better understand the characteristics of influenza epidemics and facilitate their prevention and control.
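A GRNN, as used in the hybrid model above, amounts to kernel-weighted regression: the prediction for a new input is a Gaussian-weighted average of the training targets, governed by a single smoothing parameter. The sketch below uses toy one-dimensional data, not the Nagasaki influenza series.

```python
# Minimal generalized regression neural network (GRNN): the output is a
# Gaussian-kernel-weighted average of the training targets, with one
# smoothing parameter sigma. Toy 1-D data for illustration only.
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0.0, 1.0, 4.0, 9.0]  # y = x^2 on the grid

# Small sigma: prediction at a training point reproduces its target.
print(round(grnn_predict(1.0, train_x, train_y, sigma=0.1), 3))  # 1.0
```

The smoothing parameter trades off fidelity to individual observations against smoothness, which is the single tuning knob a GRNN exposes; in the hybrid model, the ARDL residual structure feeds such a smoother rather than raw incidence.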
Djossou, Félix; Vesin, Guillaume; Walter, Gaelle; Epelboin, Loïc; Mosnier, Emilie; Bidaud, Bastien; Abboud, Philippe; Okandze, Antoine; Mattheus, Severine; Elenga, Narcisse; Demar, Magalie; Malvy, Denis; Nacher, Mathieu
2016-02-01
The objective of the study was to determine the incidence of transaminase elevation during dengue and its predictive factors. In 2013, a longitudinal study was performed using data from all cases of dengue seen in Cayenne Hospital. Cox proportional hazards modeling was used. Signs of major transaminase elevation were defined as an increase in aspartate aminotransferase (AST) or alanine aminotransferase (ALT) concentration over 10 times the normal value (10N). There were 1574 patients and 13 249 person-days of follow-up. The incidence rate for signs of transaminase elevation (10N) was 0.55 per 100 person-days. Six patients had major transaminase elevation with AST >1000 units (0.43 per 1000 patient-days), and 73 patients (4.6%) developed transaminase elevation with AST >10N. The variables independently associated with major transaminase elevation were hyponatremia, low platelets, dehydration, hematocrit increase, food intolerance, positive nonstructural protein 1 (NS1), age over 15 years and the notion of paracetamol intake. Although transaminase elevation was very frequent, the incidence of major transaminase elevation was lower than reported elsewhere, perhaps because of good access to care or because of the particular serotype causing this epidemic. Patients with transaminase elevation tended to be older, to have more severe disease, and to be taking paracetamol. © The Author 2016. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Jagodzinski, Annika; Havulinna, Aki S; Appelbaum, Sebastian; Zeller, Tanja; Jousilahti, Pekka; Skytte-Johanssen, Silke; Hughes, Maria F; Blankenberg, Stefan; Salomaa, Veikko
2015-08-01
Galectin-3 is an emerging biomarker playing an important, complex role in intracellular pathways of cardiovascular diseases and heart failure. We therefore aimed to investigate the predictive value of galectin-3 for incident cardiovascular disease and heart failure. Galectin-3 levels were measured in 8444 participants of the general population-based FINRISK 1997 cohort. Cox proportional hazards regression analyses, adjusting for traditional Framingham risk factors, prevalent valvular heart disease, eGFR (estimated glomerular filtration rate) as well as NT-proBNP, were used to examine the predictive power of galectin-3. Measurements of discrimination and reclassification using 10-fold cross-validation were performed to control for over-optimism. Cardiovascular death (CD), all-cause mortality, myocardial infarction (MI), ischemic stroke (hemorrhagic strokes were excluded) and heart failure (HF) were used as endpoints. During the follow-up of up to 15 years there were in total 1136 deaths from any cause, 383 cardiac deaths, 359 myocardial infarctions, 401 ischemic strokes and 641 cases of incident heart failure. Hazard ratios (HR) were statistically significant for all-cause mortality (1.12), with results similar in both genders except for all-cause mortality. No significant improvements were observed in model discrimination or overall reclassification upon inclusion of galectin-3. Compared to NT-proBNP, the predictive power of galectin-3 was weaker, but both remained significant, independently of each other. Galectin-3 levels were predictive for future cardiovascular events, but improvements in discrimination and reclassification were modest. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Alcohol-use disorder severity predicts first-incidence of depressive disorders
Boschloo, L.; van den Brink, W.; Penninx, B. W. J. H.; Wall, M. M.; Hasin, D. S.
2012-01-01
Background. Previous studies suggest that alcohol-use disorder severity, defined by the number of criteria met, provides a more informative phenotype than dichotomized DSM-IV diagnostic measures of alcohol use disorders. Therefore, this study examined whether alcohol-use disorder severity predicted
A risk prediction model of the incidence of occupational low back pain among mining workers
Directory of Open Access Journals (Sweden)
Fikry Effendi
2011-08-01
Full Text Available Background: Low back pain (LBP) is the most frequently reported musculoskeletal disorder in workers. This study aimed to develop a risk prediction model of low back pain that can be used to prevent recurring low back pain attacks. Methods: The study used a case-control design based on the industrial community with an ergonomic approach. Total samples were 91 workers for cases and 91 workers for controls. Workers suffering from low back pain in the last 6 months served as cases, and those from the same age group and receiving the same amount of exposure without any symptoms of low back pain served as controls. Risk factors included socio-demographic factors, socio-occupational factors, physical working environmental factors, non-physical environmental factors, and biomechanical factors. A receiver operating characteristic (ROC) curve was used to describe the relationship between the true positive rate (vertical axis) and false positive rate (horizontal axis) in order to discover a risk predictive value of LBP. Results: The determinant risk factors for low back pain (LBP) were bending work postures, waist rotation movement, manual lifting, unnatural work postures, having worked for more than 18 years, and irregular sport activities. By using ROC analysis with 91.20% sensitivity and 87.90% specificity, the calculated prediction value was 0.35. This is the cut-off point to discriminate workers with and without LBP. The risk predictor values of work-induced LBP calculated by the linear equation of logistic regression varied between 0 and 11.25. Conclusion: The prediction model of work-induced LBP can be used for early detection of LBP to reduce the risk and prevent the recurrence of LBP. (Med J Indones. 2011; 20:212-6) Keywords: Ergonomics, low back pain, prediction model, work-induced LBP
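The ROC-based cut-off selection described in the abstract can be sketched by sweeping candidate thresholds and maximizing Youden's J (sensitivity + specificity − 1); the scores and labels below are illustrative, not the study's data:

```python
def best_cutoff(scores, labels):
    """Sweep candidate cut-offs; return (cutoff, sensitivity, specificity)
    maximizing Youden's J = sensitivity + specificity - 1."""
    best = None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        if best is None or sens + spec - 1 > best[3]:
            best = (t, sens, spec, sens + spec - 1)
    return best[:3]

# Hypothetical risk-predictor values (on a 0-11.25 scale as in the study)
scores = [0.1, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.8]
labels = [0,   0,   0,   1,   1,   1,   1,   1]
cutoff, sens, spec = best_cutoff(scores, labels)
```

The chosen cut-off plays the same role as the study's 0.35 value: it is the threshold that best separates workers with and without LBP.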
Ferrara, L A; Wang, H; Umans, J G; Franceschini, N; Jolly, S; Lee, E T; Yeh, J; Devereux, R B; Howard, B V; de Simone, G
2014-12-01
To evaluate whether uric acid (UA) predicts 4-yr incidence of metabolic syndrome (MetS) in non-diabetic participants of the Strong Heart Study (SHS) cohort. In this population-based prospective study we analyzed 1499 American Indians (890 women), without diabetes or MetS, examined during the 4th SHS exam and re-examined 4 years later during the 5th SHS exam. Participants were divided into sex-specific tertiles of UA and the first two tertiles (group N) were compared with the third tertile (group H). Body mass index (BMI = 28.3 ± 7 vs. 31.1 ± 7 kg/m(2)), fat-free mass (FFM = 52.0 ± 14 vs. 54.9 ± 11 kg), waist-to-hip ratio, HOMA-IR (3.66 vs. 4.26), BP and indices of inflammation were significantly higher in group H than in group N (all p < 0.001). Incident MetS at the time of the 5th exam was more frequent in group H than group N (35 vs. 28%; OR 1.44, 95% CI = 1.10-1.91; p < 0.01). This association was still significant (OR = 1.13, p = 0.04) independently of family relatedness, sex, history of hypertension, HOMA-IR, central adiposity and renal function, but disappeared when fat-free mass was included in the model. In the SHS, UA levels are associated with parameters of insulin resistance and with indices of inflammation. UA levels, however, do not predict incident MetS independently of the initial obesity-related increased FFM. Copyright © 2014 Elsevier B.V. All rights reserved.
Validation of a multi-marker model for the prediction of incident type 2 diabetes mellitus
DEFF Research Database (Denmark)
Lyssenko, Valeriya; Jørgensen, Torben; Gerwien, Robert W
2012-01-01
Purpose: To assess performance of a biomarker-based score that predicts the five-year risk of diabetes (Diabetes Risk Score, DRS) in an independent cohort that included 15-year follow-up. Method: DRS was developed on the Inter99 cohort, and validated on the Botnia cohort. Performance... In time to event analysis, rates of conversion to diabetes in low, moderate, and high DRS groups were significantly different (p < .0001). ...T2DM risk than other methods.
Chang, Juhea; Kim, Hae-Young
2014-11-01
The aim of this study was to correlate the caries-related variables of special needs patients to the incidence of new caries. Data for socio-demographic information and dental and general health status were obtained from 110 patients treated under general anesthesia because of their insufficient co-operation. The Cariogram program was used for risk assessment and other caries-related variables were also analyzed. Within a defined follow-up period (16.3 ± 9.5 months), 64 patients received dental examinations to assess newly developed caries. At baseline, the mean (SD) values of the DMFT (decayed, missing and filled teeth) and DT (decayed teeth) for the total patients were 9.2 (6.5) and 5.8 (5.3), respectively. During the follow-up period, new caries occurred in 48.4% of the patients and the mean value (SD) of the increased DMFT (iDMFT) was 2.1 (4.2). The patients with a higher increment of caries (iDMFT ≥3) showed significantly different caries risk profiles compared to the other patients (iDMFT dentistry. Past caries experience and inadequate oral hygiene maintenance were largely related to caries development in special needs patients.
Obtaining reliable likelihood ratio tests from simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
2014-01-01
Mixed models: Models allowing for continuous heterogeneity by assuming that the value of one or more parameters follows a specified distribution have become increasingly popular. This is known as ‘mixing’ parameters, and it is standard practice by researchers - and the default option in many statistical programs - to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). Problem 1: Inconsistent LR tests due to asymmetric draws: This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters this practice is very... ...are used, models reducing the dimension of the mixing distribution must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. Again this is not standard in research or statistical programs. The paper therefore recommends using fully antithetic draws...
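The recommendation of fully antithetic draws can be illustrated in a few lines: each uniform draw u is paired with 1 − u before being transformed to a normal variate, so the simulated draws are symmetric by construction (a generic sketch, not the paper's estimator):

```python
import random
from statistics import NormalDist

def antithetic_normal_draws(n_pairs, rng):
    """Fully antithetic standard-normal draws: each uniform u is paired
    with 1 - u, so the set of draws is symmetric around zero by
    construction rather than only on average."""
    draws = []
    for _ in range(n_pairs):
        u = rng.random()
        draws.append(NormalDist().inv_cdf(u))
        draws.append(NormalDist().inv_cdf(1 - u))
    return draws

rng = random.Random(0)
d = antithetic_normal_draws(500, rng)
# Symmetry removes one source of simulation noise from the likelihood:
# the sample mean of the draws is zero up to floating-point error.
```

In a simulated likelihood, this symmetry prevents the spurious skew that asymmetric (e.g. Halton) draws can induce in the estimated standard deviations of mixed parameters.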
Directory of Open Access Journals (Sweden)
Dhananjay Yadav
Full Text Available The ratio of aspartate aminotransferase (AST) to alanine aminotransferase (ALT) is of great interest as a possible novel marker of metabolic syndrome. However, longitudinal studies emphasizing the incremental predictive value of the AST-to-ALT ratio in diagnosing individuals at higher risk of developing metabolic syndrome are very scarce. Therefore, our study aimed to evaluate the AST-to-ALT ratio as an incremental predictor of new onset metabolic syndrome in a population-based cohort study. The population-based cohort study included 2276 adults (903 men and 1373 women) aged 40-70 years, who participated from 2005-2008 (baseline) without metabolic syndrome and were followed up from 2008-2011. Metabolic syndrome was defined according to the harmonized definition of metabolic syndrome. Serum concentrations of AST and ALT were determined by enzymatic methods. During an average follow-up period of 2.6 years, 395 individuals (17.4%) developed metabolic syndrome. In a multivariable adjusted model, the odds ratio (95% confidence interval) for new onset of metabolic syndrome, comparing the fourth quartile to the first quartile of the AST-to-ALT ratio, was 0.598 (0.422-0.853). The AST-to-ALT ratio also improved the area under the receiver operating characteristic curve (AUC) for predicting new cases of metabolic syndrome (0.715 vs. 0.732, P = 0.004). The net reclassification improvement of prediction models including the AST-to-ALT ratio was 0.23 (95% CI: 0.124-0.337, P<0.001), and the integrated discrimination improvement was 0.0094 (95% CI: 0.0046-0.0143, P<0.001). The AST-to-ALT ratio independently predicted the future development of metabolic syndrome and had incremental predictive value for incident metabolic syndrome.
LRP5 gene polymorphisms predict bone mass and incident fractures in elderly Australian women.
Bollerslev, J; Wilson, S G; Dick, I M; Islam, F M A; Ueland, T; Palmer, L; Devine, A; Prince, R L
2005-04-01
Postmenopausal osteoporosis and bone mass are influenced by multiple factors including genetic variation. The importance of LDL receptor-related protein 5 (LRP5) for the regulation of bone mass has recently been established: loss-of-function mutations are followed by severe osteoporosis, while gain-of-function mutations are related to increased bone mass. The aim of this study was to evaluate the role of polymorphisms in the LRP5 gene in regulating bone mass and influencing prospective fracture frequency in a well-described, large cohort of normal, ambulatory Australian women. A total of 1301 women were genotyped for seven different single nucleotide polymorphisms (SNPs) within the LRP5 gene, of which five were potentially informative. The effects of these gene polymorphisms on calcaneal quantitative ultrasound (QUS) measurements, osteodensitometry of the hip and bone-related biochemistry were examined. One SNP located in exon 15 was found to be associated with fracture rate and bone mineral density. Homozygosity for the less frequent allele of c.3357 A > G was associated with a significant reduction in bone mass at most femoral sites. The subjects with the GG genotype, compared to the AA/AG genotypes, showed a significant reduction in BUA and total hip, femoral neck and trochanter BMD (1.5%, P = 0.032; 2.7%, P = 0.047; 3.6%, P = 0.008; 3.1%, P = 0.050, respectively). In the 5-year follow-up period, 227 subjects experienced a total of 290 radiologically confirmed fractures. The incident fracture rate was significantly increased in subjects homozygous for the GG genotype (RR of fracture = 1.61, 95% CI [1.06-2.45], P = 0.027). After adjusting for total hip BMD, the fracture rate was still increased (RR = 1.67 [1.02-2.78], P = 0.045), indicating that factors other than bone mass are of importance for bone strength. In conclusion, genetic variation in LRP5 seems to be of importance for regulation of bone mass and osteoporotic fractures.
Incorporating Nuisance Parameters in Likelihoods for Multisource Spectra
Conway, J.S.
2011-01-01
We describe here the general mathematical approach to constructing likelihoods for fitting observed spectra in one or more dimensions with multiple sources, including the effects of systematic uncertainties represented as nuisance parameters, when the likelihood is to be maximized with respect to these parameters. We consider three types of nuisance parameters: simple multiplicative factors, source spectra "morphing" parameters, and parameters representing statistical uncertainties in the predicted source spectra.
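A minimal numerical sketch of the approach, for a single-bin Poisson count with one multiplicative nuisance parameter on the background, constrained by a Gaussian term and profiled out by grid minimization (the event counts below are invented for illustration):

```python
import math

def nll(n_obs, s, b, theta, sigma=0.1):
    """Negative log-likelihood (up to a constant) for a Poisson count
    with expected yield s + theta*b, where theta is a multiplicative
    nuisance factor on the background, Gaussian-constrained around 1."""
    mu = s + theta * b
    return (mu - n_obs * math.log(mu)) + 0.5 * ((theta - 1.0) / sigma) ** 2

def profile_theta(n_obs, s, b, sigma=0.1):
    """Maximize the likelihood over the nuisance parameter (here by a
    simple grid search over theta in [0.5, 1.5])."""
    grid = [0.5 + i * 0.001 for i in range(1001)]
    return min(grid, key=lambda t: nll(n_obs, s, b, t, sigma))

# Illustrative numbers, not from the paper: 25 observed events,
# expected signal 5, nominal background 18.
theta_hat = profile_theta(25, 5.0, 18.0)
# The slight excess over s + b pulls theta_hat a little above 1,
# within the freedom allowed by the constraint term.
```

Morphing parameters and per-bin statistical uncertainties extend this in the same way: each enters the likelihood as an additional argument maximized over, with its own constraint term.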
van der Heijde, Désirée; Keystone, Edward C.; Curtis, Jeffrey R.; Landewé, Robert B.; Schiff, Michael H.; Khanna, Dinesh; Kvien, Tore K.; Ionescu, Lucian; Gervitz, Leon M.; Davies, Owen R.; Luijtens, Kristel; Furst, Daniel E.
2012-01-01
Objective. To determine the relationship between timing and magnitude of Disease Activity Score [DAS28(ESR)] nonresponse (DAS28 improvement thresholds not reached) during the first 12 weeks of treatment with certolizumab pegol (CZP) plus methotrexate, and the likelihood of achieving low disease
Tan, Ting; Chen, Lizhang; Liu, Fuqiang
2014-11-01
To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model for the hand-foot-mouth disease incidence in Changsha, and to explore the feasibility of the multiple seasonal ARIMA model in predicting hand-foot-mouth disease incidence. EVIEWS 6.0 was used to establish the multiple seasonal ARIMA model from the hand-foot-mouth disease incidence from May 2008 to August 2013 in Changsha; the data from September 2013 to February 2014 served as test samples for the model, and the errors between the forecasted and actual incidence were compared. Finally, the incidence of hand-foot-mouth disease from March 2014 to August 2014 was predicted by the model. After the data sequence was made stationary, and model identification and diagnosis were performed, the multiple seasonal ARIMA (1, 0, 1)×(0, 1, 1)12 model was established. The R2 of the model fit was 0.81, the root mean square prediction error was 8.29, and the mean absolute error was 5.83. The multiple seasonal ARIMA model is a good prediction model with a good fit. It can provide reference for prevention and control work on hand-foot-mouth disease.
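The fit diagnostics quoted (root mean square prediction error and mean absolute error) come straight from the forecast residuals; a generic sketch with made-up monthly incidence figures:

```python
import math

def rmse(actual, forecast):
    """Root mean square prediction error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean absolute error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly incidence vs. model forecast (not the study's data)
actual   = [30, 42, 55, 61, 48, 37]
forecast = [28, 45, 50, 66, 50, 35]
errors = rmse(actual, forecast), mae(actual, forecast)
```

Comparing these two measures on held-out months (here September 2013 to February 2014 in the study) is what validates the fitted ARIMA model before using it to forecast.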
Lantos, Paul M; Branda, John A; Boggan, Joel C; Chudgar, Saumil M; Wilson, Elizabeth A; Ruffin, Felicia; Fowler, Vance; Auwaerter, Paul G; Nigrovic, Lise E
2015-11-01
Lyme disease is diagnosed by 2-tiered serologic testing in patients with a compatible clinical illness, but the significance of positive test results in low-prevalence regions has not been investigated. We reviewed the medical records of patients who tested positive for Lyme disease with standardized 2-tiered serologic testing between 2005 and 2010 at a single hospital system in a region with little endemic Lyme disease. Based on clinical findings, we calculated the positive predictive value of Lyme disease serology. Next, we reviewed the outcome of serologic testing in patients with select clinical syndromes compatible with disseminated Lyme disease (arthritis, cranial neuropathy, or meningitis). During the 6-year study period 4723 patients were tested for Lyme disease, but only 76 (1.6%) had positive results by established laboratory criteria. Among 70 seropositive patients whose medical records were available for review, 12 (17%; 95% confidence interval, 9%-28%) were found to have Lyme disease (6 with documented travel to endemic regions). During the same time period, 297 patients with a clinical illness compatible with disseminated Lyme disease underwent 2-tiered serologic testing. Six of them (2%; 95% confidence interval, 0.7%-4.3%) were seropositive, 3 with documented travel and 1 who had an alternative diagnosis that explained the clinical findings. In this low-prevalence cohort, fewer than 20% of positive Lyme disease tests are obtained from patients with clinically likely Lyme disease. Positive Lyme disease test results may have little diagnostic value in this setting. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
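The low positive predictive value reported here is the expected consequence of applying even a good test at low pre-test probability; a short Bayes-rule sketch (the sensitivity, specificity and prevalence in the second example are illustrative):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# From the abstract: 12 of 70 reviewable seropositive patients had
# clinically likely Lyme disease.
observed_ppv = 12 / 70  # about 17%

# Even a 99%-sensitive, 99%-specific test has a poor PPV when the
# pre-test probability is 0.2% (illustrative numbers):
low_prev_ppv = ppv(0.99, 0.99, 0.002)
```

This is why the authors conclude that positive results may have little diagnostic value in low-prevalence regions: the false positives swamp the true positives regardless of test quality.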
Yadav, Dhananjay; Choi, Eunhee; Ahn, Song Vogue; Baik, Soon Koo; Cho, Youn Zoo; Koh, Sang Baek; Huh, Ji Hye; Chang, Yoosoo; Sung, Ki-Chul; Kim, Jang Young
2016-01-01
The ratio of aspartate aminotransferase (AST) to alanine aminotransferase (ALT) is of great interest as a possible novel marker of metabolic syndrome. However, longitudinal studies emphasizing the incremental predictive value of the AST-to-ALT ratio in diagnosing individuals at higher risk of developing metabolic syndrome are very scarce. Therefore, our study aimed to evaluate the AST-to-ALT ratio as an incremental predictor of new onset metabolic syndrome in a population-based cohort study. The population-based cohort study included 2276 adults (903 men and 1373 women) aged 40-70 years, who participated from 2005-2008 (baseline) without metabolic syndrome and were followed up from 2008-2011. Metabolic syndrome was defined according to the harmonized definition of metabolic syndrome. Serum concentrations of AST and ALT were determined by enzymatic methods. During an average follow-up period of 2.6-years, 395 individuals (17.4%) developed metabolic syndrome. In a multivariable adjusted model, the odds ratio (95% confidence interval) for new onset of metabolic syndrome, comparing the fourth quartile to the first quartile of the AST-to-ALT ratio, was 0.598 (0.422-0.853). The AST-to-ALT ratio also improved the area under the receiver operating characteristic curve (AUC) for predicting new cases of metabolic syndrome (0.715 vs. 0.732, P = 0.004). The net reclassification improvement of prediction models including the AST-to-ALT ratio was 0.23 (95% CI: 0.124-0.337, P<0.001), and the integrated discrimination improvement was 0.0094 (95% CI: 0.0046-0.0143, P<0.001). The AST-to-ALT ratio independently predicted the future development of metabolic syndrome and had incremental predictive value for incident metabolic syndrome.
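The net reclassification improvement compares predicted risks from models with and without the new marker; a category-free NRI sketch over toy risk estimates (not the study's data):

```python
def continuous_nri(old_risk, new_risk, events):
    """Category-free NRI: for events, credit upward risk movement;
    for non-events, credit downward movement."""
    up_e = down_e = up_ne = down_ne = 0
    n_e = n_ne = 0
    for old, new, e in zip(old_risk, new_risk, events):
        if e:
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:
            n_ne += 1
            up_ne += new > old
            down_ne += new < old
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

# Toy predicted risks before/after adding a marker (illustrative only)
old = [0.10, 0.20, 0.30, 0.40, 0.15, 0.25]
new = [0.15, 0.25, 0.25, 0.45, 0.10, 0.20]
evt = [1,    1,    1,    1,    0,    0]
nri = continuous_nri(old, new, evt)
```

A positive NRI, as reported above (0.23), means the marker moves events toward higher predicted risk and non-events toward lower predicted risk on balance.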
Kilner, T M; Brace, S J; Cooke, M W; Stallard, N; Bleetman, A; Perkins, G D
2011-05-01
The term "big bang" major incidents is used to describe sudden, usually traumatic, catastrophic events, involving relatively large numbers of injured individuals, where demands on clinical services rapidly outstrip the available resources. Triage tools support the pre-hospital provider to prioritise which patients to treat and/or transport first based upon clinical need. The aim of this review is to identify existing triage tools and to determine the extent to which their reliability and validity have been assessed. A systematic review of the literature was conducted to identify and evaluate published data validating the efficacy of the triage tools. Studies using data from trauma patients that report on the derivation, validation and/or reliability of the specific pre-hospital triage tools were eligible for inclusion. Purely descriptive studies, reviews, exercises or reports (without supporting data) were excluded. The search yielded 1982 papers. After initial scrutiny of title and abstract, 181 papers were deemed potentially applicable and from these 11 were identified as relevant to this review (see first figure). There were two level-of-evidence-one studies, three level-of-evidence-two studies and six level-of-evidence-three studies. The two level-of-evidence-one studies were prospective validations of Clinical Decision Rules (CDRs) in children in South Africa; all the other studies were retrospective CDR derivation, validation or cohort studies. The quality of the papers was rated as good (n=3), fair (n=7) or poor (n=1). There is limited evidence for the validity of existing triage tools in big bang major incidents. Where evidence does exist it focuses on sensitivity and specificity in relation to prediction of trauma death or severity of injury based on data from single or small-number patient incidents. The Sacco system is unique in combining survivability modelling with the degree by which the system is overwhelmed in the triage decision system. The
Kawamoto, R; Ninomiya, D; Kasai, Y; Senzaki, K; Kusunoki, T; Ohtsuka, N; Kumagi, T
2018-02-19
Metabolic syndrome (MetS) is associated with an increased risk of major cardiovascular events. In women, increased serum uric acid (SUA) levels are associated with MetS and its components. However, whether baseline and changes in SUA predict incidence of MetS and its components remains unclear. The subjects comprised 407 women aged 71 ± 8 years from a rural village. We have identified participants who underwent a similar examination 11 years ago, and examined the relationship between baseline and changes in SUA, and MetS based on the modified criteria of the National Cholesterol Education Program's Adult Treatment Panel (NCEP-ATP) III report. Of these subjects, 83 (20.4%) women at baseline and 190 (46.7%) women at follow-up had MetS. Multiple linear regression analysis was performed to evaluate the contribution of each confounding factor for MetS; both baseline and changes in SUA as well as history of cardiovascular disease, low-density lipoprotein cholesterol, and estimated glomerular filtration ratio (eGFR) were independently and significantly associated with the number of MetS components during an 11-year follow-up. The adjusted odds ratios (ORs) (95% confidence interval) for incident MetS across tertiles of baseline SUA and changes in SUA were 1.00, 1.47 (0.82-2.65), and 3.11 (1.66-5.83), and 1.00, 1.88 (1.03-3.40), and 2.49 (1.38-4.47), respectively. In addition, the combined effect between increased baseline and changes in SUA was also a significant and independent determinant for the accumulation of MetS components (F = 20.29, p < 0.001). The ORs for incident MetS were significant only in subjects with age ≥ 55 years, decline in eGFR, and no baseline MetS. These results suggested that combined assessment of baseline and changes in SUA levels provides increased information for incident MetS, independent of other confounding factors in community-dwelling women.
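The tertile comparisons above reduce to odds ratios from 2×2 tables; a minimal sketch with hypothetical counts (the abstract reports only the resulting ORs, not the cell counts):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical counts for the top vs. bottom SUA tertile
or_top_tertile = odds_ratio(40, 60, 20, 80)
```

In the study these ORs were additionally adjusted for confounders via multivariable regression, which the raw 2×2 calculation does not capture.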
Bello-Chavolla, Omar Yaxmehen; Almeda-Valdes, Paloma; Gomez-Velasco, Donaji; Viveros-Ruiz, Tannia; Cruz-Bautista, Ivette; Romo-Romo, Alonso; Sánchez-Lázaro, Daniel; Meza-Oviedo, Dushan; Vargas-Vázquez, Arsenio; Campos, Olimpia Arellano; Sevilla-González, Magdalena Del Rocío; Martagón, Alexandro J; Hernández, Liliana Muñoz; Mehta, Roopa; Caballeros-Barragán, César Rodolfo; Aguilar-Salinas, Carlos A
2018-05-01
We developed a novel non-insulin-based fasting score to evaluate insulin sensitivity, validated against the euglycemic-hyperinsulinemic clamp (EHC). We also evaluated its correlation with ectopic fat accumulation and its capacity to predict incident type 2 diabetes mellitus (T2D). The discovery sample was composed of 125 subjects (57 without and 68 with T2D) that underwent an EHC. We defined METS-IR as (Ln((2*G0)+TG0)*BMI)/(Ln(HDL-c)) (G0: fasting glucose, TG0: fasting triglycerides, BMI: body mass index, HDL-c: high-density lipoprotein cholesterol), and compared its diagnostic performance against the M-value adjusted by fat-free mass (MFFM) obtained by an EHC. METS-IR was validated in a sample with EHC data, a sample with modified frequently sampled intravenous glucose tolerance test (FSIVGTT) data and a large cohort against HOMA-IR. We evaluated the correlation of the score with intrahepatic and intrapancreatic fat measured using magnetic resonance spectroscopy. Subsequently, we evaluated its ability to predict incident T2D cases in a prospective validation cohort of 6144 subjects. METS-IR demonstrated the best correlation with the MFFM ( ρ = -0.622, P index obtained from the FSIVGTT (AUC: 0.67, 95% CI: 0.53-0.81). METS-IR significantly correlated with intravisceral, intrahepatic and intrapancreatic fat and fasting insulin levels ( P 50.39) had the highest adjusted risk to develop T2D (HR: 3.91, 95% CI: 2.25-6.81). Furthermore, subjects with incident T2D had higher baseline METS-IR compared to healthy controls (50.2 ± 10.2 vs 44.7 ± 9.2, P < 0.001). METS-IR is a novel score to evaluate cardiometabolic risk in healthy and at-risk subjects and a promising tool for screening of insulin sensitivity. © 2018 European Society of Endocrinology.
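The METS-IR formula can be computed directly; the sketch below assumes the conventional mg/dL units for glucose, triglycerides and HDL-c, and the input values are illustrative:

```python
import math

def mets_ir(glucose, triglycerides, bmi, hdl):
    """METS-IR = Ln(2*G0 + TG0) * BMI / Ln(HDL-c); fasting glucose,
    triglycerides and HDL-c assumed in mg/dL, BMI in kg/m2."""
    return math.log(2 * glucose + triglycerides) * bmi / math.log(hdl)

# Illustrative inputs: glucose 95, triglycerides 140, BMI 27, HDL-c 45
score = mets_ir(95, 140, 27, 45)
# Lands near 41, below the high-risk cutoff (>50.39) cited above.
```

Higher glucose, triglycerides or BMI raise the score, while higher HDL-c lowers it, matching the direction of each component's association with insulin resistance.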
Directory of Open Access Journals (Sweden)
W Cairns S Smith
Full Text Available BACKGROUND: Leprosy is a disease of skin and peripheral nerves. The process of nerve injury occurs gradually through the course of the disease as well as acutely in association with reactions. The INFIR (ILEP Nerve Function Impairment and Reactions) Cohort was established to identify clinically relevant neurological and immunological predictors for nerve injury and reactions. METHODOLOGY/PRINCIPAL FINDINGS: The study, in two centres in India, recruited 188 new, previously untreated patients with multi-bacillary leprosy who had no recent nerve damage. These patients underwent a series of novel blood tests and nerve function testing, including motor and sensory nerve conduction, warm and cold detection thresholds, vibrometry, dynamometry, monofilament sensory testing and voluntary muscle testing, at diagnosis, at monthly follow-up for the first year and every second month for the second year. During the 2-year follow-up a total of 74 incident events were detected. Sub-clinical changes to nerve function at diagnosis and during follow-up predicted these new nerve events. Serological assays at baseline and immediately before an event were not predictive; however, change in TNF alpha before an event was a statistically significant predictor of that event. CONCLUSIONS/SIGNIFICANCE: These findings increase our understanding of the processes of nerve damage in leprosy, showing that nerve function impairment is more widespread than previously appreciated. Any nerve involvement, including sub-clinical changes, is predictive of further nerve function impairment. These new factors could be used to identify patients at high risk of developing impairment and disability.
DEFF Research Database (Denmark)
Patterson, Christopher C; Dahlquist, Gisela G; Gyürüs, Eva
2009-01-01
BACKGROUND: The incidence of type 1 diabetes in children younger than 15 years is increasing. Prediction of future incidence of this disease will enable adequate fund allocation for delivery of care to be planned. We aimed to establish 15-year incidence trends for childhood type 1 diabetes in European centres, and thereby predict the future burden of childhood diabetes in Europe. METHODS: 20 population-based EURODIAB registers in 17 countries registered 29 311 new cases of type 1 diabetes, diagnosed in children before their 15th birthday during a 15-year period, 1989-2003. Age-specific log... ...distribution across age-groups than at present (29%, 37%, and 34%, respectively). Prevalence under age 15 years is predicted to rise from 94 000 in 2005, to 160 000 in 2020. INTERPRETATION: If present trends continue, doubling of new cases of type 1 diabetes in European children younger than 5 years...
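Log-linear trend models of the kind used by such registers imply exponential growth in incidence; a sketch of the implied projection, assuming a constant annual growth rate of about 3.9% (the overall order reported by EURODIAB; the exact rate varies by age group):

```python
import math

def project_incidence(baseline, annual_growth, years):
    """Log-linear projection: incidence grows by a fixed fraction per year."""
    return baseline * math.exp(annual_growth * years)

# At roughly 3.9% annual growth, incidence doubles in about 18 years;
# faster growth in the youngest age group shortens this considerably.
doubling_time = math.log(2) / 0.039
```

This is the arithmetic behind the abstract's predicted doubling of new cases in the youngest children and the rise in prevalence from 94 000 to 160 000.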
Directory of Open Access Journals (Sweden)
Renee Heffron
Full Text Available HIV-1 prevention programs targeting HIV-1 serodiscordant couples need to identify couples that are likely to become pregnant to facilitate discussions about methods to minimize HIV-1 risk during pregnancy attempts (i.e. safer conception) or effective contraception when pregnancy is unintended. A clinical prediction tool could be used to identify HIV-1 serodiscordant couples with a high likelihood of pregnancy within one year. Using standardized clinical prediction methods, we developed and validated a tool to identify heterosexual East African HIV-1 serodiscordant couples with an increased likelihood of becoming pregnant in the next year. Datasets were from three prospectively followed cohorts, including nearly 7,000 couples from Kenya and Uganda participating in HIV-1 prevention trials and delivery projects. The final score encompassed the age of the woman, the woman's number of living children, partnership duration, having had condomless sex in the past month, and non-use of an effective contraceptive. The area under the curve (AUC) for the probability of the score to correctly predict pregnancy was 0.74 (95% CI 0.72-0.76). Scores ≥ 7 predicted a pregnancy incidence of >17% per year and captured 78% of the pregnancies. Internal and external validation confirmed the predictive ability of the score. A pregnancy likelihood score encompassing basic demographic, clinical and behavioral factors defined African HIV-1 serodiscordant couples with high one-year pregnancy incidence rates. This tool could be used to engage African HIV-1 serodiscordant couples in counseling discussions about fertility intentions in order to offer services for safer conception or contraception that align with their reproductive goals.
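The published tool is an additive point score; the sketch below mimics its structure with invented point values (the real weights are in the paper, not the abstract):

```python
def pregnancy_score(age, living_children, partnership_years,
                    condomless_last_month, effective_contraception):
    """Toy additive risk score in the spirit of the tool described above.
    The point values are invented for illustration only."""
    score = 0
    score += 3 if age < 25 else 1 if age < 35 else 0
    score += 2 if living_children == 0 else 0
    score += 1 if partnership_years < 2 else 0
    score += 2 if condomless_last_month else 0
    score += 3 if not effective_contraception else 0
    return score

# A young couple, no children, new partnership, condomless sex, no contraception
high = pregnancy_score(22, 0, 1, True, False)
# An older couple in a long partnership using effective contraception
low = pregnancy_score(41, 3, 10, False, True)
```

With the published weights, a threshold (scores ≥ 7 in the study) separates couples with >17% annual pregnancy incidence from the rest.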
Model fit after pairwise maximum likelihood
Barendse, M. T.; Ligtvoet, R.; Timmerman, M. E.; Oort, F. J.
2016-01-01
Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response
Factors Influencing the Intended Likelihood of Exposing Sexual Infidelity.
Kruger, Daniel J; Fisher, Maryanne L; Fitzgerald, Carey J
2015-08-01
There is a considerable body of literature on infidelity within romantic relationships. However, there is a gap in the scientific literature on factors influencing the likelihood of uninvolved individuals exposing sexual infidelity. Therefore, we devised an exploratory study examining a wide range of potentially relevant factors. Based in part on evolutionary theory, we anticipated nine potential domains or types of influences on the likelihoods of exposing or protecting cheaters, including kinship, strong social alliances, financial support, previous relationship behaviors (including infidelity and abuse), potential relationship transitions, stronger sexual and emotional aspects of the extra-pair relationship, and disease risk. The pattern of results supported these predictions (N = 159 men, 328 women). In addition, there appeared to be a small positive bias for participants to report infidelity when provided with any additional information about the situation. Overall, this study contributes a broad initial description of factors influencing the predicted likelihood of exposing sexual infidelity and encourages further studies in this area.
Durand, Eric; Doutriaux, Maxime; Bettinger, Nicolas; Tron, Christophe; Fauvel, Charles; Bauer, Fabrice; Dacher, Jean-Nicolas; Bouhzam, Najime; Litzler, Pierre-Yves; Cribier, Alain; Eltchaninoff, Hélène
2017-12-11
The aim of this study was to assess the incidence, prognostic impact, and predictive factors of readmission for congestive heart failure (CHF) in patients with severe aortic stenosis treated by transcatheter aortic valve replacement (TAVR). TAVR is indicated in patients with severe symptomatic aortic stenosis in whom surgery is considered high risk or is contraindicated. Readmission for CHF after TAVR remains a challenge, and data on prognostic and predictive factors are lacking. All patients who underwent TAVR from January 2010 to December 2014 were included. Follow-up was achieved for at least 1 year and included clinical and echocardiographic data. Readmission for CHF was analyzed retrospectively. This study included 546 patients, 534 (97.8%) of whom were implanted with balloon-expandable valves preferentially via the transfemoral approach in 87.8% of cases. After 1 year, 285 patients (52.2%) had been readmitted at least once, 132 (24.1%) for CHF. Patients readmitted for CHF had an increased risk for death (p < 0.0001) and cardiac death (p < 0.0001) compared with those not readmitted for CHF. On multivariate analysis, aortic mean gradient (hazard ratio [HR]: 0.88; 95% confidence interval [CI]: 0.79 to 0.99; p = 0.03), post-procedural blood transfusion (HR: 2.27; 95% CI: 1.13 to 5.56; p = 0.009), severe post-procedural pulmonary hypertension (HR: 1.04; 95% CI: 1.00 to 1.07; p < 0.0001), and left atrial diameter (HR: 1.47; 95% CI: 1.08 to 2.01; p = 0.02) were independently associated with CHF readmission at 1 year. Readmission for CHF after TAVR was frequent and was strongly associated with 1-year mortality. Low gradient, persistent pulmonary hypertension, left atrial dilatation, and transfusions were predictive of readmission for CHF. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Dissociating response conflict and error likelihood in anterior cingulate cortex.
Yeung, Nick; Nieuwenhuis, Sander
2009-11-18
Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.
Boschloo, Lynn; Vogelzangs, Nicole; van den Brink, Wim; Smit, Johannes H.; Veltman, Dick J.; Beekman, Aartjan T. F.; Penninx, Brenda W. J. H.
2013-01-01
Introduction: Depressive and anxiety disorders may predict first incidence of alcohol abuse and alcohol dependence. This study aims to identify those persons who are at an increased risk of developing alcohol abuse or alcohol dependence by considering the heterogeneity of depressive and anxiety
Kerkhoff, Andrew D.; Wood, Robin; Cobelens, Frank G.; Gupta-Wright, Ankur; Bekker, Linda-Gail; Lawn, Stephen D.
2015-01-01
Low haemoglobin concentrations may be predictive of incident tuberculosis (TB) and death in HIV-infected patients receiving antiretroviral therapy (ART), but data are limited and inconsistent. We examined these relationships retrospectively in a long-term South African ART cohort with multiple
Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation
DEFF Research Database (Denmark)
Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik
2017-01-01
The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated by the ...
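The covariance-matching idea the abstract mentions can be illustrated in miniature. The sketch below is a scalar Kalman filter (not a UKF, and not the authors' implementation; the window size and all names are illustrative assumptions) that re-estimates the measurement-noise variance from recent innovations:

```python
import random

def adaptive_kalman(zs, q=1e-4, r0=1.0, window=50):
    """Scalar Kalman filter (random-walk state) that re-estimates the
    measurement-noise variance r from recent innovations:
    covariance matching uses r ~ mean(v^2) - prior state variance."""
    x, P, r = zs[0], 1.0, r0
    innovations, priors, estimates = [], [], []
    for z in zs[1:]:
        P_prior = P + q                  # time update (predict)
        v = z - x                        # innovation
        K = P_prior / (P_prior + r)      # Kalman gain
        x = x + K * v                    # measurement update
        P = (1 - K) * P_prior
        innovations.append(v)
        priors.append(P_prior)
        if len(innovations) >= window:   # adapt r over a sliding window
            c = sum(vi * vi for vi in innovations[-window:]) / window
            r = max(1e-6, c - sum(priors[-window:]) / window)
        estimates.append(x)
    return estimates, r

# Synthetic measurements of a constant level 10 with noise variance 4
random.seed(0)
zs = [10.0 + random.gauss(0.0, 2.0) for _ in range(2000)]
estimates, r_hat = adaptive_kalman(zs)
```

With the true noise variance of 4.0, `r_hat` should settle near 4; the MLE variant described in the abstract instead maximizes the innovation likelihood over the noise parameters, and the UKF replaces the linear predict/update cycle with sigma-point propagation.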
Phylogenetics, likelihood, evolution and complexity.
de Koning, A P Jason; Gu, Wanjun; Castoe, Todd A; Pollock, David D
2012-11-15
Phylogenetics, likelihood, evolution and complexity (PLEX) is a flexible and fast Bayesian Markov chain Monte Carlo software program for large-scale analysis of nucleotide and amino acid data using complex evolutionary models in a phylogenetic framework. The program gains large speed improvements over standard approaches by implementing 'partial sampling of substitution histories', a data augmentation approach that can reduce data analysis times from months to minutes on large comparative datasets. A variety of nucleotide and amino acid substitution models are currently implemented, including non-reversible and site-heterogeneous mixture models. Due to efficient algorithms that scale well with data size and model complexity, PLEX can be used to make inferences from hundreds to thousands of taxa in only minutes on a desktop computer. It also performs probabilistic ancestral sequence reconstruction. Future versions will support detection of co-evolutionary interactions between sites, probabilistic tests of convergent evolution and rigorous testing of evolutionary hypotheses in a Bayesian framework. PLEX v1.0 is licensed under GPL. Source code and documentation will be available for download at www.evolutionarygenomics.com/ProgramsData/PLEX. PLEX is implemented in C++ and supported on Linux, Mac OS X and other platforms supporting standard C++ compilers. Example data, control files, documentation and accessory Perl scripts are available from the website. David.Pollock@UCDenver.edu. Supplementary data are available at Bioinformatics online.
Xu, Tian; Zhong, Chongke; Wang, Aili; Guo, Zhirong; Bu, Xiaoqing; Zhou, Yipeng; Tian, Yunfan; HuangFu, Xinfeng; Zhu, Zhengbao; Zhang, Yonghong
2017-09-01
A previous study suggested that human cartilage glycoprotein-39 (YKL-40) was positively associated with hypertension incidence in certain high-risk groups. We aimed to investigate whether YKL-40 is an effective biomarker for predicting hypertension incidence among prehypertensive subjects. In a 1:1 matched case-control study of 700 pairs with available YKL-40 levels nested in a prospective cohort of initially healthy Chinese subjects, 294 pairs additionally had matched baseline BP status (prehypertensive or normotensive). Multivariable conditional logistic regression analyses were used to calculate the odds ratios (95% confidence intervals) of hypertension associated with higher levels of YKL-40 in the prehypertensive and normotensive subgroups, respectively. In the prehypertensive subgroup, subjects in the highest quartile of plasma YKL-40 levels had a significantly higher risk of hypertension incidence compared with those in the lowest quartile; the odds ratio (95% confidence interval) was 2.01 (1.05-3.85), and a significant positive trend between YKL-40 levels and hypertension incidence was found. YKL-40 levels at baseline were positively associated with hypertension incidence among prehypertensive subjects. YKL-40 may represent a novel biomarker for predicting hypertension risk in the prehypertensive population. Copyright © 2017. Published by Elsevier B.V.
Solomon, Marc M.; Mayer, Kenneth H.; Glidden, David V.; Liu, Albert Y.; McMahan, Vanessa M.; Guanira, Juan V.; Chariyalertsak, Suwat; Fernandez, Telmo; Grant, Robert M.; Bekker, Linda-Gail; Buchbinder, Susan; Casapia, Martin; Chariyalertsak, Suwat; Guanira, Juan; Kallas, Esper; Lama, Javier; Mayer, Kenneth; Montoya, Orlando; Schechter, Mauro; Veloso, Valdiléa
2014-01-01
Background. Syphilis infection may potentiate transmission of human immunodeficiency virus (HIV). We sought to determine the extent to which HIV acquisition was associated with syphilis infection within an HIV preexposure prophylaxis (PrEP) trial and whether emtricitabine/tenofovir (FTC/TDF) modified that association. Methods. The Preexposure Prophylaxis Initiative (iPrEx) study randomly assigned 2499 HIV-seronegative men and transgender women who have sex with men (MSM) to receive oral daily FTC/TDF or placebo. Syphilis prevalence at screening and incidence during follow-up were measured. Hazard ratios for the effect of incident syphilis on HIV acquisition were calculated. The effect of FTC/TDF on incident syphilis and HIV acquisition was assessed. Results. Of 2499 individuals, 360 (14.4%) had a positive rapid plasma reagin test at screening; 333 (92.5%) had a positive confirmatory test, which did not differ between the arms (FTC/TDF vs placebo, P = .81). The overall syphilis incidence during the trial was 7.3 cases per 100 person-years. There was no difference in syphilis incidence between the study arms (7.8 cases per 100 person-years for FTC/TDF vs 6.8 cases per 100 person-years for placebo, P = .304). HIV incidence varied by incident syphilis (2.8 cases per 100 person-years for no syphilis vs 8.0 cases per 100 person-years for incident syphilis), reflecting a hazard ratio of 2.6 (95% confidence interval, 1.6–4.4); there was no evidence that FTC/TDF modified the effect of incident syphilis on HIV incidence. Conclusions. In HIV-seronegative MSM, syphilis infection was associated with HIV acquisition in this PrEP trial; a syphilis diagnosis should prompt providers to offer PrEP unless otherwise contraindicated. PMID:24928295
The Laplace Likelihood Ratio Test for Heteroscedasticity
Directory of Open Access Journals (Sweden)
J. Martin van Zyl
2011-01-01
Full Text Available It is shown that the likelihood ratio test for heteroscedasticity, assuming the Laplace distribution, gives good results for Gaussian and fat-tailed data. The likelihood ratio test, assuming normality, is very sensitive to any deviation from normality, especially when the observations are from a distribution with fat tails. Such a likelihood test can also be used as a robust test for a constant variance in residuals or a time series if the data is partitioned into groups.
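The test described above can be sketched in a few lines. This is a hedged illustration under a simple two-group setup (function names and the simulated data are assumptions, not the paper's code): the statistic compares maximized Laplace likelihoods with separate versus common scale parameters, where the Laplace MLEs are the sample median and the mean absolute deviation about it.

```python
import random
from math import log
from statistics import median

def laplace_profile_loglik(xs):
    """Maximized Laplace log-likelihood: location = sample median,
    scale b = mean absolute deviation about the median."""
    m = median(xs)
    b = sum(abs(x - m) for x in xs) / len(xs)
    return -len(xs) * (log(2.0 * b) + 1.0)

def laplace_lrt(group1, group2):
    """Likelihood-ratio statistic for H0: both groups share one scale
    (asymptotically chi-square with 1 df under H0)."""
    ll_alt = laplace_profile_loglik(group1) + laplace_profile_loglik(group2)
    m1, m2 = median(group1), median(group2)
    n = len(group1) + len(group2)
    b0 = (sum(abs(x - m1) for x in group1)
          + sum(abs(x - m2) for x in group2)) / n   # common-scale MLE
    ll_null = -n * (log(2.0 * b0) + 1.0)
    return 2.0 * (ll_alt - ll_null)

random.seed(1)
same = laplace_lrt([random.gauss(0, 1) for _ in range(200)],
                   [random.gauss(0, 1) for _ in range(200)])
diff = laplace_lrt([random.gauss(0, 1) for _ in range(200)],
                   [random.gauss(0, 5) for _ in range(200)])
```

`diff` should be far above the chi-square(1) critical value of 3.84 while `same` stays small, even though the simulated data are Gaussian rather than Laplace, mirroring the robustness claim in the abstract.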
Matson, Pamela A; Fortenberry, J Dennis; Chung, Shang-En; Gaydos, Charlotte A; Ellen, Jonathan M
2018-03-24
Feelings of intimacy, perceptions of partner concurrency (PPC) and perceptions of risk for an STD (PRSTD) are meaningful and dynamic attributes of adolescent sexual relationships. Our objective was to examine whether variations in these STI-associated feelings and perceptions predicted incident Chlamydia trachomatis and/or Neisseria gonorrhoeae infection within a prospective cohort of urban adolescent women. A cohort of clinic-recruited women aged 16-19 completed daily surveys on feelings and risk perceptions about each current sex partner on a smartphone continuously for up to 18 months. Urine was tested for C. trachomatis and N. gonorrhoeae every 3 months. Daily responses were averaged across the week. As overall means for trust, closeness and commitment were high, data were coded to indicate any decrease in feelings from the previous week. PRSTD and PPC were reverse coded to indicate any increase from the previous week. An index was created to examine the cumulative effect of variation in these feelings and perceptions. Generalised linear models were used to account for correlation among repeated measures within relationships. For each week in which there was a decrease in trust, there was a 45% increase in the risk of being infected with an STI at follow-up (relative risk (RR) 1.45, 95% CI 1.18 to 1.78, P=0.004). Neither a decrease in closeness or commitment, nor an increase in PRSTD or PPC, was associated with an STI outcome. Cumulatively, the index measure indicated that a change in an additional feeling or perception over the week increased the risk of an STI by 14% (RR 1.14, 95% CI 1.02 to 1.29, P=0.026). A decrease in feelings of trust towards a main partner may be a more sensitive indicator of STI risk than PRSTD, PPC or commitment. The next generation of behavioural interventions for youth will need strategies to address feelings of intimacy within adolescent romantic relationships.
Li, Huan; Li, Yan; Li, Cheng; Li, Wenshan; Wang, Guosong; Zhang, Song
2017-08-01
Marine oil spills have deep negative effects on both marine ecosystems and human activities. In recent years, owing to China's high-speed economic development, the demand for crude oil in China has increased year by year, raising the risk of marine oil spills. It is therefore necessary to strengthen emergency response to marine oil spills in China and to improve oil spill prediction techniques. In this study, based on an oil spill model and a GIS platform, we developed the Bohai and Yellow Sea oil spill prediction system. Combined with high-resolution meteorological and oceanographic forecast results, the system was applied to predict the drift and diffusion of the Huangdao '11.22' oil spill incident. Although the prediction could not be validated against SAR images because of the lack of satellite observations, it still provided effective and referable information on oil spill behavior to the Maritime Safety Administration.
Essays on empirical likelihood in economics
Gao, Z.
2012-01-01
This thesis intends to exploit the roots of empirical likelihood and its related methods in mathematical programming and computation. The roots will be connected and the connections will induce new solutions for the problems of estimation, computation, and generalization of empirical likelihood.
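The link between empirical likelihood and computation can be made concrete with the canonical textbook case: profiling the empirical likelihood for a mean reduces to a one-dimensional Lagrange-multiplier search. The sketch below follows that standard setup and is not drawn from the thesis itself:

```python
from math import log

def el_logratio(xs, mu, tol=1e-12):
    """-2 log empirical likelihood ratio for H0: E[X] = mu
    (asymptotically chi-square with 1 df). Solves the Lagrange
    condition sum((x_i - mu) / (1 + lam*(x_i - mu))) = 0 by bisection."""
    d = [x - mu for x in xs]
    if max(d) <= 0 or min(d) >= 0:
        raise ValueError("mu must lie strictly inside the data range")
    lo = -1.0 / max(d) + 1e-9   # bounds keep all implied weights positive
    hi = -1.0 / min(d) - 1e-9

    def g(lam):
        return sum(di / (1.0 + lam * di) for di in d)

    while hi - lo > tol:        # g is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(log(1.0 + lam * di) for di in d)
```

For data symmetric about `mu` the statistic is essentially zero (the multiplier is zero), and it grows as `mu` moves away from the sample mean, which is what makes the profile usable for tests and confidence intervals.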
Composite likelihood method for inferring local pedigrees
DEFF Research Database (Denmark)
Ko, Amy; Nielsen, Rasmus
2017-01-01
such as polygamous families, multi-generational families, and pedigrees in which many of the member individuals are missing. Computational speed is greatly enhanced by the use of a composite likelihood function which approximates the full likelihood. We validate our method on simulated data and show that it can...
DEFF Research Database (Denmark)
Nørskov, M S; Frikke-Schmidt, R; Bojesen, S E
2011-01-01
The HRs for prostate cancer and for death after prostate cancer diagnosis were, respectively, 1.2 (0.8-1.8) and 1.2 (0.6-2.1) for GSTT1*1/0, and 1.8 (1.1-3.0) and 2.2 (1.1-4.4) for GSTT1*0/0 versus GSTT1*1/1. In women, the cumulative incidence of corpus uteri cancer increased with decreasing GSTT1 copy numbers (trend=0.04). The HRs for corpus uteri cancer were, respectively, 1.8 (1.0-3.2) and 2.2 (1.0-4.6) for GSTT1*1/0 and GSTT1*0/0 versus GSTT1*1/1. Finally, the cumulative incidence of bladder cancer increased, and the cumulative 5-year survival decreased, with decreasing GSTM1 copy numbers (P=0.03-0.05). The HRs for bladder cancer were, respectively, 1.5 (0.7-3.2) and 2.0 (0.9-4.3) for GSTM1*1/0 and GSTM1*0/0 versus GSTM1*1/1. The HR for death after bladder cancer diagnosis was 1.9 (1.0-3.7) for GSTM1*0/0 versus GSTM1*1/0. In conclusion, exact CNV in GSTT1 and GSTM1 predict incidence and 5-year survival from prostate and bladder cancer.
Directory of Open Access Journals (Sweden)
Élise Fortin
Full Text Available The optimal way to measure antimicrobial use in hospital populations, as a complement to surveillance of resistance, is still unclear. Using respiratory isolates and antimicrobial prescriptions from nine intensive care units (ICUs), this study aimed to identify the indicator of antimicrobial use that predicted prevalence and incidence rates of resistance with the best accuracy. Retrospective cohort study including all patients admitted to three neonatal (NICU), two pediatric (PICU) and four adult ICUs between April 2006 and March 2010. Ten different resistance/antimicrobial use combinations were studied. After adjustment for ICU type, indicators of antimicrobial use were successively tested in regression models to predict resistance prevalence and incidence rates, per 4-week time period, per ICU. Binomial regression and Poisson regression were used to model prevalence and incidence rates, respectively. Multiplicative and additive models were tested, as well as no time lag and a one 4-week-period time lag. For each model, the mean absolute error (MAE) in prediction of resistance was computed. The most accurate indicator was compared to the other indicators using t-tests. Results for all indicators were equivalent, except in 1 of the 20 scenarios studied. In this scenario, where prevalence of carbapenem-resistant Pseudomonas sp. was predicted from carbapenem use, recommended daily doses per 100 admissions were less accurate than courses per 100 patient-days (p = 0.0006). A single best indicator to predict antimicrobial resistance might not exist. Feasibility considerations such as ease of computation or potential external comparisons could be decisive in the choice of an indicator for surveillance of healthcare antimicrobial use.
Shen, Fuhai; Liu, Hongbo; Yuan, Juxiang; Han, Bing; Cui, Kai; Ding, Yu; Fan, Xueyun; Cao, Hong; Yao, Sanqiao; Suo, Xia; Sun, Zhiqian; Yun, Xiang; Hua, Zhengbing; Chen, Jie
2015-01-01
We aimed to estimate the economic losses currently caused by coal workers' pneumoconiosis (CWP) and, on the basis of these measurements, confirm the economic benefit of preventive measures. Our cohort study included 1,847 patients with CWP and 43,742 coal workers without CWP who were registered in the employment records of the Datong Coal Mine Group. We calculated the cumulative incidence rate of pneumoconiosis using the life-table method. We used the dose-response relationship between cumulative incidence density and cumulative dust exposure to predict the future trend in the incidence of CWP. We calculated the economic loss caused by CWP and the economic effectiveness of CWP prevention using a step-wise model. The cumulative incidence rates of CWP in the tunneling, mining, combining, and helping cohorts were 58.7%, 28.1%, 21.7%, and 4.0%, respectively. The cumulative incidence rates increased gradually with increasing cumulative dust exposure (CDE). We predicted 4,300 new CWP cases, assuming the dust concentrations remained at the levels of 2011. If advanced dustproof equipment were adopted, 537 fewer people would be diagnosed with CWP. In all, losses of 1.207 billion Renminbi (RMB, the official currency of China) would be prevented and 4,698.8 healthy life years would be gained. According to our study, investments in advanced dustproof equipment would total 843 million RMB; the ratio of investment to restored economic losses was 1:1.43. Controlling workplace dust concentrations is critical to reduce the onset of pneumoconiosis and to achieve economic benefits.
Gaussian copula as a likelihood function for environmental models
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, (2) we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
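The two ingredients of such a construction can be sketched briefly: an empirical marginal CDF for the model errors and the log-density of a bivariate Gaussian copula, which can couple the error at time t with the error at t-1 to capture autocorrelation. This is an illustrative sketch with assumed names, not the authors' code:

```python
from math import log
from statistics import NormalDist

def ecdf(sample):
    """Empirical CDF mapped strictly inside (0, 1) so the normal
    quantile below stays finite (rank / (n + 1) convention)."""
    s = sorted(sample)
    n = len(s)
    def F(x):
        r = sum(1 for v in s if v <= x)
        return min(max(r / (n + 1.0), 1e-6), 1.0 - 1e-6)
    return F

def gaussian_copula_logpdf(u, v, rho):
    """Log-density of the bivariate Gaussian copula with correlation rho,
    evaluated at (u, v) in the open unit square."""
    z1 = NormalDist().inv_cdf(u)
    z2 = NormalDist().inv_cdf(v)
    r2 = rho * rho
    return (-0.5 * log(1.0 - r2)
            - (r2 * (z1 * z1 + z2 * z2) - 2.0 * rho * z1 * z2)
              / (2.0 * (1.0 - r2)))
```

A likelihood for a sequence of model errors e_1..e_T would then add the copula terms `gaussian_copula_logpdf(F(e_t), F(e_t_minus_1), rho)` to the marginal log-densities; at rho = 0 the copula term vanishes, recovering independence.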
Snijders, Cathelijne; Kollen, Boudewijn J.; van Lingen, Richard A.; Fetter, Willem P. F.; Molendijk, Harry; Kok, J. H.; te Pas, E.; Pas, H.; van der Starre, C.; Bloemendaal, E.; Lopes Cardozo, R. H.; Molenaar, A. M.; Giezen, A.; van Lingen, R. A.; Maat, H. E.; Molendijk, A.; Snijders, C.; Lavrijssen, S.; Mulder, A. L. M.; de Kleine, M. J. K.; Koolen, A. M. P.; Schellekens, M.; Verlaan, W.; Vrancken, S.; Fetter, W. P. F.; Schotman, L.; van der Zwaan, A.; van der Tuijn, Y.; Tibboel, D.; van der Schaaf, T. W.; Klip, H.; Kollen, B. J.
2009-01-01
OBJECTIVES: Safety culture assessments are increasingly used to evaluate patient-safety programs. However, it is not clear which aspects of safety culture are most relevant in understanding incident reporting behavior, and ultimately improving patient safety. The objective of this study was to
Korin, Maya Rom; Chaplin, William F.; Shaffer, Jonathan A.; Butler, Mark J.; Ojie, Mary-Jane; Davidson, Karina W.
2013-01-01
Objective: To examine gender differences in the association between beliefs in heart disease preventability and 10-year incidence of coronary heart disease (CHD) in a population-based sample. Methods: A total of 2,688 noninstitutionalized Nova Scotians without prior CHD enrolled in the Nova Scotia Health Study (NSHS95) and were followed for 10 years.
Lee, Tsair-Fwu; Chao, Pei-Ju; Chang, Liyun; Ting, Hui-Min; Huang, Yu-Jie
2015-01-01
Symptomatic radiation pneumonitis (SRP), which decreases quality of life (QoL), is the most common pulmonary complication in patients receiving breast irradiation. When it occurs, acute SRP usually develops 4-12 weeks after completion of radiotherapy and presents as a dry cough, dyspnea and low-grade fever. If the incidence of SRP is reduced, not only the QoL but also the compliance of breast cancer patients may be improved. Therefore, we investigated the incidence of SRP in breast cancer patients after hybrid intensity-modulated radiotherapy (IMRT) to find the risk factors, which may have important effects on the risk of radiation-induced complications. In total, 93 patients with breast cancer were evaluated. The final endpoint for acute SRP was defined as density changes together with symptoms, as measured using computed tomography. The risk factors for a multivariate normal tissue complication probability model of SRP were determined using the least absolute shrinkage and selection operator (LASSO) technique. Five risk factors were selected using LASSO: the percentage of the ipsilateral lung volume that received more than 20 Gy (IV20), energy, age, body mass index (BMI) and T stage. Positive associations were demonstrated among the incidence of SRP, IV20, and patient age. Energy, BMI and T stage showed a negative association with the incidence of SRP. Our analyses indicate that the risk of SRP following hybrid IMRT in elderly or low-BMI breast cancer patients is increased unless the percentage of the ipsilateral lung volume receiving more than 20 Gy is kept below a limit. We suggest defining a dose-volume percentage constraint on IV20 in radiation therapy treatment planning to maintain the incidence of SRP below 20%, and paying attention to the sequelae, especially in elderly or low-BMI breast cancer patients. (AIV20: the absolute ipsilateral lung volume that received more than 20 Gy, in cc.)
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process … with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.
Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao
2017-10-06
Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Ecological study. Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011-2014. Analyses were conducted at the aggregate level and no confidential information was involved. A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. A high correlation between HFMD incidence and BDI (r = 0.794) was found. The model including BDI achieved the best goodness-of-fit, with an Akaike information criterion (AIC) value of -345.332, whereas the model including both BDI and temperature had the most accurate prediction in terms of the mean absolute percentage error (MAPE) of 101.745%. An ARIMAX model incorporating search engine query data significantly improved the prediction of HFMD. Further studies are warranted to examine whether including search engine query data also improves the prediction of other infectious diseases in other settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
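The structure of a model with external variables can be sketched with a stripped-down autoregression with exogenous inputs, fitted by ordinary least squares. This is a pure-Python stand-in for a full seasonal ARIMAX (variable names, coefficients and the synthetic data are assumptions for illustration):

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations
    (Gaussian elimination with partial pivoting)."""
    k = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    b = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for i in range(col + 1, k):
            f = A[i][col] / A[col][col]
            for j in range(col, k):
                A[i][j] -= f * A[col][j]
            b[i] -= f * b[col]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

def fit_arx(y, temp, bdi):
    """AR(1) plus exogenous inputs: y_t = c + a*y_{t-1} + b1*temp_t + b2*bdi_t."""
    X = [[1.0, y[t - 1], temp[t], bdi[t]] for t in range(1, len(y))]
    return ols(X, y[1:])

# Synthetic weekly series with known coefficients (c=2, a=0.5, b1=0.3, b2=0.1)
random.seed(2)
temp = [random.uniform(10.0, 30.0) for _ in range(300)]
bdi = [random.uniform(0.0, 100.0) for _ in range(300)]
y = [20.0]
for t in range(1, 300):
    y.append(2.0 + 0.5 * y[-1] + 0.3 * temp[t] + 0.1 * bdi[t]
             + random.gauss(0.0, 0.2))
beta = fit_arx(y, temp, bdi)
```

`beta` should land close to (2, 0.5, 0.3, 0.1); a real seasonal ARIMAX adds differencing, moving-average terms and seasonality on top of this regression core.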
Lau, Joseph T F; Gross, Danielle L; Wu, Anise M S; Cheng, Kit-Man; Lau, Mason M C
2017-06-01
Internet use has global influences on all aspects of life and has become a growing concern. Cross-sectional studies on Internet addiction (IA) have been reported but causality is often unclear. More longitudinal studies are warranted. We investigated incidence and predictors of IA conversion among secondary school students. A 12-month longitudinal study was conducted among Hong Kong Chinese Secondary 1-4 students (N = 8286). Using the 26-item Chen Internet Addiction Scale (CIAS; cut-off >63), non-IA cases were identified at baseline. Conversion to IA during the follow-up period was detected, with incidence and predictors derived using multi-level models. Prevalence of IA was 16.0% at baseline and incidence of IA was 11.81 per 100 person-years (13.74 for males and 9.78 for females). Risk background factors were male sex, higher school forms, and living with only one parent, while protective background factors were having a mother/father with university education. Adjusted for all background factors, higher baseline CIAS score (ORa = 1.07), longer hours spent online for entertainment and social communication (ORa = 1.92 and 1.63 respectively), and Health Belief Model (HBM) constructs (except perceived severity of IA and perceived self-efficacy to reduce use) were significant predictors of conversion to IA (ORa = 1.07-1.45). Prevalence and incidence of IA conversion were high and need attention. Interventions should take into account risk predictors identified, such as those of the HBM, and time management skills should be enhanced. Screening is warranted to identify those at high risk (e.g. high CIAS score) and provide them with primary and secondary interventions.
Vazquez, Gabriela; Duval, Sue; Jacobs, David R; Silventoinen, Karri
2007-01-01
Body mass index, waist circumference, and waist/hip ratio have been shown to be associated with type 2 diabetes. From the clinical perspective, central obesity (approximated by waist circumference or waist/hip ratio) is known to generate diabetogenic substances and should therefore be more informative than general obesity (body mass index). Because of their high correlation, from the statistical perspective, body mass index and waist circumference are unlikely to yield different answers. To compare associations of diabetes incidence with general and central obesity indicators, the authors conducted a meta-analysis based on published studies from 1966 to 2004 retrieved from a PubMed search. The analysis was performed with 32 studies out of 432 publications initially identified. Measures of association were transformed to log relative risks per standard deviation (pooled across all studies) increase in the obesity indicator and pooled using random effects models. The pooled relative risks for incident diabetes were 1.87 (95% confidence interval (CI): 1.67, 2.10), 1.87 (95% CI: 1.58, 2.20), and 1.88 (95% CI: 1.61, 2.19) per standard deviation of body mass index, waist circumference, and waist/hip ratio, respectively, demonstrating that these three obesity indicators have similar associations with incident diabetes. Although the clinical perspective focusing on central obesity is appealing, further research is needed to determine the usefulness of waist circumference or waist/hip ratio over body mass index.
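The pooling step described above (per-study log relative risks combined under a random-effects model) can be sketched with the standard DerSimonian-Laird estimator; the per-study numbers below are made up for illustration, not taken from the meta-analysis:

```python
from math import exp, log, sqrt

def pool_random_effects(log_rrs, ses):
    """DerSimonian-Laird random-effects pooling of per-study log relative
    risks with standard errors; returns the pooled RR and its 95% CI."""
    w = [1.0 / (s * s) for s in ses]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)    # between-study variance
    w_re = [1.0 / (s * s + tau2) for s in ses]
    mu = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se = 1.0 / sqrt(sum(w_re))
    return exp(mu), (exp(mu - 1.96 * se), exp(mu + 1.96 * se))

# Hypothetical per-study relative risks per SD of an obesity indicator
rr, ci = pool_random_effects([log(1.7), log(1.9), log(2.1)],
                             [0.06, 0.08, 0.10])
```

The pooled RR necessarily lies between the smallest and largest study estimates, and when between-study heterogeneity (tau²) is zero the result coincides with fixed-effect inverse-variance pooling.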
Andersson, Jonas; Wennberg, Patrik; Lundblad, Dan; Escher, Stefan A; Jansson, Jan-Håkan
2016-11-01
More than half of cardiovascular mortality occurs outside the hospital, mainly due to consistently low survival rates from out-of-hospital cardiac arrest. This is a prospective, nested, case-control study derived from the Västerbotten Intervention Programme and the World Health Organization's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease study in northern Sweden (1986-2006). To determine predictors for sudden cardiac death, risk factors for cardiovascular disease were compared between incident myocardial infarction with sudden cardiac death (n = 363) and survivors of incident myocardial infarction (n = 1998) using multivariate logistic regression analysis. Diabetes had the strongest association with sudden cardiac death out of all evaluated risk factors (odds ratio (OR) 1.83, 95% confidence interval (CI) 1.30-2.59), followed by low education (OR 1.55, 95% CI 1.19-2.01), high body mass index (OR 1.05, 95% CI 1.02-1.08) and male sex (OR 1.42, 95% CI 1.001-2.01). The pattern of risk factors for incident myocardial infarction is different among survivors and those who die within 24 hours. The risk factors that contribute the most to death within 24 hours are diabetes mellitus, high body mass index and low education level, and can be addressed at both the public health level and by general practitioners. © The European Society of Cardiology 2016.
Green, Richard; Macmillan, Mark T; Tikka, Theofano; Bruce, Lorna; Murchison, John T; Nixon, Iain J
2017-11-01
The management of pulmonary nodules is challenging; unfortunately, little is known about the incidence and significance of pulmonary nodules in patients with head and neck cancer. A review was conducted of 400 consecutive patients with head and neck cancer. Imaging was reviewed to identify the incidence of nodules and patient, tumor, and radiological factors associated with the risk of malignancy. Nodules were found in 58% of patients, with a malignancy rate of 6%. Age was the only predictor of having a nodule, and advanced stage III-IV disease was a predictor of malignancy (P = .023; odds ratio [OR] 10.64; confidence interval 1.33-84.98). Patients presenting with head and neck cancer have a higher incidence of pulmonary nodules and a higher risk of malignancy. In contrast to the British Thoracic Society (BTS) guidelines, which use size to guide the need for serial scans, we would recommend follow-up imaging in all patients with head and neck cancer with nodules, irrespective of size. © 2017 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Tsair-Fwu Lee
Full Text Available Symptomatic radiation pneumonitis (SRP), which decreases quality of life (QoL), is the most common pulmonary complication in patients receiving breast irradiation. If it occurs, acute SRP usually develops 4-12 weeks after completion of radiotherapy and presents as a dry cough, dyspnea and low-grade fever. If the incidence of SRP is reduced, not only the QoL but also the compliance of breast cancer patients may be improved. Therefore, we investigated the incidence of SRP in breast cancer patients after hybrid intensity modulated radiotherapy (IMRT) to find the risk factors, which may have important effects on the risk of radiation-induced complications. In total, 93 patients with breast cancer were evaluated. The final endpoint for acute SRP was defined as those who had density changes together with symptoms, as measured using computed tomography. The risk factors for a multivariate normal tissue complication probability model of SRP were determined using the least absolute shrinkage and selection operator (LASSO) technique. Five risk factors were selected using LASSO: the percentage of the ipsilateral lung volume that received more than 20 Gy (IV20), energy, age, body mass index (BMI) and T stage. Positive associations were demonstrated among the incidence of SRP, IV20, and patient age. Energy, BMI and T stage showed a negative association with the incidence of SRP. Our analyses indicate that the risk of SRP following hybrid IMRT in elderly or low-BMI breast cancer patients is increased even when the percentage of the ipsilateral lung volume receiving more than 20 Gy is controlled below a given limit. We suggest defining a dose-volume percentage constraint of IV20 < 37% (or AIV20 < 310 cc) for the irradiated ipsilateral lung in radiation therapy treatment planning to keep the incidence of SRP below 20%, and paying particular attention to sequelae in elderly or low-BMI breast cancer patients. (AIV20: the absolute ipsilateral lung volume that received more than 20 Gy.
Model fit after pairwise maximum likelihood
Directory of Open Access Journals (Sweden)
M. T. eBarendse
2016-04-01
Full Text Available Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations.
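The pairwise trick described above, replacing the intractable full multivariate likelihood with a sum of bivariate log-likelihoods over all variable pairs, can be illustrated with a toy example. The data and the `cell_prob` model below are hypothetical stand-ins, not the paper's factor model:

```python
import math
from itertools import combinations

def pairwise_loglik(data, cell_prob):
    """Pairwise ML objective: sum of bivariate log-likelihoods over all
    variable pairs. `data` is a list of response vectors; `cell_prob(i, j, a, b)`
    returns the model-implied probability of categories (a, b) for pair (i, j)."""
    n_vars = len(data[0])
    total = 0.0
    for i, j in combinations(range(n_vars), 2):
        counts = {}                         # observed two-way contingency table
        for row in data:
            counts[(row[i], row[j])] = counts.get((row[i], row[j]), 0) + 1
        for (a, b), n_ab in counts.items():
            total += n_ab * math.log(cell_prob(i, j, a, b))
    return total

# Toy model: independent fair binary items, so every bivariate cell has p = 1/4.
data = [(0, 0, 1), (1, 0, 1), (0, 1, 0), (1, 1, 1)]
ll = pairwise_loglik(data, lambda i, j, a, b: 0.25)
```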
Asymptotic Likelihood Distribution for Correlated & Constrained Systems
Agarwal, Ujjwal
2016-01-01
This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of them are constrained and correlated.
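The classical result behind such analyses is Wilks' theorem: under the null, the likelihood-ratio statistic is asymptotically chi-square with degrees of freedom equal to the number of restricted parameters. A minimal simulation sketch (a toy unconstrained Gaussian example, not the constrained-and-correlated case studied in the report):

```python
import random
import statistics

def lr_stat(sample):
    """Likelihood-ratio statistic for H0: mu = 0 vs free mu, for
    unit-variance Gaussian data: -2 log Lambda = n * xbar^2,
    which is chi-square with 1 degree of freedom."""
    n = len(sample)
    xbar = sum(sample) / n
    return n * xbar * xbar

random.seed(0)
stats = [lr_stat([random.gauss(0.0, 1.0) for _ in range(50)])
         for _ in range(2000)]
mean_lr = statistics.fmean(stats)   # should sit near the chi-square(1) mean of 1
```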
Du, Zhicheng; Xu, Lin; Zhang, Wangjian; Zhang, Dingmei; Yu, Shicheng; Hao, Yuantao
2017-01-01
Objectives Hand, foot, and mouth disease (HFMD) has caused a substantial burden in China, especially in Guangdong Province. Based on the enhanced surveillance system, we aimed to explore whether the addition of temperature and search engine query data improves the risk prediction of HFMD. Design Ecological study. Setting and participants Information on the confirmed cases of HFMD, climate parameters and search engine query logs was collected. A total of 1.36 million HFMD cases were identified from the surveillance system during 2011–2014. Analyses were conducted at aggregate level and no confidential information was involved. Outcome measures A seasonal autoregressive integrated moving average (ARIMA) model with external variables (ARIMAX) was used to predict the HFMD incidence from 2011 to 2014, taking into account temperature and search engine query data (Baidu Index, BDI). Statistics of goodness-of-fit and precision of prediction were used to compare models (1) based on surveillance data only, and with the addition of (2) temperature, (3) BDI, and (4) both temperature and BDI. Results A high correlation between HFMD incidence and BDI (r=0.794, p < …) … diseases in other settings. PMID:28988169
2015-12-01
The goal of this research is to develop a machine learning framework to predict the spatiotemporal impact : of traffic accidents on the upstream traffic and surrounding region. The main objective of the framework : is, given a road accident, to forec...
Kerkhoff, Andrew D; Wood, Robin; Cobelens, Frank G; Gupta-Wright, Ankur; Bekker, Linda-Gail; Lawn, Stephen D
2015-04-02
Low haemoglobin concentrations may be predictive of incident tuberculosis (TB) and death in HIV-infected patients receiving antiretroviral therapy (ART), but data are limited and inconsistent. We examined these relationships retrospectively in a long-term South African ART cohort with multiple time-updated haemoglobin measurements. Prospectively collected clinical data on patients receiving ART for up to 8 years in a community-based cohort were analysed. Time-updated haemoglobin concentrations, CD4 counts and HIV viral loads were recorded, and TB diagnoses and deaths from all causes were ascertained. Anaemia severity was classified using World Health Organization criteria. TB incidence and mortality rates were calculated and Poisson regression models were used to identify independent predictors of incident TB and mortality, respectively. During a median follow-up of 5.0 years (IQR, 2.5-5.8) of 1,521 patients, 476 cases of incident TB and 192 deaths occurred during 6,459 person-years (PYs) of follow-up. TB incidence rates were strongly associated with time-updated anaemia severity; those without anaemia had a rate of 4.4 (95%CI, 3.8-5.1) cases/100 PYs compared to 10.0 (95%CI, 8.3-12.1), 26.6 (95%CI, 22.5-31.7) and 87.8 (95%CI, 57.0-138.2) cases/100 PYs in those with mild, moderate and severe anaemia, respectively. Similarly, mortality rates in those with no anaemia or mild, moderate and severe time-updated anaemia were 1.1 (95%CI, 0.8-1.5), 3.5 (95%CI, 2.7-4.8), 11.8 (95%CI, 9.5-14.8) and 28.2 (95%CI, 16.5-51.5) cases/100 PYs, respectively. Moderate and severe anaemia (time-updated) during ART were the strongest independent predictors for incident TB (adjusted IRR = 3.8 [95%CI, 3.0-4.8] and 8.2 [95%CI, 5.3-12.7], respectively) and for mortality (adjusted IRR = 6.0 [95%CI, 3.9-9.2] and adjusted IRR = 8.0 [95%CI, 3.9-16.4], respectively). Increasing severity of anaemia was associated with exceptionally high rates of both incident TB and mortality during ART.
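The incidence rates quoted above (cases per 100 person-years) follow from a simple calculation. The sketch below reproduces the overall TB rate from the abstract's totals (476 cases over 6,459 PYs); the log-scale normal approximation for the confidence interval is an assumption for illustration, not necessarily the method the authors used:

```python
import math

def incidence_rate(cases, person_years, per=100.0):
    """Crude incidence rate with a normal-approximation 95% CI on the log
    scale (cases ~ Poisson, so SE of the log rate is 1/sqrt(cases))."""
    rate = per * cases / person_years
    se_log = 1.0 / math.sqrt(cases)
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(1.96 * se_log)
    return rate, (lo, hi)

# Overall TB incidence from the abstract's totals: about 7.4 per 100 PYs.
rate, ci = incidence_rate(476, 6459)
```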
Pei, Zhiyong; Liu, Jielin; Liu, Manjiao; Zhou, Wenchao; Yan, Pengcheng; Wen, Shaojun; Chen, Yubao
2018-03-01
Essential hypertension (EH) has become a major chronic disease around the world. Building a risk-prediction model for EH can help guide interventions on people's lifestyle and dietary habits to decrease the risk of developing EH. In this study, we constructed an EH risk-prediction model considering both environmental and genetic factors with a support vector machine (SVM). The data were collected through an epidemiological investigation questionnaire in a Beijing Chinese Han population. After data cleaning, we finally selected 9 environmental factors and 12 genetic factors to construct the prediction model based on 1200 samples, including 559 essential hypertension patients and 641 controls. Using the radial basis kernel function, predictive accuracies via SVM with only environmental factors and only genetic factors were 72.8% and 54.4%, respectively; after considering both environmental and genetic factors, the accuracy improved to 76.3%. Using the model via SVM with the Laplacian kernel function, the accuracies with only environmental factors and only genetic factors were 76.9% and 57.7%, respectively; after combining environmental and genetic factors, the accuracy improved to 80.1%. The predictive accuracy of the SVM model constructed with the Laplacian kernel function was higher than that with the radial basis kernel function, as were sensitivity and specificity, which were 63.3% and 86.7%, respectively. In conclusion, the model based on SVM with the Laplacian kernel function had better performance in predicting the risk of hypertension, and the SVM model considering both environmental and genetic factors had better performance than the model with environmental or genetic factors only.
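The two kernels compared in the study differ only in the norm inside the exponential: the radial basis kernel uses the squared Euclidean distance, the Laplacian kernel the Manhattan distance. A minimal illustrative sketch (the gamma value is arbitrary, not taken from the study):

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Radial basis (Gaussian) kernel: exp(-gamma * ||x - y||_2^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def laplacian_kernel(x, y, gamma=1.0):
    """Laplacian kernel: exp(-gamma * ||x - y||_1); heavier tails than RBF."""
    return math.exp(-gamma * sum(abs(a - b) for a, b in zip(x, y)))

# At this particular pair of points both kernels happen to give exp(-2).
k_rbf = rbf_kernel([0.0, 0.0], [1.0, 1.0])
k_lap = laplacian_kernel([0.0, 0.0], [1.0, 1.0])
```

Because the Laplacian kernel decays more slowly in each coordinate, it is often less sensitive to outlying feature values, which is one plausible reason for its better performance here.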
Directory of Open Access Journals (Sweden)
Raed Alzghool
2017-01-01
Full Text Available For estimation of the stochastic volatility model (SVM), this paper suggests the quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) methods. The QL approach is quite simple and does not require full knowledge of the likelihood functions of the SVM. The AQL technique is based on the QL method and is used when the covariance matrix Σ is unknown. The AQL approach replaces the true variance–covariance matrix Σ in the QL by a nonparametric kernel estimator of Σ.
dr. Mueller, A.A.
2012-01-01
We consider the prediction of the flow around a square rod as a generic bluff body at low Mach number (below 0.3) and high Reynolds number (above 5000) and the corresponding tonal noise. Instability of such flow is crucial for potential mechanical vibrations and noise production. Due to the presence …
Likelihoods for fixed rank nomination networks.
Hoff, Peter; Fosdick, Bailey; Volfovsky, Alex; Stovel, Katherine
2013-12-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design.
Chen, Y C; Dong, G H; Lin, K C; Lee, Y L
2013-03-01
The aims of our meta-analysis were (i) to quantify the predictability of childhood overweight and obesity on the risk of incident asthma; and (ii) to evaluate the gender difference on this relationship. The selection criteria included prospective cohort paediatric studies which use age- and sex-specific body mass index (BMI) as a measure of childhood overweight and the primary outcome of incident asthma. A total of 1,027 studies were initially identified through online database searches, and finally 6 studies met the inclusion criteria. The combined result of reported relative risk from the 6 included studies revealed that overweight children conferred increased risks of incident asthma as compared with non-overweight children (relative risk, 1.19; 95% confidence interval [CI], 1.03-1.37). The relationship was further elevated for obesity vs. non-obesity (relative risk, 2.02; 95% CI, 1.16-3.50). A dose-responsiveness of elevated BMI on asthma incidence was observed (P for trend, 0.004). Obese boys had a significantly larger effect than obese girls (relative risk, boys: 2.47; 95% CI, 1.57-3.87; girls: 1.25; 95% CI, 0.51-3.03), with significant dose-dependent effect. Proposed mechanisms of gender difference could be through pulmonary mechanics, sleep disordered breathing and leptin. Further research might be needed to better understand the exact mechanism of gender difference on the obesity-asthma relationship. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.
van den Broek, Jeroen J; van Ravesteyn, Nicolien T; Mandelblatt, Jeanne S; Huang, Hui; Ergun, Mehmet Ali; Burnside, Elizabeth S; Xu, Cong; Li, Yisheng; Alagoz, Oguzhan; Lee, Sandra J; Stout, Natasha K; Song, Juhee; Trentham-Dietz, Amy; Plevritis, Sylvia K; Moss, Sue M; de Koning, Harry J
2018-04-01
The UK Age trial compared annual mammography screening of women ages 40 to 49 years with no screening and found a statistically significant breast cancer mortality reduction at the 10-year follow-up but not at the 17-year follow-up. The objective of this study was to compare the observed Age trial results with the Cancer Intervention and Surveillance Modeling Network (CISNET) breast cancer model predicted results. Five established CISNET breast cancer models used data on population demographics, screening attendance, and mammography performance from the Age trial together with extant natural history parameters to project breast cancer incidence and mortality in the control and intervention arm of the trial. The models closely reproduced the effect of annual screening from ages 40 to 49 years on breast cancer incidence. Restricted to breast cancer deaths originating from cancers diagnosed during the intervention phase, the models estimated an average 15% (range across models, 13% to 17%) breast cancer mortality reduction at the 10-year follow-up compared with 25% (95% CI, 3% to 42%) observed in the trial. At the 17-year follow-up, the models predicted 13% (range, 10% to 17%) reduction in breast cancer mortality compared with the non-significant 12% (95% CI, -4% to 26%) in the trial. The models underestimated the effect of screening on breast cancer mortality at the 10-year follow-up. Overall, the models captured the observed long-term effect of screening from age 40 to 49 years on breast cancer incidence and mortality in the UK Age trial, suggesting that the model structures, input parameters, and assumptions about breast cancer natural history are reasonable for estimating the impact of screening on mortality in this age group.
Gus, M; Cichelero, F Tremea; Moreira, C Medaglia; Escobar, G Fortes; Moreira, L Beltrami; Wiehe, M; Fuchs, S Costa; Fuchs, F Danni
2009-01-01
Central obesity is a key component in the definition of the metabolic syndrome, but the cut-off values proposed to define abnormal values vary among different guidelines and are mostly based on cross-sectional studies. In this study, we identify the best cut-off values for waist circumference (WC) associated with the incidence of hypertension. Participants for this prospectively planned cohort study were 589 individuals who were free of hypertension and selected at random from the community of Porto Alegre, Brazil. Hypertension was defined by a blood pressure measurement ≥140/90 mmHg or the use of blood pressure lowering drugs. A logistic regression model established the association between WC and the incidence of hypertension. A receiver operating characteristics (ROC) curve analysis was used to select the best WC cut-off point to predict the incidence of hypertension. During a mean follow-up of 5.5 ± 0.9 years, 127 subjects developed hypertension. The hazard ratio for the development of hypertension, adjusted for age, baseline systolic blood pressure, alcohol consumption, gender and education level, was 1.02 (95% CI; 1.00-1.04; P=0.02) for WC. The best cut-off WC values to predict hypertension were 87 cm in men and 80 cm in women, with an area under the curve of 0.56 (95% CI; 0.47-0.64; P=0.17) and 0.70 (95% CI; 0.63-0.77; P < …). WC predicts hypertension in individuals living in communities in Brazil, and this risk begins at lower values of WC than those recommended by some guidelines.
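Selecting the "best cut-off" from an ROC analysis is commonly done by maximizing Youden's J (sensitivity + specificity − 1). The abstract does not state which criterion the authors used, so the sketch below is one plausible reading, run on hypothetical waist-circumference data rather than the study's:

```python
def youden_cutoff(values, outcomes):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1,
    a common way to choose an 'optimal' threshold from an ROC analysis."""
    pos = sum(outcomes)                 # number of events
    neg = len(outcomes) - pos           # number of non-events
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v >= cut and o == 1)
        tn = sum(1 for v, o in zip(values, outcomes) if v < cut and o == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical waist circumferences (cm) and incident-hypertension flags.
wc = [70, 75, 78, 80, 82, 85, 88, 92, 95, 100]
htn = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
cut, j = youden_cutoff(wc, htn)   # cut = 82 on this toy data
```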
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process whose density is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models.
Liu, F; Zhu, N; Qiu, L; Wang, J J; Wang, W H
2016-08-10
To apply an auto-regressive integrated moving average product seasonal model in predicting the number of hand, foot and mouth disease cases in Shaanxi province. The trend of hand, foot and mouth disease in Shaanxi province was analyzed and tested using R software, between January 2009 and June 2015. A multiple seasonal ARIMA model was then fitted to the time series to predict the number of hand, foot and mouth disease cases in 2016 and 2017. A seasonal effect was seen in hand, foot and mouth disease in Shaanxi province. A multiple seasonal ARIMA (2,1,0)×(1,1,0)12 model was established, with the equation (1 − B)(1 − B^12) Ln(X_t) = (1 − 1.000B) / [(1 − 0.532B − 0.363B^2)(1 − 0.644B^12 − 0.454B^24)] ε_t. The mean absolute error and the relative error were 531.535 and 0.114, respectively, when compared to the simulated number of patients from June to December in 2015. Results under the prediction of the multiple seasonal ARIMA model showed that the numbers of patients in both 2016 and 2017 were similar to that of 2015 in Shaanxi province. The multiple seasonal ARIMA (2,1,0)×(1,1,0)12 model could be used to successfully predict the incidence of hand, foot and mouth disease in Shaanxi province.
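The left-hand side of the fitted equation is the differencing operator (1 − B)(1 − B^12) applied to the log series, which removes trend and annual seasonality before the AR terms are estimated. A minimal sketch of that operator on synthetic monthly data (the series below is made up, for illustration only):

```python
import math

def seasonal_difference(series, d=1, D=1, s=12):
    """Apply (1 - B)^d (1 - B^s)^D to a series, as in a multiplicative
    seasonal ARIMA model: first seasonal, then ordinary differencing."""
    x = list(series)
    for _ in range(D):                                   # (1 - B^s)^D
        x = [x[t] - x[t - s] for t in range(s, len(x))]
    for _ in range(d):                                   # (1 - B)^d
        x = [x[t] - x[t - 1] for t in range(1, len(x))]
    return x

# Synthetic 3-year log series with a 12-month seasonal pattern plus trend.
logs = [math.log(100 + 10 * (t % 12) + t) for t in range(36)]
diffed = seasonal_difference(logs)   # 36 - 12 - 1 = 23 values remain
```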
Brodaty, Henry; Aerts, Liesbeth; Crawford, John D; Heffernan, Megan; Kochan, Nicole A; Reppermund, Simone; Kang, Kristan; Maston, Kate; Draper, Brian; Trollor, Julian N; Sachdev, Perminder S
2017-05-01
Mild cognitive impairment (MCI) is considered an intermediate stage between normal aging and dementia. It is diagnosed in the presence of subjective cognitive decline and objective cognitive impairment without significant functional impairment, although there are no standard operationalizations for each of these criteria. The objective of this study is to determine which operationalization of the MCI criteria is most accurate at predicting dementia. Six-year longitudinal study, part of the Sydney Memory and Ageing Study. Community-based. 873 community-dwelling dementia-free adults between 70 and 90 years of age. Persons from a non-English speaking background were excluded. Seven different operationalizations for subjective cognitive decline and eight measures of objective cognitive impairment (resulting in 56 different MCI operational algorithms) were applied. The accuracy of each algorithm to predict progression to dementia over 6 years was examined for 618 individuals. Baseline MCI prevalence varied between 0.4% and 30.2% and dementia conversion between 15.9% and 61.9% across different algorithms. The predictive accuracy for progression to dementia was poor. The highest accuracy was achieved based on objective cognitive impairment alone. Inclusion of subjective cognitive decline or mild functional impairment did not improve dementia prediction accuracy. Not MCI, but objective cognitive impairment alone, is the best predictor for progression to dementia in a community sample. Nevertheless, clinical assessment procedures need to be refined to improve the identification of pre-dementia individuals. Copyright © 2016 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
Maximum likelihood estimation of fractionally cointegrated systems
DEFF Research Database (Denmark)
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, and the matrix of the speed of adjustment to the equilibrium.
Composite likelihood estimation of demographic parameters
Directory of Open Access Journals (Sweden)
Garrigan Daniel
2009-11-01
Full Text Available Abstract Background Most existing likelihood-based methods for fitting historical demographic models to DNA sequence polymorphism data do not scale feasibly up to the level of whole-genome data sets. Computational economies can be achieved by incorporating two forms of pseudo-likelihood: composite and approximate likelihood methods. Composite likelihood enables scaling up to large data sets because it takes the product of marginal likelihoods as an estimator of the likelihood of the complete data set. This approach is especially useful when a large number of genomic regions constitutes the data set. Additionally, approximate likelihood methods can reduce the dimensionality of the data by summarizing the information in the original data by either a sufficient statistic, or a set of statistics. Both composite and approximate likelihood methods hold promise for analyzing large data sets or for use in situations where the underlying demographic model is complex and has many parameters. This paper considers a simple demographic model of allopatric divergence between two populations, in which one of the populations is hypothesized to have experienced a founder event, or population bottleneck. A large resequencing data set from human populations is summarized by the joint frequency spectrum, which is a matrix of the genomic frequency spectrum of derived base frequencies in two populations. A Bayesian Metropolis-coupled Markov chain Monte Carlo (MCMCMC) method for parameter estimation is developed that uses both composite and approximate likelihood methods and is applied to the three different pairwise combinations of the human population resequence data. The accuracy of the method is also tested on data sets sampled from a simulated population model with known parameters. Results The Bayesian MCMCMC method also estimates the ratio of effective population size for the X chromosome versus that of the autosomes. The method is shown to estimate, with reasonable …
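The composite-likelihood idea described above, multiplying marginal likelihoods across genomic regions as if they were independent, can be made concrete with a toy binomial model. The counts below are hypothetical, and the paper's actual demographic model is far richer than this sketch:

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood of k derived alleles out of n chromosomes."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

def composite_loglik(region_counts, p):
    """Composite log-likelihood: sum the marginal log-likelihoods of each
    genomic region, treating regions as independent."""
    return sum(binom_loglik(k, n, p) for k, n in region_counts)

# Hypothetical derived-allele counts (k out of n) for three regions;
# the composite MLE over a coarse grid recovers the pooled frequency 15/30.
regions = [(3, 10), (7, 10), (5, 10)]
best_p = max((p / 100 for p in range(1, 100)),
             key=lambda p: composite_loglik(regions, p))
```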
Nkanga, Mireille Solange Nganga; Longo-Mbenza, Benjamin; Adeniyi, Oladele Vincent; Ngwidiwo, Jacques Bikaula; Katawandja, Antoine Lufimbo; Kazadi, Paul Roger Beia; Nzonzila, Alain Nganga
2017-08-23
The global burden of hematologic malignancy (HM) is rapidly rising with aging, exposure to polluted environments, and global and local climate variability all being well-established conditions of oxidative stress. However, there is currently no information on the extent and predictors of HM at Kinshasa University Clinics (KUC), DR Congo (DRC). This study evaluated the impact of bio-clinical factors, exposure to polluted environments, and interactions between global climate changes (El Nino and La Nina) and local climate (dry and rainy seasons) on the incidence of HM. This hospital-based prospective cohort study was conducted at Kinshasa University Clinics in DR Congo. A total of 105 black African adult patients with anaemia between 2009 and 2016 were included. HM was confirmed by morphological typing according to the French-American-British (FAB) Classification System. Gender, age, exposure to traffic pollution and garages/stations, global climate variability (El Nino and La Nina), and local climate (dry and rainy seasons) were potential independent variables to predict incident HM using Cox regression analysis and Kaplan Meier curves. Out of the total 105 patients, 63 experienced incident HM, with an incidence rate of 60%. After adjusting for gender, HIV/AIDS, and other bio-clinical factors, the most significant independent predictors of HM were age ≥ 55 years (HR = 2.4; 95% CI 1.4-4.3; P = 0.003), exposure to pollution and garages/stations (HR = 4.9; 95% CI 2-12.1; P < …) … pollution, combined local dry season + La Nina and combined local dry season + El Nino were the most significant predictors of incident hematologic malignancy. These findings highlight the importance of aging, pollution, the dry season, El Nino and La Nina as related to global warming as determinants of hematologic malignancies among African patients from Kinshasa, DR Congo. Cancer registries in DRC and other African countries will provide a more robust database for future research on …
The likelihood for supernova neutrino analyses
Ianni, A; Strumia, A; Torres, F R; Villante, F L; Vissani, F
2009-01-01
We derive the event-by-event likelihood that allows one to extract the complete information contained in the energy, time and direction of supernova neutrinos, and specify it in the case of SN1987A data. We resolve discrepancies in the previous literature, numerically relevant already in the concrete case of SN1987A data.
Likelihood analysis of the I(2) model
DEFF Research Database (Denmark)
Johansen, Søren
1997-01-01
The I(2) model is defined as a submodel of the general vector autoregressive model, by two reduced rank conditions. The model describes stochastic processes with stationary second difference. A parametrization is suggested which makes likelihood inference feasible. Consistency of the maximum likelihood estimator …
Maintaining symmetry of simulated likelihood functions
DEFF Research Database (Denmark)
Andersen, Laura Mørch
This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby …
Maximum likelihood estimation of exponential distribution under ...
African Journals Online (AJOL)
Maximum likelihood estimation of exponential distribution under type-ii censoring from imprecise data. ... Journal of Fundamental and Applied Sciences ... This paper deals with the estimation of exponential mean parameter under Type-II censoring scheme when the lifetime observations are fuzzy and are assumed to be ...
Efficient Bit-to-Symbol Likelihood Mappings
Moision, Bruce E.; Nakashima, Michael A.
2010-01-01
This innovation is an efficient algorithm designed to perform bit-to-symbol and symbol-to-bit likelihood mappings that represent a significant portion of the complexity of an error-correction code decoder for high-order constellations. Recent implementation of the algorithm in hardware has yielded an 8% reduction in overall area relative to the prior design.
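A symbol-to-bit likelihood mapping of the kind such decoders compute can be sketched for a toy Gray-mapped 4-PAM constellation. The constellation, noise model, and exact log-sum formulation below are illustrative assumptions, not the hardware design described above:

```python
import math

# Gray-mapped 4-PAM: two bits per symbol, mapped to real amplitudes.
CONSTELLATION = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}

def bit_llrs(r, noise_var=1.0):
    """Symbol-to-bit log-likelihood ratios via the exact log-sum formula:
    LLR_i = log( sum_{s: bit_i=0} p(r|s) / sum_{s: bit_i=1} p(r|s) ),
    with Gaussian channel likelihoods p(r|s) ∝ exp(-(r - s)^2 / (2σ²))."""
    llrs = []
    for i in range(2):
        num = sum(math.exp(-(r - a) ** 2 / (2 * noise_var))
                  for bits, a in CONSTELLATION.items() if bits[i] == 0)
        den = sum(math.exp(-(r - a) ** 2 / (2 * noise_var))
                  for bits, a in CONSTELLATION.items() if bits[i] == 1)
        llrs.append(math.log(num / den))
    return llrs

llr0, llr1 = bit_llrs(-2.5)   # received value near the (0, 0) symbol
```

Positive LLRs favor bit value 0, so a received value near −3 yields positive LLRs for both bits; hardware implementations typically approximate the log-sums (max-log) to cut complexity.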
Predicting Teacher Likelihood to Use School Gardens: A Case Study
Kincy, Natalie; Fuhrman, Nicholas E.; Navarro, Maria; Knauft, David
2016-01-01
A quantitative survey, built around the theory of planned behavior, was used to investigate elementary teachers' attitudes, school norms, perceived behavioral control, and intent in both current and ideal teaching situations toward using gardens in their curriculum. With positive school norms and teachers who garden in their personal time, 77% of…
Bodapati, Rohan K.; Kizer, Jorge R.; Kop, Willem J.; Kamel, Hooman; Stein, Phyllis K.
2018-01-01
Background Heart rate variability (HRV) characterizes cardiac autonomic functioning. The association of HRV with stroke is uncertain. We examined whether 24-hour HRV added predictive value to the Cardiovascular Health Study clinical stroke risk score (CHS-SCORE), previously developed at the baseline examination. Methods and Results N=884 stroke-free CHS participants (age 75.3 ± 4.6), with 24-hour Holters adequate for HRV analysis at the 1994–1995 examination, had 68 strokes over ≤8 year follow-up (median 7.3 [interquartile range 7.1–7.6] years). The value of adding HRV to the CHS-SCORE was assessed with stepwise Cox regression analysis. The CHS-SCORE predicted incident stroke (HR=1.06 per unit increment, P=0.005). Two HRV parameters, decreased coefficient of variance of NN intervals (CV%, P=0.031) and decreased power law slope (SLOPE, P=0.033) also entered the model, but these did not significantly improve the c-statistic (P=0.47). In a secondary analysis, dichotomization of CV% (LOWCV% ≤12.8%) was found to maximally stratify higher-risk participants after adjustment for CHS-SCORE. Similarly, dichotomizing SLOPE (LOWSLOPE <− 1.4) maximally stratified higher-risk participants. When these HRV categories were combined (eg, HIGHCV% with HIGHSLOPE), the c-statistic for the model with the CHS-SCORE and combined HRV categories was 0.68, significantly higher than 0.61 for the CHS-SCORE alone (P=0.02). Conclusions In this sample of older adults, 2 HRV parameters, CV% and power law slope, emerged as significantly associated with incident stroke when added to a validated clinical risk score. After each parameter was dichotomized based on its optimal cut point in this sample, their composite significantly improved prediction of incident stroke during ≤8-year follow-up. These findings will require validation in separate, larger cohorts. PMID:28396041
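The CV% measure above is the standard deviation of the NN (normal-to-normal) intervals expressed as a percentage of their mean. A sketch on hypothetical NN data (the population standard deviation is used here as an assumption; 24-hour analyses may define it slightly differently):

```python
import statistics

def cv_percent(nn_ms):
    """Coefficient of variance of NN intervals: 100 * SDNN / mean NN."""
    return 100.0 * statistics.pstdev(nn_ms) / statistics.fmean(nn_ms)

nn = [800, 810, 790, 805, 795, 820, 780]  # hypothetical NN intervals (ms)
cv = cv_percent(nn)                       # well below the 12.8% LOWCV% cut point
```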
Phylogenetic analysis using parsimony and likelihood methods.
Yang, Z
1996-02-01
The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were
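To make the parsimony criterion discussed above concrete, the following sketch implements Fitch's small-parsimony algorithm, which counts the minimum number of substitutions a fixed tree requires for one site. The tree shape and site pattern are illustrative assumptions of ours, not data from the paper.

```python
# Fitch's small-parsimony algorithm on a fixed rooted binary tree.
# A tree is a nested tuple; leaves are single-character states.
def fitch(tree):
    """Return (state set, minimum substitution count) for one site."""
    if isinstance(tree, str):          # leaf: observed nucleotide
        return {tree}, 0
    left, right = tree
    lset, lcost = fitch(left)
    rset, rcost = fitch(right)
    inter = lset & rset
    if inter:                          # non-empty intersection: no extra change
        return inter, lcost + rcost
    return lset | rset, lcost + rcost + 1  # union: one substitution somewhere

# Site pattern for taxa ((A,B),(C,D)): A='A', B='G', C='A', D='A'
states, changes = fitch((('A', 'G'), ('A', 'A')))
print(changes)  # -> 1
```

For this pattern a single substitution suffices, which is what the count reflects; the likelihood method discussed in the abstract would instead integrate over branch lengths and a substitution model.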
Fear and perceived likelihood of victimization in the traditional and cyber settings
Maddison, Jessica; Jeske, Debora
2014-01-01
This study considers the influence of perceived likelihood, demographics (gender and education) and personality on fear of victimization and cyber-victimization using a survey design (N=159). The results suggest that perceived likelihood of victimization predicts fear of victimization in traditional contexts. Women tend to be more fearful of victimization in traditional and cyber contexts, confirming previous research. No group differences emerged in relation to education. Self-esteem and sel...
A systematic error in maximum likelihood fitting
International Nuclear Information System (INIS)
Bergmann, U.C.; Riisager, K.
2002-01-01
The maximum likelihood method is normally regarded as the safest method for parameter estimation. We show that this method will give a bias in the often-occurring situation where a spectrum of counts is fitted with a theoretical function, unless the fit function is very simple. The bias can become significant when the spectrum contains fewer than about 100 counts or when the fit interval is too short.
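A minimal numerical sketch of the regime this abstract warns about: fitting an exponential shape to a binned count spectrum by maximizing the Poisson likelihood, once with ample statistics and once with only a handful of counts. The model, bin layout, sampler, and grid search are illustrative assumptions of ours, not the authors' code.

```python
import math, random

rng = random.Random(42)

def rpois(lam):
    """Knuth's Poisson sampler (adequate for the modest rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def fit_tau(counts, xs, taus):
    """Maximize the Poisson log-likelihood over a grid of decay constants.
    The amplitude is profiled out analytically: for fixed tau, the ML
    amplitude makes the predicted total equal the observed total."""
    best, best_ll = None, -math.inf
    total = sum(counts)
    for tau in taus:
        shape = [math.exp(-x / tau) for x in xs]
        amp = total / sum(shape)
        ll = sum(n * math.log(amp * s) - amp * s for n, s in zip(counts, shape))
        if ll > best_ll:
            best, best_ll = tau, ll
    return best

xs = [i + 0.5 for i in range(10)]            # bin centres
taus = [1.0 + 0.01 * i for i in range(500)]  # grid: tau in [1.0, 6.0)
true_tau = 3.0

# High-statistics spectrum: the ML estimate lands close to the truth.
counts_hi = [rpois(500 * math.exp(-x / true_tau)) for x in xs]
print(fit_tau(counts_hi, xs, taus))

# Low-statistics spectrum (roughly ten counts in total): the regime in
# which the abstract warns that the ML fit acquires a noticeable bias.
counts_lo = [rpois(3 * math.exp(-x / true_tau)) for x in xs]
print(fit_tau(counts_lo, xs, taus))
```

Repeating the low-count fit over many pseudo-experiments and averaging the estimates is the natural way to expose the bias the paper quantifies.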
Transfer Entropy as a Log-Likelihood Ratio
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
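The correspondence the paper establishes can be sketched for a pair of binary series: the plug-in transfer entropy is computed from empirical conditional distributions, and 2n times it is the log-likelihood-ratio statistic for the null of no transfer. The coupled process below is an illustrative assumption of ours, not an example from the paper.

```python
import math, random
from collections import Counter

rng = random.Random(7)

# Coupled pair of binary series: X copies Y's previous value with
# probability 0.8, otherwise flips a fair coin.
N = 20000
Y = [rng.randint(0, 1) for _ in range(N)]
X = [0]
for t in range(1, N):
    X.append(Y[t - 1] if rng.random() < 0.8 else rng.randint(0, 1))

# Plug-in transfer entropy (in nats):
#   T(Y->X) = sum p(x', x, y) * log[ p(x'|x, y) / p(x'|x) ]
triples = Counter(zip(X[1:], X[:-1], Y[:-1]))
pairs_xy = Counter(zip(X[:-1], Y[:-1]))
pairs_xx = Counter(zip(X[1:], X[:-1]))
singles_x = Counter(X[:-1])
n = N - 1
te = 0.0
for (x1, x0, y0), c in triples.items():
    p_joint = c / n
    p_cond_full = c / pairs_xy[(x0, y0)]
    p_cond_hist = pairs_xx[(x1, x0)] / singles_x[x0]
    te += p_joint * math.log(p_cond_full / p_cond_hist)

# The paper's point: 2*n*te is the log-likelihood-ratio statistic for the
# null hypothesis of zero transfer entropy (asymptotically chi-squared).
print(te, 2 * n * te)
```

For this strongly coupled process the true transfer entropy is about 0.37 nats, so the statistic is far out in the tail of the null chi-squared distribution, as expected.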
Jonasson, Grethe; Sundh, Valter; Ahlqwist, Margareta; Hakeberg, Magnus; Björkelund, Cecilia; Lissner, Lauren
2011-10-01
Bone structure is the key to the understanding of fracture risk. The hypothesis tested in this prospective study is that dense mandibular trabeculation predicts low fracture risk, whereas sparse trabeculation is predictive of high fracture risk. Out of 731 women from the Prospective Population Study of Women in Gothenburg with dental examinations at baseline 1968, 222 had their first fracture in the follow-up period until 2006. Mandibular trabeculation was defined as dense, mixed dense plus sparse, and sparse based on panoramic radiographs from 1968 and/or 1980. Time to fracture was ascertained and used as the dependent variable in three Cox proportional hazards regression analyses. The first analysis covered 12 years of follow-up with self-reported endpoints; the second covered 26 years of follow-up with hospital verified endpoints; and the third combined the two follow-up periods, totaling 38 years. Mandibular trabeculation was the main independent variable predicting incident fractures, with age, physical activity, alcohol consumption and body mass index as covariates. The Kaplan-Meier curve indicated a graded association between trabecular density and fracture risk. During the whole period covered, the hazard ratio of future fracture for sparse trabeculation compared to mixed trabeculation was 2.9 (95% CI: 2.2-3.8, pfracture risk. Our findings imply that dentists, using ordinary dental radiographs, can identify women at high risk for future fractures at 38-54 years of age, often long before the first fracture occurs. Copyright © 2011 Elsevier Inc. All rights reserved.
Park, Hye Yin; Choi, Hyung Jin; Hong, Yun-Chul
2015-08-01
Contribution of genetic predisposition to risk prediction of type 2 diabetes mellitus (T2DM) was investigated using a prospective study in middle-aged adults in Korea. From a community cohort of 6,257 subjects with 8 years' follow-up, genetic predisposition scores with subsets of 3, 18, and 36 selected single nucleotide polymorphisms (SNPs) (genetic predisposition score; GPS-3, GPS-18, GPS-36) in association with T2DM were determined, and their effect was evaluated using risk prediction models. Rs5215, rs10811661, and rs2237892 were in significant association with T2DM, and hazard ratios per risk allele score increase were 1.11 (95% confidence intervals: 1.06-1.17), 1.09 (1.01-1.05), and 1.04 (1.02-1.07) with GPS-3, GPS-18, and GPS-36, respectively. Changes in AUC upon addition of GPS were significant in simple and clinical models, but the significance disappeared in full clinical models with glycated hemoglobin (HbA1c). For the net reclassification index (NRI), significant improvements observed in simple (range 5.1%-8.6%) and clinical (3.1%-4.4%) models were no longer significant in the full models. The influence of genetic predisposition on the prediction of T2DM incidence was no longer significant when HbA1c was added to the models, confirming HbA1c as a strong predictor of T2DM risk. Also, the significant SNPs verified in our subjects warrant further research, e.g., gene-environment interaction and epigenetic studies.
Yeboah, Joseph; Erbel, Raimund; Delaney, Joseph Chris; Nance, Robin; Guo, Mengye; Bertoni, Alain G; Budoff, Matthew; Moebus, Susanne; Jöckel, Karl-Heinz; Burke, Gregory L; Wong, Nathan D; Lehmann, Nils; Herrington, David M; Möhlenkamp, Stefan; Greenland, Philip
2014-10-01
We develop a new diabetes CHD risk estimator using traditional risk factors plus coronary artery calcium (CAC), ankle-brachial index (ABI), high sensitivity C-reactive protein, family history of CHD, and carotid intima-media thickness and compared it with United Kingdom Prospective Diabetes study (UKPDS), Framingham risk and the NCEP/ATP III risk scores in type 2 diabetes mellitus (T2DM). We combined data from T2DM without clinical CVD in the Multi-Ethnic Study of Atherosclerosis (MESA) and the Heinz Nixdorf Recall Study (N = 1343). After a mean follow-up of 8.5 years, 85 (6.3%) participants had incident CHD. Among the novel risk markers, CAC best predicted CHD independent of the FRS [hazard ratio: HR (95% CI): log (CAC +25):1.69 (1.45-1.97), p 25 and ≤125:2.29 (0.87-5.95), >125 and ≤400: 3.87 (1.57-9.57), >400: 5.97 (2.57-13.84), respectively). The MESA-HNR diabetes CHD risk score has better accuracy for the main outcome versus the FRS or UKPDS [area under curve (AUC) of 0.76 vs. 0.70 and 0.69, respectively; all p III guidelines, the MESA-HNR score has an NRI of 0.74 for the main outcome. This new CHD risk estimator has better discriminative ability for incident CHD than the FRS, UKPDS, and the ATP III/NCEP recommendations in a multi-ethnic cohort with T2DM. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Abdollahi, Fatemeh; Zarghami, Mehran; Sazlina, Shariff-Ghazali; Zain, Azhar Md; Mohammad, Asghari Jafarabadi; Lye, Munn-Sann
2016-10-01
Post-partum depression (PPD) is the most prevalent mental problem associated with childbirth. The purpose of the present study was to determine the incidence of early PPD and possible relevant risk factors among women attending primary health centers in Mazandaran province, Iran for the first time. A longitudinal cohort study was conducted among 2279 eligible women during weeks 32-42 of pregnancy to determine bio-psycho-socio-cultural risk factors of depression at 2 weeks post-partum using the Iranian version of the Edinburgh Postnatal Depression Scale (EPDS). Univariate and hierarchical multiple logistic regression models were used for data analysis. Among 1,739 mothers whose EPDS scores were ≤ 12 during weeks 32-42 of gestation and at the follow-up study, the cumulative incidence rate of depression was 6.9% (120/1,739) at 2 weeks post-partum. In the multivariate model the factor that predicted depression symptomatology at 2 weeks post-partum was having psychiatric distress in pregnancy based on the General Health Questionnaire (GHQ) (OR = 1.06, (95% CI: 1.04-1.09), p = 0.001). The risk of PPD was also lower in those with sufficient parenting skills (OR = 0.78 (95% CI: 0.69-0.88), p = 0.001), increased marital satisfaction (OR = 0.94 (95% CI: 0.9-0.99), p = 0.03), increased frequency of practicing rituals (OR = 0.94 (95% CI: 0.89-0.99), p = 0.004) and in those whose husbands had better education (OR = 0.03 (95% CI: 0.88-0.99), p = 0.04). The findings indicated that a combination of demographic, sociological, psychological and cultural risk factors can make mothers vulnerable to PPD.
Schousboe, John T; Vo, Tien; Taylor, Brent C; Cawthon, Peggy M; Schwartz, Ann V; Bauer, Douglas C; Orwoll, Eric S; Lane, Nancy E; Barrett-Connor, Elizabeth; Ensrud, Kristine E
2016-03-01
Trabecular bone score (TBS) has been shown to predict major osteoporotic (clinical vertebral, hip, humerus, and wrist) and hip fractures in postmenopausal women and older men, but the association of TBS with these incident fractures in men independent of prevalent radiographic vertebral fracture is unknown. TBS was estimated on anteroposterior (AP) spine dual-energy X-ray absorptiometry (DXA) scans obtained at the baseline visit for 5979 men aged ≥65 years enrolled in the Osteoporotic Fractures in Men (MrOS) Study and its association with incident major osteoporotic and hip fractures estimated with proportional hazards models. Model discrimination was tested with Harrell's C-statistic and with a categorical net reclassification improvement index, using 10-year risk cutpoints of 20% for major osteoporotic and 3% for hip fractures. For each standard deviation decrease in TBS, there were hazard ratios of 1.27 (95% confidence interval [CI] 1.17 to 1.39) for major osteoporotic fracture, and 1.20 (95% CI 1.05 to 1.39) for hip fracture, adjusted for FRAX with bone mineral density (BMD) 10-year fracture risks and prevalent radiographic vertebral fracture. In the same model, those with prevalent radiographic vertebral fracture compared with those without prevalent radiographic vertebral fracture had hazard ratios of 1.92 (95% CI 1.49 to 2.48) for major osteoporotic fracture and 1.86 (95% CI 1.26 to 2.74) for hip fracture. There were improvements of 3.3%, 5.2%, and 6.2%, respectively, of classification of major osteoporotic fracture cases when TBS, prevalent radiographic vertebral fracture status, or both were added to FRAX with BMD and age, with minimal loss of correct classification of non-cases. Neither TBS nor prevalent radiographic vertebral fracture improved discrimination of hip fracture cases or non-cases. In conclusion, TBS and prevalent radiographic vertebral fracture are associated with incident major osteoporotic fractures in older men independent of each other
Dimension-Independent Likelihood-Informed MCMC
Cui, Tiangang
2015-01-07
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters, which in principle can be described as functions. By exploiting low-dimensional structure in the change from prior to posterior distributions, we introduce a suite of MCMC samplers that can adapt to the complex structure of the posterior distribution, yet are well-defined on function space. Posterior sampling in nonlinear inverse problems arising from various partial differential equations and also a stochastic differential equation are used to demonstrate the efficiency of these dimension-independent likelihood-informed samplers.
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
LIKEDM: Likelihood calculator of dark matter detection
Huang, Xiaoyuan; Tsai, Yue-Lin Sming; Yuan, Qiang
2017-04-01
With the large progress in searches for dark matter (DM) particles with indirect and direct methods, we develop a numerical tool that enables fast calculations of the likelihoods of specified DM particle models given a number of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), γ-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool - LIKEDM, likelihood calculator for dark matter detection - is to bridge the gap between a particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi γ-ray data, as well as the DM velocity distribution and the nuclear form factor, have been dealt with in the code. We release the first version (v1.0) focusing on the constraints from indirect detection of DM with charged cosmic and gamma rays. Direct detection will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.
Corporate brand extensions based on the purchase likelihood: governance implications
Directory of Open Access Journals (Sweden)
Spyridon Goumas
2018-03-01
Full Text Available This paper examines the purchase likelihood of hypothetical service brand extensions from product companies focusing on consumer electronics, based on sector categorization and perceptions of fit between the existing product category and the image of the company. Prior research has recognized that the level of brand knowledge eases the transference of associations and affect to the new products. Similarity to the existing products of the parent company and perceived image also influence the success of brand extensions. However, sector categorization may interfere with this relationship. The purpose of this study is to examine Greek consumers' attitudes towards hypothetical brand extensions, and how these are affected by consumers' existing knowledge about the brand, sector categorization, and perceptions of image and category fit of cross-sector extensions. This aim is examined in the context of technological categories, where lesser-known companies showed significant purchase likelihood and, contrary to the existing literature, service companies did not perform as positively as expected. Additional insights to the existing literature about sector categorization are provided. The effect of both image and category fit is also examined and predictions regarding the effect of each are made.
Simulation-based marginal likelihood for cluster strong lensing cosmology
Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.
2018-01-01
Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.
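In a far simpler setting than the cluster-lensing application above, the marginal likelihood at the heart of a Bayes factor can be estimated by simulation as the prior-predictive average of the likelihood. The one-dimensional Gaussian toy below is an illustrative assumption of ours, chosen so the Monte Carlo estimate can be checked against the exact answer.

```python
import math, random

rng = random.Random(0)

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Toy model: datum y ~ N(theta, 1), prior theta ~ N(0, 1).
# Marginal likelihood Z = integral p(y|theta) p(theta) dtheta = N(y; 0, sqrt(2)).
y = 1.0
samples = 200_000

# Simulation-based estimate: average the likelihood over prior draws.
z_mc = sum(gauss_pdf(y, rng.gauss(0.0, 1.0), 1.0) for _ in range(samples)) / samples
z_exact = gauss_pdf(y, 0.0, math.sqrt(2.0))
print(z_mc, z_exact)
```

The ratio of two such estimates under competing models is the Bayes factor; the paper's contribution is constructing a usable summary-statistic likelihood when, unlike here, the forward model requires full simulations.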
Composite likelihood and two-stage estimation in family studies
DEFF Research Database (Denmark)
Andersen, Elisabeth Anne Wreford
2002-01-01
Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs
Dishonestly increasing the likelihood of winning
Directory of Open Access Journals (Sweden)
Shaul Shalvi
2012-05-01
Full Text Available People not only seek to avoid losses or secure gains; they also attempt to create opportunities for obtaining positive outcomes. When distributing money between gambles with equal probabilities, people often invest in turning negative gambles into positive ones, even at a cost of reduced expected value. Results of an experiment revealed that (1) the preference to turn a negative outcome into a positive outcome exists when people's ability to do so depends on their performance levels (rather than merely on their choice), (2) this preference is amplified when the likelihood to turn negative into positive is high rather than low, and (3) this preference is attenuated when people can lie about their performance levels, allowing them to turn negative into positive not by performing better but rather by lying about how well they performed.
Subtracting and Fitting Histograms using Profile Likelihood
D'Almeida, F M L
2008-01-01
It is known that many interesting signals expected at LHC are of unknown shape and strongly contaminated by background events. These signals will be difficult to detect during the first years of LHC operation due to the initial low luminosity. In this work, one presents a method of subtracting histograms based on the profile likelihood function when the background is previously estimated by Monte Carlo events and one has low statistics. Estimators for the signal in each bin of the histogram difference are calculated, as well as limits for the signals at 68.3% Confidence Level, in a low-statistics case with an exponential background and a Gaussian signal. The method can also be used to fit histograms when the signal shape is known. Our results show good performance and avoid the problem of negative values when subtracting histograms.
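A one-bin sketch of the profile-likelihood construction described above, assuming the background is constrained by a Monte Carlo sample tau times larger than the data exposure (all numbers below are illustrative assumptions of ours, not the authors'): the background yield is profiled out on a grid, and an approximate 68.3% interval for the signal is read off where -2 log L rises by 1 above its minimum.

```python
import math

def neg2_profile_loglik(s, n, m, tau, b_grid):
    """-2 log of the Poisson likelihood profiled over the background b:
    L(s, b) = Pois(n; s + b) * Pois(m; tau * b), constant terms dropped."""
    best = math.inf
    for b in b_grid:
        mu, nu = s + b, tau * b
        if mu <= 0 or nu <= 0:
            continue
        ll = n * math.log(mu) - mu + m * math.log(nu) - nu
        best = min(best, -2.0 * ll)
    return best

n, m, tau = 25, 40, 4.0                      # data counts, MC counts, MC/data scale
b_grid = [0.05 * i for i in range(1, 801)]   # background grid: b in (0, 40]
s_grid = [0.25 * i for i in range(0, 101)]   # signal grid: s in [0, 25]
curve = [(s, neg2_profile_loglik(s, n, m, tau, b_grid)) for s in s_grid]
s_hat, min_val = min(curve, key=lambda p: p[1])
# Approximate 68.3% interval: points with -2 log L within 1 of the minimum.
interval = [s for s, v in curve if v - min_val <= 1.0]
print(s_hat, interval[0], interval[-1])
```

The profiled maximum lands at the closed-form estimator s = n - m/tau, and the interval stays non-negative by construction, which is the paper's point about avoiding negative subtracted values.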
Higher Order Bootstrap likelihood | Ogbonmwam | Journal of the ...
African Journals Online (AJOL)
In this work, higher order optimal window width is used to generate bootstrap kernel density likelihood. A simulated study is conducted to compare the distributions of the higher order bootstrap likelihoods with the exact (empirical) bootstrap likelihood. Our results indicate that the optimal window width of orders 2 and 4 ...
Drongelen AW van; Roszek B; Hilbers-Modderman ESM; Kallewaard M; Wassenaar C; LGM
2002-01-01
This RIVM study was performed to gain insight into wheelchair-related incidents with powered and manual wheelchairs reported to the USA FDA, the British MDA and the Dutch Center for Quality and Usability Research of Technical Aids (KBOH). The data in the databases do not indicate that incidents with
Pompili, Cecilia; Falcoz, Pierre Emmanuel; Salati, Michele; Szanto, Zalan; Brunelli, Alessandro
2017-04-01
The study objective was to develop an aggregate risk score for predicting the occurrence of prolonged air leak after video-assisted thoracoscopic lobectomy from patients registered in the European Society of Thoracic Surgeons database. A total of 5069 patients who underwent video-assisted thoracoscopic lobectomy (July 2007 to August 2015) were analyzed. Exclusion criteria included sublobar resections or pneumonectomies, lung resection associated with chest wall or diaphragm resections, sleeve resections, and need for postoperative assisted mechanical ventilation. Prolonged air leak was defined as an air leak more than 5 days. Several baseline and surgical variables were tested for a possible association with prolonged air leak using univariable and logistic regression analyses, determined by bootstrap resampling. Predictors were proportionally weighed according to their regression estimates (assigning 1 point to the smallest coefficient). Prolonged air leak was observed in 504 patients (9.9%). Three variables were found associated with prolonged air leak after logistic regression: male gender (P classes with an incremental risk of prolonged air leak (P class A (score 0 points, 1493 patients) 6.3% with prolonged air leak, class B (score 1 point, 2240 patients) 10% with prolonged air leak, class C (score 2 points, 1219 patients) 13% with prolonged air leak, and class D (score >2 points, 117 patients) 25% with prolonged air leak. An aggregate risk score was created to stratify the incidence of prolonged air leak after video-assisted thoracoscopic lobectomy. The score can be used for patient counseling and to identify those patients who can benefit from additional intraoperative preventative measures. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Bonnet, Nicolas; Biver, Emmanuel; Chevalley, Thierry; Rizzoli, René; Garnero, Patrick; Ferrari, Serge L
2017-11-01
Periostin is a matricellular protein involved in bone formation and bone matrix organization, but it is also produced by other tissues. Its circulating levels have been weakly associated with bone microstructure and prevalent fractures, possibly because periostin measured by the current commercial assays does not specifically reflect bone metabolism. In this context, we developed a new ELISA for a periostin fragment resulting from cathepsin K digestion (K-Postn). We hypothesized that circulating K-Postn levels could be associated with bone fragility. A total of 695 women (age 65.0 ± 1.5 years), enrolled in the Geneva Retirees Cohort (GERICO), were prospectively evaluated over 4.7 ± 1.9 years for the occurrence of low-trauma fractures. At baseline, we measured serum periostin, K-Postn, and bone turnover markers (BTMs), distal radius and tibia microstructure by HR-pQCT, hip and lumbar spine aBMD by DXA, and estimated fracture probability using the Fracture Risk Assessment Tool (FRAX). Sixty-six women sustained a low-trauma clinical fracture during the follow-up. Total periostin was not associated with fractures (HR [95% CI] per SD: 1.19 [0.89 to 1.59], p = 0.24). In contrast, K-Postn was significantly higher in the fracture versus nonfracture group (57.5 ± 36.6 ng/mL versus 42.5 ± 23.4 ng/mL, p K-Postn remained significantly associated with fracture risk. The performance of the fracture prediction models was improved by adding K-Postn to aBMD or FRAX (Harrell C index for fracture: 0.70 for aBMD + K-Post versus 0.58 for aBMD alone, p = 0.001; 0.73 for FRAX + K-Postn versus 0.65 for FRAX alone, p = 0.005). Circulating K-Postn predicts incident fractures independently of BMD, BTMs, and FRAX in postmenopausal women. Hence measurement of a periostin fragment resulting from in vivo cathepsin K digestion may help to identify subjects at high risk of fracture. © 2017 American Society for Bone and Mineral Research.
Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
Purpose The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions
Likelihood analysis of the minimal AMSB model
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Borsato, M.; Chobanova, V.; Lucio, M.; Santos, D.M. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Sakurai, K. [Institute for Particle Physics Phenomenology, University of Durham, Science Laboratories, Department of Physics, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Buchmueller, O.; Citron, M.; Costa, J.C.; Richards, A. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); De Roeck, A. [Experimental Physics Department, CERN, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [School of Physics, University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, Melbourne (Australia); Ellis, J.R. [King's College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); CERN, Theoretical Physics Department, Geneva (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Cantabria (Spain); Isidori, G. [Physik-Institut, Universitaet Zuerich, Zurich (Switzerland); Luo, F. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba (Japan); Olive, K.A. [School of Physics and Astronomy, University of Minnesota, William I. Fine Theoretical Physics Institute, Minneapolis, MN (United States)
2017-04-15
We perform a likelihood analysis of the minimal anomaly-mediated supersymmetry-breaking (mAMSB) model using constraints from cosmology and accelerator experiments. We find that either a wino-like or a Higgsino-like neutralino LSP, $\chi^0_1$, may provide the cold dark matter (DM), both with similar likelihoods. The upper limit on the DM density from Planck and other experiments enforces $m_{\chi^0_1}$
Dimension-independent likelihood-informed MCMC
Cui, Tiangang
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. This work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. Two distinct lines of research intersect in the methods developed here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Two nonlinear inverse problems are used to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
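A glimpse of why function-space samplers avoid the dimension problem is given by the much simpler preconditioned Crank-Nicolson (pCN) proposal, a standard precursor of the operator-weighted proposals developed in the paper: its acceptance ratio involves only the likelihood, so the prior terms cancel exactly in any dimension. The toy likelihood and parameter choices below are illustrative assumptions of ours.

```python
import math, random

rng = random.Random(1)

def pcn_chain(neg_log_like, dim, beta, steps):
    """Preconditioned Crank-Nicolson MCMC under a standard Gaussian prior.
    Proposal: x' = sqrt(1 - beta^2) * x + beta * xi, with xi ~ N(0, I).
    Acceptance uses only the likelihood ratio, so the scheme does not
    degenerate as dim grows."""
    x = [rng.gauss(0, 1) for _ in range(dim)]
    phi = neg_log_like(x)
    first_coord = []
    for _ in range(steps):
        prop = [math.sqrt(1 - beta ** 2) * xi + beta * rng.gauss(0, 1) for xi in x]
        phi_p = neg_log_like(prop)
        # Accept with probability min(1, exp(phi - phi_p)).
        if math.log(rng.random() + 1e-300) < phi - phi_p:
            x, phi = prop, phi_p
        first_coord.append(x[0])
    return first_coord

# Toy likelihood: a unit-variance Gaussian observation y = 1 of the first
# coordinate, so the posterior for x[0] is N(0.5, 0.5).
chain = pcn_chain(lambda x: 0.5 * (x[0] - 1.0) ** 2, dim=50, beta=0.3, steps=20000)
mean = sum(chain[5000:]) / len(chain[5000:])
print(mean)
```

The DILI samplers of the paper go further by learning an operator weighting from Hessian information, so that the informed directions mix faster than this isotropic proposal allows.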
REDUCING THE LIKELIHOOD OF LONG TENNIS MATCHES
Directory of Open Access Journals (Sweden)
Tristan Barnett
2006-12-01
Full Text Available Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match.
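The flavour of the generating-function calculation can be conveyed with a deliberately crude model (our own simplification, not the paper's): if, past 6-6 in an advantage set, each pair of games leaves the set level whenever both players hold serve or both break, then the number of extra game pairs is geometric, so tail probabilities and the expected set extension follow in closed form.

```python
# Simplified advantage-set model: past 6-6, play pairs of games (one serve
# each). The set stays level through a pair iff both players hold or both
# break; otherwise one player leads by two games and the set ends.
def advantage_set_tail(hold1, hold2, extra_pairs):
    """P(set is still level after that many game pairs beyond 6-6)."""
    cont = hold1 * hold2 + (1 - hold1) * (1 - hold2)
    return cont ** extra_pairs

def expected_extra_games(hold1, hold2):
    """Mean number of games beyond 6-6: geometric number of pairs, 2 games each."""
    cont = hold1 * hold2 + (1 - hold1) * (1 - hold2)
    return 2.0 / (1.0 - cont)

# Two strong servers (90% hold probability each): long sets become likely.
print(expected_extra_games(0.9, 0.9))    # ~11.1 extra games beyond 6-6
print(advantage_set_tail(0.9, 0.9, 10))  # P(at least 20 further games)
```

A tiebreak caps this tail at a fixed length, which is precisely the effect the paper quantifies with full generating functions over points, games, and sets.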
Likelihood Analysis of Supersymmetric SU(5) GUTs
Bagnaschi, E.
2017-01-01
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringi...
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.
Likelihood analysis of supersymmetric SU(5) GUTs
Energy Technology Data Exchange (ETDEWEB)
Bagnaschi, E.; Weiglein, G. [DESY, Hamburg (Germany); Costa, J.C.; Buchmueller, O.; Citron, M.; Richards, A.; De Vries, K.J. [Imperial College, High Energy Physics Group, Blackett Laboratory, London (United Kingdom); Sakurai, K. [University of Durham, Science Laboratories, Department of Physics, Institute for Particle Physics Phenomenology, Durham (United Kingdom); University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland); Borsato, M.; Chobanova, V.; Lucio, M.; Martinez Santos, D. [Universidade de Santiago de Compostela, Santiago de Compostela (Spain); Cavanaugh, R. [Fermi National Accelerator Laboratory, Batavia, IL (United States); University of Illinois at Chicago, Physics Department, Chicago, IL (United States); Roeck, A. de [CERN, Experimental Physics Department, Geneva (Switzerland); Antwerp University, Wilrijk (Belgium); Dolan, M.J. [University of Melbourne, ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, Parkville (Australia); Ellis, J.R. [King' s College London, Theoretical Particle Physics and Cosmology Group, Department of Physics, London (United Kingdom); Theoretical Physics Department, CERN, Geneva 23 (Switzerland); Flaecher, H. [University of Bristol, H.H. Wills Physics Laboratory, Bristol (United Kingdom); Heinemeyer, S. [Campus of International Excellence UAM+CSIC, Cantoblanco, Madrid (Spain); Instituto de Fisica Teorica UAM-CSIC, Madrid (Spain); Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Isidori, G. [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Olive, K.A. [University of Minnesota, William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, Minneapolis, MN (United States)
2017-02-15
We perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has seven parameters: a universal gaugino mass $m_{1/2}$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $m_5$ and $m_{10}$, and for the $\mathbf{5}$ and $\mathbf{\bar 5}$ Higgs representations $m_{H_u}$ and $m_{H_d}$, a universal trilinear soft SUSY-breaking parameter $A_0$, and the ratio of Higgs vevs $\tan \beta$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + $E_T$ events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel $u_R/c_R$-$\chi^0_1$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of $\nu_\tau$ coannihilation. We find complementarity between the prospects for direct Dark Matter detection and SUSY searches at the LHC. (orig.)
The incidence of bacterial endosymbionts in terrestrial arthropods
Weinert, Lucy A.; Araujo-Jnr, Eli V.; Ahmed, Muhammad Z.; Welch, John J.
2015-01-01
Intracellular endosymbiotic bacteria are found in many terrestrial arthropods and have a profound influence on host biology. A basic question about these symbionts is why they infect the hosts that they do, but estimating symbiont incidence (the proportion of potential host species that are actually infected) is complicated by dynamic or low prevalence infections. We develop a maximum-likelihood approach to estimating incidence, and testing hypotheses about its variation. We apply our method to a database of screens for bacterial symbionts, containing more than 3600 distinct arthropod species and more than 150 000 individual arthropods. After accounting for sampling bias, we estimate that 52% (CIs: 48–57) of arthropod species are infected with Wolbachia, 24% (CIs: 20–42) with Rickettsia and 13% (CIs: 13–55) with Cardinium. We then show that these differences stem from the significantly reduced incidence of Rickettsia and Cardinium in most hexapod orders, which might be explained by evolutionary differences in the arthropod immune response. Finally, we test the prediction that symbiont incidence should be higher in speciose host clades. But while some groups do show a trend for more infection in species-rich families, the correlations are generally weak and inconsistent. These results argue against a major role for parasitic symbionts in driving arthropod diversification. PMID:25904667
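The sampling-bias correction described above can be illustrated with a drastically simplified toy version of such a likelihood (one shared within-species prevalence and grid-search maximisation; the authors' actual model is richer):

```python
import math

def ml_incidence(screens, prevalence=0.5):
    """Grid-search MLE of symbiont incidence from species-level screens.

    screens: list of (m, positive) pairs, where m is the number of
    individuals tested in a species and positive says whether any tested
    positive. A negative screen of an infected species occurs with
    probability (1 - prevalence)**m, so shallow screens (small m) carry
    less evidence of absence. This is a deliberately simplified,
    illustrative model, not the one fitted in the paper.
    """
    def loglik(inc):
        ll = 0.0
        for m, positive in screens:
            p_pos = inc * (1.0 - (1.0 - prevalence) ** m)
            p = p_pos if positive else 1.0 - p_pos
            if p <= 0.0:
                return float("-inf")
            ll += math.log(p)
        return ll

    grid = [i / 1000.0 for i in range(1, 1000)]
    return max(grid, key=loglik)
```

With deep screens the estimate matches the raw positive fraction; with one-individual screens and a within-species prevalence of 0.5, a raw positive fraction of 25% is corrected up to an incidence near 50%.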
A Game Theoretical Approach to Hacktivism: Is Attack Likelihood a Product of Risks and Payoffs?
Bodford, Jessica E; Kwan, Virginia S Y
2018-02-01
The current study examines hacktivism (i.e., hacking to convey a moral, ethical, or social justice message) through a general game theoretic framework, that is, as a product of costs and benefits. Given the inherent risk of carrying out a hacktivist attack (e.g., legal action, imprisonment), it would be rational for the user to weigh these risks against the perceived benefits of carrying out the attack. As such, we examined computer science students' estimations of risks, payoffs, and attack likelihood through a game theoretic design. Furthermore, this study aims at constructing a descriptive profile of potential hacktivists, exploring two predicted covariates of attack decision making, namely, peer prevalence of hacking and sex differences. Contrary to expectations, results suggest that participants' estimations of attack likelihood stemmed solely from expected payoffs, rather than subjective risks. Peer prevalence significantly predicted increased payoffs and attack likelihood, suggesting an underlying descriptive norm in social networks. Notably, we observed no sex differences in the decision to attack, nor in the factors predicting attack likelihood. Implications for policymakers and for the understanding and prevention of hacktivism are discussed, as are the possible ramifications of widely communicated payoffs over potential risks in hacking communities.
International Nuclear Information System (INIS)
Francois, P.
1996-01-01
We undertook a study programme at the end of 1991. To start with, we performed some exploratory studies aimed at learning some preliminary lessons on this type of analysis: assessment of the interest of probabilistic incident analysis; possibility of using PSA scenarios; skills and resources required. At the same time, EPN created a working group whose assignment was to define a new approach for the analysis of incidents on NPPs. This working group gave thought to both aspects of operating feedback that EPN wished to improve: analysis of significant incidents; analysis of potential consequences. We took part in the work of this group, and for the second aspect we proposed a method based on an adaptation of the event-tree method in order to establish a link between existing PSA models and actual incidents. Since PSAs provide an exhaustive database of accident scenarios applicable to the two most common types of units in France, they are obviously of interest for this sort of analysis. With this method we performed some incident analyses, and at the same time explored some methods employed abroad, particularly ASP (Accident Sequence Precursor, a method used by the NRC). Early in 1994 EDF began a systematic analysis programme. The first, transient phase will set up methods and an organizational structure. 7 figs
The behavior of the likelihood ratio test for testing missingness
Hens, Niel; Aerts, Marc; Molenberghs, Geert; Thijs, Herbert
2003-01-01
To assess the sensitivity of conclusions to model choices in the context of selection models for non-random dropout, one can compare the different missingness mechanisms against each other, e.g. by likelihood ratio tests. The finite-sample behavior of the null distribution and the power of the likelihood ratio test are studied under a variety of missingness mechanisms. Keywords: missing data; sensitivity analysis; likelihood ratio test; missingness mechanisms
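A minimal, self-contained example of such a test (a constant dropout probability as the null versus group-dependent dropout, with the chi-square one-degree-of-freedom reference distribution) can be written with closed-form binomial MLEs:

```python
import math

def lrt_missingness(n0, d0, n1, d1):
    """Likelihood ratio test of equal dropout rates in two groups.

    n0, d0: subjects and dropouts in group 0; n1, d1: likewise in group 1.
    The null (one shared rate) is a toy stand-in for an MCAR mechanism,
    the alternative (group-specific rates) for covariate-dependent dropout.
    """
    def ll(d, n, p):
        if p in (0.0, 1.0):
            # boundary estimates: likelihood is 1 if the data agree, else 0
            return 0.0 if d in (0, n) else float("-inf")
        return d * math.log(p) + (n - d) * math.log(1.0 - p)

    p_pool = (d0 + d1) / (n0 + n1)
    ll_null = ll(d0, n0, p_pool) + ll(d1, n1, p_pool)
    ll_alt = ll(d0, n0, d0 / n0) + ll(d1, n1, d1 / n1)
    stat = 2.0 * (ll_alt - ll_null)
    # chi-square(1) survival function: P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    return stat, p_value
```

The finite-sample point made in the abstract is exactly that the chi-square reference used here is an asymptotic approximation, and its adequacy depends on the true missingness mechanism.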
Penalized Maximum Likelihood Estimation for univariate normal mixture distributions
International Nuclear Information System (INIS)
Ridolfi, A.; Idier, J.
2001-01-01
Due to singularities of the likelihood function, the maximum likelihood approach to the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. The ill-posedness is resolved by penalizing the likelihood function; in the Bayesian framework, this amounts to incorporating an inverted gamma prior into the likelihood function. A penalized version of the EM algorithm is derived which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test.
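A sketch of this idea for a univariate two-component mixture follows. With an inverted-gamma IG(α, β) penalty on each variance, the M-step variance update becomes σ²_k = (S_k + 2β)/(n_k + 2α + 2), which is bounded away from zero; note that this particular parameterisation is our reading of the construction, not code from the paper:

```python
import math

def penalized_em(data, k=2, alpha=1.0, beta=0.1, iters=200):
    """EM for a univariate Gaussian mixture with an inverted-gamma
    IG(alpha, beta) penalty on each variance, which keeps every variance
    estimate strictly positive (no degenerate likelihood spikes)."""
    n = len(data)
    data = sorted(data)
    # deterministic quantile-chunk initialisation
    chunks = [data[i * n // k:(i + 1) * n // k] for i in range(k)]
    mus = [sum(c) / len(c) for c in chunks]
    var = [max(sum((x - m) ** 2 for x in c) / len(c), 1e-6)
           for c, m in zip(chunks, mus)]
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities
        resp = []
        for x in data:
            dens = [pis[j] / math.sqrt(2 * math.pi * var[j])
                    * math.exp(-(x - mus[j]) ** 2 / (2 * var[j]))
                    for j in range(k)]
            s = sum(dens) or 1e-300
            resp.append([d / s for d in dens])
        # M-step, with the penalised variance update
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pis[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            ssq = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data))
            var[j] = (ssq + 2 * beta) / (nj + 2 * alpha + 2)
    return pis, mus, var
```

Even when a component collapses onto a single data point, the 2β term in the numerator keeps its variance strictly positive, which is exactly the non-singularity the penalty buys.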
Classifying Variants of Undetermined Significance in BRCA2 with Protein Likelihood Ratios
Directory of Open Access Journals (Sweden)
Mary S. Beattie
2008-01-01
Full Text Available Background: Missense (amino-acid changing) variants found in cancer predisposition genes often create difficulties when clinically interpreting genetic testing results. Bioinformatics approaches for predicting the impact of these variants have not yet found their footing in clinical practice because (1) interpreting the medical relevance of predictive scores is difficult, and (2) the relationship between bioinformatics "predictors" (sequence conservation, protein structure) and cancer susceptibility is not understood. Methodology/Principal Findings: We present a computational method that produces a probabilistic likelihood ratio predictive of whether a missense variant impairs protein function. We apply the method to a tumor suppressor gene, BRCA2, whose loss of function is important to cancer susceptibility. Protein likelihood ratios are computed for 229 unclassified variants found in individuals from high-risk breast/ovarian cancer families. We map the variants onto a protein structure model, and suggest that a cluster of predicted deleterious variants in the BRCA2 OB1 domain may destabilize BRCA2 and a protein binding partner, the small acidic protein DSS1. We compare our predictions with variant "re-classifications" provided by Myriad Genetics, a biotechnology company that holds the patent on BRCA2 genetic testing in the U.S., and with classifications made by an established medical genetics model [1]. Our approach uses bioinformatics data that are independent of these genetics-based classifications and yet shows significant agreement with them. Preliminary results indicate that our method is less likely to make false-positive errors than other bioinformatics methods, which were designed to predict the impact of missense mutations in general.
Shenkman, Geva
2012-10-01
This study examined the frequencies of the desires and likelihood estimations of Israeli gay men regarding fatherhood and couplehood, using a sample of 183 gay men aged 19-50. It follows previous research which indicated the existence of a gap in the United States with respect to fatherhood, and called for generalizability examinations in other countries and the exploration of possible explanations. As predicted, a gap was also found in Israel between fatherhood desires and their likelihood estimations, as well as between couplehood desires and their likelihood estimations. In addition, lower estimations of fatherhood likelihood were found to predict depression and to correlate with decreased subjective well-being. Possible psychosocial explanations are offered. Moreover, by mapping attitudes toward fatherhood and couplehood among Israeli gay men, the current study helps to extend our knowledge of several central human development motivations and their correlations with depression and subjective well-being in a less-studied sexual minority in a complex cultural climate. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Goto, A; Noda, M; Goto, M; Yasuda, K; Mizoue, T; Yamaji, T; Sawada, N; Iwasaki, M; Inoue, M; Tsugane, S
2018-02-14
To assess the predictive ability of a genetic risk score for the incidence of Type 2 diabetes in a general Japanese population. This prospective case-control study, nested within a Japan Public Health Centre-based prospective study, included 466 participants with incident Type 2 diabetes over a 5-year period (cases) and 1361 control participants, as well as 1463 participants with existing diabetes and 1463 control participants. Eleven susceptibility single nucleotide polymorphisms, identified through genome-wide association studies and replicated in Japanese populations, were analysed. Most single nucleotide polymorphism loci showed directionally consistent associations with diabetes. From the combined samples, one single nucleotide polymorphism (rs2206734 at CDKAL1) reached a genome-wide significance level (odds ratio 1.28, 95% CI 1.18-1.40; P = 1.8 × 10^-8). Three single nucleotide polymorphisms (rs2206734 in CDKAL1, rs2383208 in CDKN2A/B, and rs2237892 in KCNQ1) were nominally associated with incident diabetes. Compared with the lowest quintile of the total number of risk alleles, the highest quintile had higher odds of incident diabetes (odds ratio 2.34, 95% CI 1.59-3.46) after adjusting for conventional risk factors such as age, sex and BMI. The addition to the conventional risk factor-based model of a genetic risk score using the 11 single nucleotide polymorphisms significantly improved predictive performance; the c-statistic increased by 0.021, net reclassification improved by 6.2%, and integrated discrimination improved by 0.003. Our prospective findings suggest that the addition of a genetic risk score may provide modest but significant incremental predictive performance beyond that of the conventional risk factor-based model without biochemical markers. This article is protected by copyright. All rights reserved.
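The c-statistic reported above is the probability that a randomly chosen case has a higher risk score than a randomly chosen control. A brute-force version (our illustration of the metric, not the authors' code) makes the definition concrete:

```python
def c_statistic(case_scores, control_scores):
    """Concordance (c-statistic): probability that a randomly chosen case
    outscores a randomly chosen control, counting ties as one half."""
    wins = 0.0
    for c in case_scores:
        for d in control_scores:
            wins += 1.0 if c > d else 0.5 if c == d else 0.0
    return wins / (len(case_scores) * len(control_scores))
```

With risk-allele counts as the score, an increase of 0.021 in this quantity is the discrimination gain the genetic risk score adds to the conventional model.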
The modified signed likelihood statistic and saddlepoint approximations
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1992-01-01
SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.
Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.
Kobert, K; Stamatakis, A; Flouri, T
2017-03-01
The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
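The simplest instance of the redundancy exploited here is classic site-pattern compression: identical alignment columns yield identical conditional likelihood vectors, so each unique pattern is evaluated once and weighted by its multiplicity (the paper's contribution generalises this to repeats detected per subtree, which the sketch below does not attempt):

```python
from collections import Counter

def compress_sites(alignment):
    """Collapse identical alignment columns into unique site patterns.

    alignment: list of equal-length sequences (one string per taxon).
    Returns (patterns, weights); the total log-likelihood is then
    sum(w * site_ll(p) for p, w in zip(patterns, weights)), so each
    conditional likelihood vector is computed once per pattern.
    """
    columns = list(zip(*alignment))  # one tuple of states per site
    counts = Counter(columns)
    patterns = list(counts.keys())
    weights = [counts[p] for p in patterns]
    return patterns, weights
```

On real alignments, the ratio of sites to unique patterns is a lower bound on the speedup available; per-subtree repeat detection can do strictly better because subsets of taxa repeat more often than full columns.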
Kannus, P; Palvanen, M; Niemi, S; Parkkari, J; Natri, A; Vuori, I; Järvinen, M
1999-01-15
To increase knowledge about recent trends in the number and incidence of various fall-induced injuries among older adults, the authors selected from the National Hospital Discharge Register all patients 60 years of age or older who were admitted to hospitals in Finland for primary treatment of a first fall-induced severe head injury during 1970-1995. Similar patients aged 30-39 years served as a reference group. For the study period, the number and incidence (per 100,000 persons) of fall-induced severe head injuries in Finnish persons 60 years of age or older increased considerably (554 and 85, respectively, in 1970 compared with 1,393 and 144, respectively, in 1995). The age-adjusted incidence of these injuries also increased in women, from 80 in 1970 to 125 in 1995, and in men, from 102 in 1970 to 147 in 1995. In the reference group (patients aged 30-39 years), the absolute numbers and incidences of similar injuries did not show consistent trend changes over time. We conclude that the number of fall-induced severe head injuries in elderly Finnish women and men is increasing at a rate that cannot be explained simply by demographic changes, and therefore vigorous preventive measures should be instituted at once to control the increasing burden of these devastating injuries.
Weighted likelihood copula modeling of extreme rainfall events in Connecticut
Wang, Xiaojing; Gebremichael, Mekonnen; Yan, Jun
2010-08-01
Summary: Copulas have recently emerged as a practical method for multivariate modeling. To date, only a limited amount of work has been done to apply copula-based modeling in the context of extreme rainfall analysis, and no work exists on modeling multiple characteristics of rainfall events from data at resolutions finer than hourly. In this study, trivariate copula-based modeling is applied to annual extreme rainfall events constructed from 15-min time series precipitation data at 12 stations within the state of Connecticut. Three characteristics (volume, duration, and peak intensity) are modeled by a multivariate distribution specified by three marginal distributions and a dependence structure via a copula. A major issue in this application is that, because the 15-min precipitation data have only become available fairly recently, the sample size at most stations is small, ranging from 10 to 33 years. For each station, we estimate the model parameters by maximizing a weighted likelihood, which assigns weight to data at nearby stations, borrowing strength from them. The weights are assigned by a kernel function whose bandwidth is chosen by cross-validation in terms of predictive log-likelihood. The fitted model and sampling algorithms provide new knowledge on design storms and risk assessment in Connecticut.
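The borrowing-strength mechanism can be sketched with a Gaussian kernel over inter-station distance and a weighted MLE. To keep the example closed-form we estimate an exponential rate rather than the paper's marginal and copula parameters; the weighting logic is the same:

```python
import math

def kernel_weights(distances, bandwidth):
    """Gaussian kernel weights for borrowing strength from nearby stations;
    distance 0 is the target station itself (weight 1)."""
    return [math.exp(-0.5 * (d / bandwidth) ** 2) for d in distances]

def weighted_mle_rate(samples, weights):
    """Weighted maximum-likelihood estimate of an exponential rate.

    samples: one list of observations per station; weights: one kernel
    weight per station. Maximising sum_j w_j * loglik_j gives
    rate = (sum_j w_j * n_j) / (sum_j w_j * sum(x_j)).
    """
    num = sum(w * len(x) for w, x in zip(weights, samples))
    den = sum(w * sum(x) for w, x in zip(weights, samples))
    return num / den
```

A large bandwidth pools all stations toward a regional estimate; a small bandwidth recovers the station-only MLE, which is the trade-off the cross-validated bandwidth resolves.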
Likelihood analysis of parity violation in the compound nucleus
International Nuclear Information System (INIS)
Bowman, D.; Sharapov, E.
1993-01-01
We discuss the determination of the root-mean-squared matrix element M of the parity-violating interaction between compound-nuclear states using likelihood analysis. We briefly review the relevant features of the statistical model of the compound nucleus and the formalism of likelihood analysis. We then discuss the application of likelihood analysis to data on parity-violating longitudinal asymmetries. The reliability of the extracted value of the matrix element and of the errors assigned to it is stressed. Using experimental data and Monte Carlo techniques, we treat both the situations where the spins of the p-wave resonances are known and where they are not. We conclude that likelihood analysis provides a reliable way to determine M and its confidence interval. We briefly discuss some problems associated with the normalization of the likelihood function.
Directory of Open Access Journals (Sweden)
Matthew N Benedict
2014-10-01
Full Text Available Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information
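The contrast between parsimony-based and likelihood-based gap filling drawn above can be reduced to a one-line selection rule. In this hypothetical sketch each candidate is a set of reactions that would repair the network, annotated with the genomically derived reaction likelihoods described in the abstract:

```python
import math

def pick_gapfill(candidates):
    """Choose among candidate gap-filling reaction sets.

    candidates: list of reaction sets, each a list of (reaction_id,
    likelihood) pairs, where every set is assumed to restore the missing
    function. Likelihood-based gap filling maximises the total genomic
    evidence, log of the product of reaction likelihoods; parsimony would
    instead minimise the number of added reactions.
    """
    return max(candidates,
               key=lambda rs: sum(math.log(lik) for _, lik in rs))
```

The usage below shows the two criteria disagreeing: parsimony prefers the single low-evidence reaction, while the likelihood criterion prefers two well-supported reactions, which is the "more genomically consistent" behaviour the abstract reports.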
Romantic Partners’ Influence on Men’s Likelihood of Arrest in Early Adulthood
Capaldi, Deborah M.; Kim, Hyoun K.; Owen, Lee D.
2008-01-01
Female romantic partners’ influence on official crime occurrence for men across a 12-year period in early adulthood was examined within a comprehensive dynamic prediction model including both social learning and social control predictors. We hypothesized that relationship stability, rather than attachment to partner, would be associated with reduced likelihood of crime, whereas women’s antisocial behavior would be a risk factor, along with deviant peer association. Models were tested on a sam...
Mixture densities, maximum likelihood, and the EM algorithm
Redner, R. A.; Walker, H. F.
1982-01-01
The problem of estimating the parameters which determine a mixture density is reviewed, as is maximum likelihood estimation for it. A particular iterative procedure for numerically approximating maximum likelihood estimates for mixture density problems is considered. This EM algorithm is a specialization to the mixture density context of a general algorithm of the same name used to approximate maximum likelihood estimates for incomplete data problems. The formulation and the theoretical and practical properties of the EM algorithm for mixture densities are discussed, focusing in particular on mixtures of densities from exponential families.
McCarthy, Cian P; van Kimmenade, Roland R J; Gaggin, Hanna K; Simon, Mandy L; Ibrahim, Nasrien E; Gandhi, Parul; Kelly, Noreen; Motiwala, Shweta R; Belcher, Arianna M; Harisiades, Jamie; Magaret, Craig A; Rhyne, Rhonda F; Januzzi, James L
2017-07-01
We sought to develop a multiple biomarker approach for prediction of incident major adverse cardiac events (MACE; a composite of cardiovascular death, myocardial infarction, and stroke) in patients referred for coronary angiography. In a 649-participant training cohort, predictors of MACE within 1 year were identified using least-angle regression; over 50 clinical variables and 109 biomarkers were analyzed. Predictive models were generated using the least absolute shrinkage and selection operator with logistic regression. A score derived from the final model was developed and evaluated with a 278-patient validation set during a median of 3.6 years of follow-up. The scoring system consisted of N-terminal pro B-type natriuretic peptide (NT-proBNP), kidney injury molecule-1, osteopontin, and tissue inhibitor of metalloproteinase-1; no clinical variables were retained in the predictive model. In the validation cohort, each biomarker improved model discrimination or calibration for MACE; the final model had an area under the curve (AUC) of 0.79. Time-to-first MACE was shorter in those with an elevated score (p < 0.001); such risk extended to at least 4 years. In conclusion, in a cohort of patients who underwent coronary angiography, we describe a novel multiple biomarker score for incident MACE within 1 year (NCT00842868). Copyright © 2017 Elsevier Inc. All rights reserved.
Ren, Qian; Su, Chang; Wang, Huijun; Wang, Zhihong; Du, Wenwen; Zhang, Bing
2016-01-01
Overweight and obesity increase the risk of elevated blood pressure; most of the studies that serve as background for the debates on optimal obesity index cut-off values used cross-sectional samples. The aim of this study was to determine the cut-off values of anthropometric markers for detecting hypertension in Chinese adults with data from a prospective cohort. This study determines the best cut-off values for the obesity indices that represent elevated incidence of hypertension in 18-65-year-old Chinese adults using data from the China Health and Nutrition Survey (CHNS) 2006-2011 prospective cohort. Individual body mass index (BMI), waist circumference (WC), waist:hip ratio (WHR) and waist:stature ratio (WSR) were assessed. ROC curves for these obesity indices were plotted to estimate and compare their usefulness, and the values corresponding to the maximum of the Youden index were considered the optimal cut-off values. Five-year cumulative incidences of hypertension were 21.5% (95% CI: 19.4-23.6) in men and 16.5% (95% CI: 14.7-18.2) in women, and there was a significant trend of increased incidence of hypertension with an increase in BMI, WC, WHR or WSR. The cut-off values for BMI and WC recommended by the Working Group on Obesity in China (WGOC), the cut-off values for WHR developed by the World Health Organization (WHO), and a global WSR cut-off value of 0.50 may be the appropriate upper limits for Chinese adults.
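The Youden-index criterion used above picks the cut-off maximising J = sensitivity + specificity - 1 over observed marker values. A minimal version, on hypothetical marker data rather than the CHNS cohort:

```python
def youden_cutoff(cases, controls):
    """Optimal cut-off by the Youden index J = sensitivity + specificity - 1.

    cases / controls: marker values for subjects with / without the
    outcome; a subject is called positive when value >= cutoff.
    Returns (best_cutoff, best_J), scanning every observed value.
    """
    values = sorted(set(cases) | set(controls))
    best, best_j = None, -1.0
    for v in values:
        sens = sum(1 for x in cases if x >= v) / len(cases)
        spec = sum(1 for x in controls if x < v) / len(controls)
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = v, j
    return best, best_j
```

J ranges from 0 (useless marker) to 1 (perfect separation); equivalently, the chosen cut-off is the ROC point farthest above the diagonal.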
Posterior distributions for likelihood ratios in forensic science.
van den Hout, Ardo; Alberink, Ivo
2016-09-01
Evaluation of evidence in forensic science is discussed using posterior distributions for likelihood ratios. Instead of eliminating the uncertainty by integrating (Bayes factor) or by conditioning on parameter values, uncertainty in the likelihood ratio is retained by parameter uncertainty derived from posterior distributions. A posterior distribution for a likelihood ratio can be summarised by the median and credible intervals. Using the posterior mean of the distribution is not recommended. An analysis of forensic data for body height estimation is undertaken. The posterior likelihood approach has been criticised both theoretically and with respect to applicability. This paper addresses the latter and illustrates an interesting application area. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
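The idea of a posterior distribution for a likelihood ratio can be illustrated with a toy body-height setup (all numbers hypothetical): the evidence x is normal under both hypotheses, but the mean under Hp is estimated from a small reference sample, so posterior draws of that mean induce a distribution over the LR, summarised, as the abstract recommends, by the median and a credible interval rather than the mean.

```python
import numpy as np

def normal_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
s = 6.0                                   # known measurement sd (cm)
mu_d = 175.0                              # mean height under Hd (population)
ref = rng.normal(185.0, s, size=10)       # small reference sample under Hp

# flat-prior posterior for the Hp mean: N(mean(ref), s^2 / n)
n = len(ref)
post_mu = rng.normal(ref.mean(), s / np.sqrt(n), size=20000)

x = 183.0                                 # observed evidence
lr_samples = normal_pdf(x, post_mu, s) / normal_pdf(x, mu_d, s)

median = np.median(lr_samples)
lo, hi = np.percentile(lr_samples, [2.5, 97.5])
print(f"posterior LR: median = {median:.2f}, 95% credible interval = ({lo:.2f}, {hi:.2f})")
```

Reporting the interval alongside the median retains the parameter uncertainty that integrating it out (as in a Bayes factor) would hide.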
Practical likelihood analysis for spatial generalized linear mixed models
DEFF Research Database (Denmark)
Bonat, W. H.; Ribeiro, Paulo Justiniano
2016-01-01
Examples of binomial and count datasets modeled by spatial generalized linear mixed models are analysed. Our results show that the Laplace approximation provides estimates similar to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and the modified Laplace approximation. Some advantages...
Improved maximum likelihood reconstruction of complex multi-generational pedigrees.
Sheehan, Nuala A; Bartlett, Mark; Cussens, James
2014-11-01
The reconstruction of pedigrees from genetic marker data is relevant to a wide range of applications. Likelihood-based approaches aim to find the pedigree structure that gives the highest probability to the observed data. Existing methods either entail an exhaustive search and are hence restricted to small numbers of individuals, or they take a more heuristic approach and deliver a solution that will probably have high likelihood but is not guaranteed to be optimal. By encoding the pedigree learning problem as an integer linear program we can exploit efficient optimisation algorithms to construct pedigrees guaranteed to have maximal likelihood for the standard situation where we have complete marker data at unlinked loci and segregation of genes from parents to offspring is Mendelian. Previous work demonstrated efficient reconstruction of pedigrees of up to about 100 individuals. The modified method that we present here is not so restricted: we demonstrate its applicability with simulated data on a real human pedigree structure of over 1600 individuals. It also compares well with a very competitive approximate approach in terms of solving time and accuracy. In addition to identifying a maximum likelihood pedigree, we can obtain any number of pedigrees in decreasing order of likelihood. This is useful for assessing the uncertainty of a maximum likelihood solution and permits model averaging over high likelihood pedigrees when this would be appropriate. More importantly, when the solution is not unique, as will often be the case for large pedigrees, it enables investigation into the properties of maximum likelihood pedigree estimates which has not been possible up to now. Crucially, we also have a means of assessing the behaviour of other approximate approaches which all aim to find a maximum likelihood solution. Our approach hence allows us to properly address the question of whether a reasonably high likelihood solution that is easy to obtain is practically as good as a guaranteed maximum likelihood solution.
On the likelihood function of Gaussian max-stable processes
Genton, M. G.
2011-05-24
We derive a closed-form expression for the likelihood function of a Gaussian max-stable process indexed by ℝ^d at p ≤ d+1 sites, d ≥ 1. We demonstrate the gain in efficiency of the maximum composite likelihood estimators of the covariance matrix from p=2 to p=3 sites in ℝ^2 by means of a Monte Carlo simulation study. © 2011 Biometrika Trust.
Tapered composite likelihood for spatial max-stable models
Sang, Huiyan
2014-05-01
Spatial extreme value analysis is useful to environmental studies, in which extreme value phenomena are of interest and meaningful spatial patterns can be discerned. Max-stable process models are able to describe such phenomena. This class of models is asymptotically justified to characterize the spatial dependence among extremes. However, likelihood inference is challenging for such models because their corresponding joint likelihood is unavailable and only bivariate or trivariate distributions are known. In this paper, we propose a tapered composite likelihood approach by utilizing lower dimensional marginal likelihoods for inference on parameters of various max-stable process models. We consider a weighting strategy based on a "taper range" to exclude distant pairs or triples. The "optimal taper range" is selected to maximize various measures of the Godambe information associated with the tapered composite likelihood function. This method substantially reduces the computational cost and improves the efficiency over equally weighted composite likelihood estimators. We illustrate its utility with simulation experiments and an analysis of rainfall data in Switzerland.
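The taper-range weighting described above can be sketched generically: pairs of sites further apart than the taper range are simply excluded from the composite likelihood sum. As an assumption for illustration only, the bivariate max-stable densities are replaced here by bivariate Gaussian log-densities with an exponential correlation function; the tapering logic is the point, not the marginal model.

```python
import numpy as np

def pairwise_composite_loglik(data, coords, sill, rang, taper):
    """Tapered pairwise composite log-likelihood.

    data: (n_reps, n_sites) observations; coords: (n_sites, 2) locations.
    Pairs of sites with separation > taper get weight 0 (are skipped).
    """
    n_sites = coords.shape[0]
    total = 0.0
    for i in range(n_sites):
        for j in range(i + 1, n_sites):
            d = np.linalg.norm(coords[i] - coords[j])
            if d > taper:                  # tapered: drop distant pairs entirely
                continue
            rho = np.exp(-d / rang)        # exponential correlation (illustrative)
            cov = sill * np.array([[1.0, rho], [rho, 1.0]])
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            pair = data[:, [i, j]]
            quad = np.einsum('ni,ij,nj->n', pair, inv, pair)
            total += np.sum(-0.5 * (quad + logdet + 2.0 * np.log(2.0 * np.pi)))
    return total

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(20, 2))
data = rng.normal(size=(50, 20))
full = pairwise_composite_loglik(data, coords, 1.0, 3.0, taper=np.inf)
tapered = pairwise_composite_loglik(data, coords, 1.0, 3.0, taper=4.0)
print(f"log-CL with all pairs: {full:.1f}; with taper range 4: {tapered:.1f}")
```

In practice the taper range would be tuned by maximising a Godambe-information criterion, as the abstract describes; the computational saving comes directly from the skipped pairs.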
Variability in expert assessments of child physical abuse likelihood.
Lindberg, Daniel Martin; Lindsell, Christopher John; Shapiro, Robert Allan
2008-04-01
In the absence of a gold standard, clinicians and researchers often categorize their opinions of the likelihood of inflicted injury using several ordinal scales. The objective of this protocol was to determine the reliability of expert ratings using several of these scales. Participants were pediatricians with substantial academic and clinical activity in the evaluation of children with concerns for physical abuse. The facts from several cases that were referred to 1 hospital's child abuse team were abstracted and recorded as in a multidisciplinary team conference. Participants viewed the recording and rated each case using several scales of child abuse likelihood. Participants (n = 22) showed broad variability for most cases on all scales. Variability was lowest for cases with the highest aggregate concern for abuse. One scale that included examples of cases fitting each category and standard reporting language to summarize results showed a modest (18%-23%) decrease in variability among participants. The interpretation of the categories used by the scales was more consistent. Cases were rarely rated as "definite abuse." Only 9 of 858 cases rated ≥35% likelihood were rated as "reasonable concern for abuse." Assessments of child abuse likelihood often show broad variability between experts. Although a rating scale with patient examples and standard reporting language may decrease variability, clinicians and researchers should be cautious when interpreting abuse likelihood assessments from a single expert. These data support the peer-review or multidisciplinary team approach to child abuse assessments.
The Likelihood of Coliform Bacteria in NJ Domestic Wells Based on Precipitation and Other Factors.
Procopio, Nicholas A; Atherholt, Thomas B; Goodrow, Sandra M; Lester, Lori A
2017-09-01
The influence of precipitation on coliform bacteria detection rates in domestic wells was investigated using data collected through the New Jersey Private Well Testing Act. Measured precipitation data from National Weather Service (NWS) monitoring stations were compared to estimated data from the Multisensor Precipitation Estimate (MPE) in order to determine which source of data to include in the analyses. A strong concordance existed between the two precipitation datasets; therefore, MPE data were utilized, as they are geographically more specific to individual wells. Statewide, 10 days of cumulative precipitation prior to testing was found to be an optimal period influencing the likelihood of coliform detections in wells. A logistic regression model was developed to predict the likelihood of coliform occurrence in wells from 10 days of cumulative precipitation data and other predictive variables including geology, season, coliform bacteria analysis method, pH, and nitrate concentration. Total coliform (TC) and fecal coliform or Escherichia coli (FC/EC) were detected more frequently when the preceding 10 days of cumulative precipitation exceeded 34.5 and 54 mm, respectively. Furthermore, the likelihood of coliform detection was highest in wells located in the bedrock region, during summer and autumn, analyzed with the enzyme substrate method, with pH between 5 and 6.99, and (for FC/EC but not TC) nitrate greater than 10 mg/L. Thus, the likelihood of coliform presence in domestic wells can be predicted from readily available environmental factors including timing and magnitude of precipitation, offering outreach opportunities and potential changes to coliform testing recommendations. © 2017, National Ground Water Association.
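A univariate version of the logistic regression above, relating detection probability to 10-day cumulative precipitation, can be sketched with a numpy-only Newton (IRLS) fit. The precipitation and detection data here are simulated; the study's actual model also includes geology, season, analysis method, pH and nitrate.

```python
import numpy as np

def fit_logistic_irls(x, y, n_iter=25):
    """Fit P(y=1) = sigmoid(b0 + b1*x) by iteratively reweighted least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)                       # logistic variance weights
        H = X.T @ (W[:, None] * X)              # observed information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

rng = np.random.default_rng(4)
precip = rng.gamma(shape=2.0, scale=20.0, size=1000)     # mm over 10 days (toy)
true_logit = -2.0 + 0.03 * precip
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

b0, b1 = fit_logistic_irls(precip, y)
p40 = 1.0 / (1.0 + np.exp(-(b0 + b1 * 40.0)))
p80 = 1.0 / (1.0 + np.exp(-(b0 + b1 * 80.0)))
print(f"slope = {b1:.3f}; P(detect | 40 mm) = {p40:.2f}, P(detect | 80 mm) = {p80:.2f}")
```

A positive fitted slope reproduces the paper's qualitative finding that wetter 10-day windows raise the likelihood of a coliform detection.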
Directory of Open Access Journals (Sweden)
Elmståhl S
2014-11-01
Full Text Available Sölve Elmståhl, Elisabet Widerström Division of Geriatric Medicine, Department of Health Sciences, Lund University, Skåne University Hospital, Malmö, Sweden Introduction: Contradictory results have been reported on the relationship between orthostatic hypotension (OH) and mild cognitive impairment (MCI). Objective: To study the incidence of MCI and dementia and their relationship to OH and subclinical OH with orthostatic symptoms (orthostatic intolerance). Study design and setting: This study used a prospective general population cohort design based on data from the Swedish Good Aging in Skåne study (GÅS-SNAC); participants were followed up 6 years after baseline with the same study protocol as at baseline. The study sample comprised 1,480 randomly invited subjects aged 60 to 93 years, with a participation rate of 82% at follow-up. The OH test included assessment of blood pressure and symptoms of OH. Results: The 6-year incidence of MCI was 8%, increasing from 12.1 to 40.5 per 1,000 person-years for men and from 6.9 to 16.9 per 1,000 person-years for women aged 60 to >80 years. The corresponding 6-year incidence of dementia was 8%. Orthostatic intolerance on rising was related to risk of MCI at follow-up (odds ratio [OR] = 1.84 [95% CI 1.20–2.80]), adjusted for age and education, independently of blood pressure during testing. After stratification for hypertension (HT), the corresponding age-adjusted OR for MCI was 1.71 (1.10–2.31) in the non-HT group and 1.76 (1.11–2.13) in the HT group. Among controls, the proportion with OH was 16%; among those with MCI, 24%; and among those with dementia, 31% (age-adjusted OR 1.93 [1.19–3.14]). Conclusion: Not only OH but also symptoms of OH seem to be a risk factor for cognitive decline and should be considered in the management of blood pressure among the elderly population. Keywords: orthostatic blood pressure, epidemiology, elderly
Lightning incidents in Mongolia
Directory of Open Access Journals (Sweden)
Myagmar Doljinsuren
2015-11-01
Full Text Available This is one of the first studies conducted in Mongolia on the distribution of lightning incidents. The study covers the 10-year period from 2004 to 2013. The country records a human death rate of 15.4 deaths per 10 million people per year, which is much higher than that of many countries with a similar isokeraunic level. The reason may be the low-grown vegetation observed in most rural areas of Mongolia, a surface topography typical of steppe climate. We suggest modifications to the Gomes–Kadir equation for such countries, as it predicts a much lower annual death rate for Mongolia. The lightning incidents spread over the period from May to August, with the peak number of incidents occurring in July. The worst lightning-affected region in the country is the central part. Compared with impacts of other convective disasters such as squalls, thunderstorms and hail, lightning stands second highest in the number of incidents, human deaths and animal deaths. Economic losses due to lightning are only about 1% of the total losses due to the four extreme weather phenomena. However, unless precautionary measures are promoted among the public, this figure may significantly increase with time as the country is undergoing rapid industrialization at present.
Bodapati, R.K.; Kizer, J.R.; Kop, W.J.; Stein, P.K.
2017-01-01
Background Heart rate variability (HRV) characterizes cardiac autonomic functioning. The association of HRV with stroke is uncertain. We examined whether 24‐hour HRV added predictive value to the Cardiovascular Health Study clinical stroke risk score (CHS‐SCORE), previously developed at the baseline examination.
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
International Nuclear Information System (INIS)
Prada-Sanchez, J.M.; Febrero-Bande, M.; Gonzalez-Manteiga, W.; Costos-Yanez, T.; Bermudez-Cela, J.L.; Lucas-Dominguez, T.
2000-01-01
Atmospheric SO₂ concentrations at sampling stations near the fossil-fuel-fired power station at As Pontes (La Coruña, Spain) were predicted using a model for the corresponding time series consisting of a self-explicative term and a linear combination of exogenous variables. In a supplementary simulation study, models of this kind behaved better than the corresponding pure self-explicative or pure linear regression models. (Author)
International Nuclear Information System (INIS)
Qi, Jinyi; Klein, Gregory J.; Huesman, Ronald H.
2000-01-01
A positron emission mammography scanner is under development at our Laboratory. The tomograph has a rectangular geometry consisting of four banks of detector modules. For each detector, the system can measure the depth of interaction information inside the crystal. The rectangular geometry leads to irregular radial and angular sampling and spatially variant sensitivity that are different from conventional PET systems. Therefore, it is of importance to study the image properties of the reconstructions. We adapted the theoretical analysis that we had developed for conventional PET systems to the list mode likelihood reconstruction for this tomograph. The local impulse response and covariance of the reconstruction can be easily computed using FFT. These theoretical results are also used with computer observer models to compute the signal-to-noise ratio for lesion detection. The analysis reveals the spatially variant resolution and noise properties of the list mode likelihood reconstruction. The theoretical predictions are in good agreement with Monte Carlo results
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
Rishoej, Rikke Mie; Hallas, Jesper; Juel Kjeldsen, Lene; Thybo Christesen, Henrik; Almarsdóttir, Anna Birna
2018-03-01
Hospitalized children are at risk of medication errors (MEs) due to complex dosage calculations and preparations. Incident reporting systems may facilitate prevention of MEs, but underreporting potentially undermines this system. We aimed to examine whether scenarios involving medications should be reported to a national mandatory incident reporting system and the likelihood of self- and peer-reporting these scenarios among paediatric nurses and physicians. Participants' reporting of MEs was explored through a questionnaire involving 20 medication scenarios. The scenarios represented different steps in the medication process, types of error, patient outcomes and medications. Reporting rates and odds ratios with 95% confidence interval [OR, (95% CI)] were calculated. Barriers to and enablers of reporting were identified through content analysis of participants' comments. The response rate was 42% (291/689). Overall, 61% of participants reported that scenarios should be reported. The likelihood of reporting was 60% for self-reporting and 37% for peer-reporting. Nurses versus physicians, and healthcare professionals with versus without patient safety responsibilities assessed to a larger extent that the scenarios should be reported [OR = 1.34 (1.05-1.70) and OR = 1.41 (1.12-1.78), respectively]; were more likely to self-report [OR = 2.81 (1.71-4.62) and OR = 2.93 (1.47-5.84), respectively]; and were more likely to peer-report [OR = 1.89 (1.36-2.63) and OR = 3.61 (2.57-5.06), respectively]. Healthcare professionals with versus without management responsibilities were more likely to peer-report [OR = 5.16 (3.44-7.72)]. Participants reported that scenarios resulting in actual injury or incidents considered to have a learning potential should be reported. The likelihood of underreporting scenarios was high among paediatric nurses and physicians. Nurses and staff with patient safety responsibilities were more likely to assess that scenarios should be reported and to report them.
Romantic Partners' Influence on Men's Likelihood of Arrest in Early Adulthood.
Capaldi, Deborah M; Kim, Hyoun K; Owen, Lee D
2008-05-01
Female romantic partners' influence on official crime occurrence for men across a 12-year period in early adulthood was examined within a comprehensive dynamic prediction model including both social learning and social control predictors. We hypothesized that relationship stability, rather than attachment to partner, would be associated with reduced likelihood of crime, whereas women's antisocial behavior would be a risk factor, along with deviant peer association. Models were tested on a sample of at-risk men [the Oregon Youth Study (OYS)] using zero-inflated Poisson (ZIP) modeling predicting to 1) arrest persistence (class and count) and 2) arrest onset class. Findings indicated that women's antisocial behavior was predictive of both onset and persistence of arrests for men, and deviant peer association was predictive of persistence. Relationship stability was protective against persistence.
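The zero-inflated Poisson (ZIP) model used above for arrest counts mixes a point mass at zero (men who are structurally "never arrested") with an ordinary Poisson count. A minimal sketch of the ZIP probability mass function, with invented parameter values:

```python
from math import exp, factorial

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: P(K=k) with structural-zero probability pi
    and Poisson rate lam for the non-degenerate component."""
    pois = exp(-lam) * lam ** k / factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * pois   # structural zeros plus Poisson zeros
    return (1.0 - pi) * pois

pi, lam = 0.4, 1.5                      # hypothetical values for illustration
probs = [zip_pmf(k, pi, lam) for k in range(30)]
print(f"P(0 arrests) = {probs[0]:.3f}, total mass over 0..29 = {sum(probs):.4f}")
```

The two components let the model answer the paper's two questions separately: class membership (onset of any arrests) and the count intensity (persistence), each with its own predictors.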
Silventoinen, Karri; Pankow, James; Lindström, Jaana; Jousilahti, Pekka; Hu, Gang; Tuomilehto, Jaakko
2005-10-01
Cardiovascular disease shares several risk factors with type 2 diabetes. We tested whether the Finnish Diabetes Risk Score (FINDRISC), recently developed in a Finnish population to estimate the future risk of diabetes, would also identify individuals at high risk of coronary heart disease (CHD) and stroke, and total mortality in this same population. Independent risk factor surveys were conducted in 1987, 1992, and 1997 in Finland, comprising 8268 men and 9457 women aged 25-64 years and free of CHD and stroke at baseline. During the follow-up until the end of 2001, 699 incident acute CHD events, 324 acute stroke events, and 765 deaths occurred. The data were analysed by using receiver operating characteristic (ROC) curves and the Cox-regression model. The areas under the ROC curves (AUC) were 71% for CHD, 73% for stroke, and 68% for total mortality in men and 78, 68, and 72% in women, respectively. The addition of systolic and diastolic blood pressures, total and high-density lipoprotein cholesterol, and smoking increased the AUC values modestly (the change of the absolute values from 2.6 to 6.5%), but the additional use of plasma glucose had only a slight effect on the AUC values for CHD and stroke. The FINDRISC is a reasonably good predictor of CHD, stroke and total mortality.
Makizako, Hyuma; Shimada, Hiroyuki; Doi, Takehiko; Yoshida, Daisuke; Anan, Yuya; Tsutsumimoto, Kota; Uemura, Kazuki; Liu-Ambrose, Teresa; Park, Hyuntae; Lee, Sanyoon; Suzuki, Takao
2015-03-01
The purpose of this study was to determine whether frailty is an important and independent predictor of incident depressive symptoms in elderly people without depressive symptoms at baseline. Fifteen-month prospective study. General community in Japan. A total of 3025 community-dwelling elderly people aged 65 years or over without depressive symptoms at baseline. The self-rated 15-item Geriatric Depression Scale was used to assess symptoms of depression with a score of 6 or more at baseline and 15-month follow-up. Participants underwent a structured interview designed to obtain demographic factors and frailty status, and completed cognitive testing with the Mini-Mental State Examination and physical performance testing with the Short Physical Performance Battery as potential predictors. At a 15-month follow-up survey, 226 participants (7.5%) reported the development of depressive symptoms. We found that frailty and poor self-rated general health (adjusted odds ratio 1.86, 95% confidence interval 1.30-2.66) predicted incident depressive symptoms independently of Short Physical Performance Battery and Geriatric Depression Scale scores at baseline. Our findings suggested that frailty and poor self-rated general health were independent predictors of depressive symptoms in community-dwelling elderly people. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Penalized maximum likelihood estimation for generalized linear point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard
2010-01-01
A generalized linear point process is specified in terms of an intensity that depends upon a linear predictor process through a fixed non-linear function. We present a framework where the linear predictor is parametrized by a Banach space and give results on Gateaux differentiability of the log-likelihood. Of particular interest is when the intensity is expressed in terms of a linear filter parametrized by a Sobolev space. Using that the Sobolev spaces are reproducing kernel Hilbert spaces, we derive results on the representation of the penalized maximum likelihood estimator in a special case and the gradient of the negative log-likelihood in general. The latter is used to develop a descent algorithm in the Sobolev space. We conclude the paper by extensions to multivariate and additive model specifications. The methods are implemented in the R-package ppstat.
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Generalized empirical likelihood methods for analyzing longitudinal data
Wang, S.
2010-02-16
Efficient estimation of parameters is a major objective in analyzing longitudinal data. We propose two generalized empirical likelihood based methods that take into consideration within-subject correlations. A nonparametric version of the Wilks theorem for the limiting distributions of the empirical likelihood ratios is derived. It is shown that one of the proposed methods is locally efficient among a class of within-subject variance-covariance matrices. A simulation study is conducted to investigate the finite sample properties of the proposed methods and compare them with the block empirical likelihood method by You et al. (2006) and the normal approximation with a correctly estimated variance-covariance. The results suggest that the proposed methods are generally more efficient than existing methods which ignore the correlation structure, and better in coverage compared to the normal approximation with correctly specified within-subject correlation. An application illustrating our methods and supporting the simulation study results is also presented.
Unbinned likelihood maximisation framework for neutrino clustering in Python
Energy Technology Data Exchange (ETDEWEB)
Coenders, Stefan [Technische Universitaet Muenchen, Boltzmannstr. 2, 85748 Garching (Germany)
2016-07-01
Although an astrophysical neutrino flux has been detected with IceCube, the sources of astrophysical neutrinos remain hidden up to now. A detection of a neutrino point source would be a smoking gun for hadronic processes and acceleration of cosmic rays. The search for neutrino sources has many degrees of freedom, for example steady versus transient, or point-like versus extended sources. Here, we introduce a Python framework designed for unbinned likelihood maximisations as used in searches for neutrino point sources by IceCube. By implementing source scenarios in a modular way, likelihood searches of various kinds can be realised in a user-friendly way, without sacrificing speed and memory management.
Likelihood-based inference for clustered line transect data
DEFF Research Database (Denmark)
Waagepetersen, Rasmus Plenge; Schweder, Tore
The uncertainty in estimation of spatial animal density from line transect surveys depends on the degree of spatial clustering in the animal population. To quantify the clustering we model line transect data as independent thinnings of spatial shot-noise Cox processes. Likelihood-based inference is implemented using Markov chain Monte Carlo methods to obtain efficient estimates of spatial clustering parameters. Uncertainty is addressed using parametric bootstrap or by consideration of posterior distributions in a Bayesian setting. Maximum likelihood estimation and Bayesian inference are compared.
Nearly Efficient Likelihood Ratio Tests for Seasonal Unit Roots
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
In an important generalization of zero frequency autoregressive unit root tests, Hylleberg, Engle, Granger, and Yoo (1990) developed regression-based tests for unit roots at the seasonal frequencies in quarterly time series. We develop likelihood ratio tests for seasonal unit roots and show that these tests are "nearly efficient" in the sense of Elliott, Rothenberg, and Stock (1996), i.e. that their local asymptotic power functions are indistinguishable from the Gaussian power envelope. Currently available nearly efficient testing procedures for seasonal unit roots are regression-based and require the choice of a GLS detrending parameter, which our likelihood ratio tests do not.
Nearly Efficient Likelihood Ratio Tests of the Unit Root Hypothesis
DEFF Research Database (Denmark)
Jansson, Michael; Nielsen, Morten Ørregaard
Seemingly absent from the arsenal of currently available "nearly efficient" testing procedures for the unit root hypothesis, i.e. tests whose local asymptotic power functions are indistinguishable from the Gaussian power envelope, is a test admitting a (quasi-)likelihood ratio interpretation. We show that the likelihood ratio unit root test derived in a Gaussian AR(1) model with standard normal innovations is nearly efficient in that model. Moreover, these desirable properties carry over to more complicated models allowing for serially correlated and/or non-Gaussian innovations.
Bayesian and maximum likelihood estimation of genetic maps
DEFF Research Database (Denmark)
York, Thomas L.; Durrett, Richard T.; Tanksley, Steven
2005-01-01
We present an MCMC-based method that makes the Bayesian approach applicable to large data sets, together with an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances, corresponding to the maximum likelihood estimator, performs better than estimators based on the posterior expectation. We also show that while the performance is similar between Mapmaker and the MCMC-based method in the absence of genotyping errors, the MCMC-based method has a distinct advantage in the presence of genotyping errors.
Lu, Karen H; Skates, Steven; Hernandez, Mary A; Bedi, Deepak; Bevers, Therese; Leeds, Leroy; Moore, Richard; Granai, Cornelius; Harris, Steven; Newland, William; Adeyinka, Olasunkanmi; Geffen, Jeremy; Deavers, Michael T; Sun, Charlotte C; Horick, Nora; Fritsche, Herbert; Bast, Robert C
2013-10-01
A 2-stage ovarian cancer screening strategy was evaluated that incorporates change of carbohydrate antigen 125 (CA125) levels over time and age to estimate risk of ovarian cancer. Women with high-risk scores were referred for transvaginal ultrasound (TVS). A single-arm, prospective study of postmenopausal women was conducted. Participants underwent an annual CA125 blood test. Based on the Risk of Ovarian Cancer Algorithm (ROCA) result, women were triaged to next annual CA125 test (low risk), repeat CA125 test in 3 months (intermediate risk), or TVS and referral to a gynecologic oncologist (high risk). A total of 4051 women participated over 11 years. The average annual rate of referral to a CA125 test in 3 months was 5.8%, and the average annual referral rate to TVS and review by a gynecologic oncologist was 0.9%. Ten women underwent surgery on the basis of TVS, with 4 invasive ovarian cancers (1 with stage IA disease, 2 with stage IC disease, and 1 with stage IIB disease), 2 ovarian tumors of low malignant potential (both stage IA), 1 endometrial cancer (stage I), and 3 benign ovarian tumors, providing a positive predictive value of 40% (95% confidence interval = 12.2%, 73.8%) for detecting invasive ovarian cancer. The specificity was 99.9% (95% confidence interval = 99.7%, 100%). All 4 women with invasive ovarian cancer were enrolled in the study for at least 3 years with low-risk annual CA125 test values prior to rising CA125 levels. ROCA followed by TVS demonstrated excellent specificity and positive predictive value in a population of US women at average risk for ovarian cancer. Copyright © 2013 American Cancer Society.
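The 2-stage triage logic described above can be sketched as follows. The real ROCA computes a longitudinal risk score from serial CA125 values and age; here `roca_risk` is a stand-in input and the cutoff values are purely hypothetical placeholders, not the published thresholds:

```python
def triage(risk_score, low_cut=0.001, high_cut=0.01):
    """Map a ROCA-style risk score to the follow-up action described in
    the study. The cutoffs are hypothetical placeholders, NOT the
    published ROCA thresholds."""
    if risk_score < low_cut:
        return "annual CA125"                # low risk
    if risk_score < high_cut:
        return "repeat CA125 in 3 months"    # intermediate risk
    return "TVS + gynecologic oncologist"    # high risk
```

The point of the 2-stage design is visible in the reported rates: only about 0.9% of annual screens reach the expensive second stage (TVS plus specialist review).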
Morrissey, Taryn W
2017-07-01
The presence of firearms and their unsafe storage in the home can increase risk of firearm-related death and injury, but public opinion suggests that firearm ownership is a protective factor against gun violence. This study examined the effects of a recent nearby active shooter incident on gun ownership and storage practices among families with young children. A series of regression models, with data from the nationally representative Early Childhood Longitudinal Study-Birth Cohort merged with the FBI's Active Shooter Incidents data collected in 2003-2006, were used to examine whether household gun ownership and storage practices differed in the months prior to and following an active shooter incident that occurred anywhere in the United States or within the same state. Approximately one-fifth of young children lived in households with one or more guns; of these children, only two-thirds lived in homes that stored all guns in locked cabinets. Results suggest that the experience of a recent active shooter incident was associated with an increased likelihood of storing all guns locked, with the magnitude dependent on the temporal and geographic proximity of the incident. The severity of the incident, defined as the number of fatalities, predicted an increase in storing guns locked. Findings suggest that public shootings change behaviors related to firearm storage among families with young children. Copyright © 2017 Elsevier Inc. All rights reserved.
Comparison of likelihood testing procedures for parallel systems with covariances
International Nuclear Information System (INIS)
Ayman Baklizi; Isa Daud; Noor Akma Ibrahim
1998-01-01
In this paper we investigate and compare the behavior of the likelihood ratio, Rao's, and Wald's statistics for testing hypotheses on the parameters of a simple linear regression model based on parallel systems with covariances. These statistics are asymptotically equivalent (Barndorff-Nielsen and Cox, 1994); however, their relative performances in finite samples are generally unknown. A Monte Carlo experiment is conducted to simulate the sizes and powers of these statistics for complete samples and in the presence of time censoring. The statistics are compared according to their attainment of the assumed size of the test and their powers at various points in the parameter space. The results show that the likelihood ratio statistic has the best performance in terms of attaining the assumed size of the test. Power comparisons show that the Rao statistic has some advantage over the Wald statistic in almost all of the space of alternatives, while the likelihood ratio statistic occupies either the first or the last position in terms of power. Overall, the likelihood ratio statistic appears to be the most appropriate for the model under study, especially for small sample sizes.
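The size-attainment check described above can be illustrated with a minimal Monte Carlo sketch. It uses an i.i.d. exponential model rather than the paper's parallel-systems regression model, purely to show how the likelihood ratio statistic is formed and its empirical size compared to the nominal 5% chi-square level:

```python
import math
import random

def lr_stat(xs, lam0):
    """Likelihood ratio statistic 2*(loglik at MLE - loglik at lam0)
    for an i.i.d. exponential sample with rate parameter lam."""
    n, s = len(xs), sum(xs)
    lam_hat = n / s                      # MLE of the rate
    def loglik(lam):
        return n * math.log(lam) - lam * s
    return 2.0 * (loglik(lam_hat) - loglik(lam0))

def mc_size(n=30, lam0=1.0, reps=2000, crit=3.841, seed=1):
    """Monte Carlo estimate of test size: fraction of samples drawn
    under H0 whose LR statistic exceeds the chi-square(1) 5% critical
    value."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        xs = [rng.expovariate(lam0) for _ in range(n)]
        if lr_stat(xs, lam0) > crit:
            rejections += 1
    return rejections / reps
```

An empirical size close to 0.05 indicates the test "attains the assumed size"; the same scheme, with the appropriate statistics substituted, underlies the paper's comparison.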
Maximum likelihood estimation of the attenuated ultrasound pulse
DEFF Research Database (Denmark)
Rasmussen, Klaus Bolding
1994-01-01
The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...
Planck 2013 results. XV. CMB power spectra and likelihood
Ade, P.A.R.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A.J.; Barreiro, R.B.; Bartlett, J.G.; Battaner, E.; Benabed, K.; Benoit, A.; Benoit-Levy, A.; Bernard, J.P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J.J.; Bonaldi, A.; Bonavera, L.; Bond, J.R.; Borrill, J.; Bouchet, F.R.; Boulanger, F.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R.C.; Calabrese, E.; Cardoso, J.F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, L.Y.; Chiang, H.C.; Christensen, P.R.; Church, S.; Clements, D.L.; Colombi, S.; Colombo, L.P.L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B.P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R.D.; Davis, R.J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.M.; Desert, F.X.; Dickinson, C.; Diego, J.M.; Dole, H.; Donzelli, S.; Dore, O.; Douspis, M.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Ensslin, T.A.; Eriksen, H.K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A.A.; Franceschi, E.; Gaier, T.C.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Heraud, Y.; Gjerlow, E.; Gonzalez-Nuevo, J.; Gorski, K.M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J.E.; Hansen, F.K.; Hanson, D.; Harrison, D.; Helou, G.; Henrot-Versille, S.; Hernandez-Monteagudo, C.; Herranz, D.; Hildebrandt, S.R.; Hivon, E.; Hobson, M.; Holmes, W.A.; Hornstrup, A.; Hovest, W.; Huffenberger, K.M.; Hurier, G.; Jaffe, T.R.; Jaffe, A.H.; Jewell, J.; Jones, W.C.; Juvela, M.; Keihanen, E.; Keskitalo, R.; Kiiveri, K.; Kisner, T.S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lahteenmaki, A.; Lamarre, J.M.; Lasenby, A.; Lattanzi, M.; Laureijs, R.J.; Lawrence, C.R.; Le Jeune, M.; Leach, S.; Leahy, J.P.; Leonardi, R.; Leon-Tavares, J.; Lesgourgues, J.; Liguori, M.; Lilje, P.B.; Lindholm, V.; Linden-Vornle, M.; Lopez-Caniego, M.; Lubin, P.M.; Macias-Perez, J.F.; Maffei, B.; Maino, D.; Mandolesi, N.; Marinucci, D.; Maris, M.; 
Marshall, D.J.; Martin, P.G.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P.R.; Melchiorri, A.; Mendes, L.; Menegoni, E.; Mennella, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschenes, M.A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C.B.; Norgaard-Nielsen, H.U.; Noviello, F.; Novikov, D.; Novikov, I.; O'Dwyer, I.J.; Orieux, F.; Osborne, S.; Oxborrow, C.A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Paykari, P.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G.W.; Prezeau, G.; Prunet, S.; Puget, J.L.; Rachen, J.P.; Rahlin, A.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ringeval, C.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Roudier, G.; Rowan-Robinson, M.; Rubino-Martin, J.A.; Rusholme, B.; Sandri, M.; Sanselme, L.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M.D.; Shellard, E.P.S.; Spencer, L.D.; Starck, J.L.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.S.; Sygnet, J.F.; Tauber, J.A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Turler, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L.A.; Wandelt, B.D.; Wehus, I.K.; White, M.; White, S.D.M.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-01-01
We present the Planck likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations. We use this likelihood to derive the Planck CMB power spectrum over three decades in l, covering 2 ≤ l ≤ 2500. At l ≥ 50, we employ a correlated Gaussian likelihood approximation based on angular cross-spectra derived from the 100, 143 and 217 GHz channels. We validate our likelihood through an extensive suite of consistency tests, and assess the impact of residual foreground and instrumental uncertainties on cosmological parameters. We find good internal agreement among the high-l cross-spectra with residuals of a few μK² at l ≤ 1000. We compare our results with foreground-cleaned CMB maps, and with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. The best-fit LCDM cosmology is in excellent agreement with preliminary Planck polarisation spectra. The standard LCDM cosmology is well constrained b...
Likelihood functions for state space models with diffuse initial conditions
Koopman, S.J.; Shephard, N.; de Vos, A.F.
2010-01-01
State space models with non-stationary processes and/or fixed regression effects require a state vector with diffuse initial conditions. Different likelihood functions can be adopted for the estimation of parameters in time-series models with diffuse initial conditions. In this article, we consider
Likelihood functions for state space models with diffuse initial conditions
Francke, M.K.; Koopmans, S.J.; de Vos, A.F.
2008-01-01
State space models with nonstationary processes and fixed regression effects require a state vector with diffuse initial conditions. Different likelihood functions can be adopted for the estimation of parameters in time series models with diffuse initial conditions. In this paper we consider
Likelihood estimation of parameters using simultaneously monitored processes
DEFF Research Database (Denmark)
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2004-01-01
The topic is maximum likelihood inference from several simultaneously monitored response processes of a structure, used to obtain knowledge about the parameters of other important but unmonitored response processes when the structure is subject to some Gaussian load field in space and time. The considered example is a ship sailing with a given speed through a Gaussian wave field.
Young adult consumers' media usage and online purchase likelihood
African Journals Online (AJOL)
Convenience sampling resulted in 1 298 completed questionnaires. The results indicate that young adult consumers use new media more frequently than traditional media. Respondents with online shopping experience displayed a higher online purchasing likelihood than those who have not used this medium before.
A simplification of the likelihood ratio test statistic for testing ...
African Journals Online (AJOL)
The traditional likelihood ratio test statistic for testing hypotheses about goodness of fit of multinomial probabilities in one-, two- and multi-dimensional contingency tables was simplified. Advantageously, using the simplified version of the statistic to test the null hypothesis is easier and faster because calculating the expected ...
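The traditional statistic in question is the G-statistic, G = 2 Σᵢ Oᵢ ln(Oᵢ/Eᵢ), with observed counts Oᵢ and expected counts Eᵢ. A minimal sketch for a one-dimensional table (the simplified version the article derives is not reproduced here):

```python
import math

def g_statistic(observed, expected):
    """Likelihood ratio goodness-of-fit statistic
    G = 2 * sum_i O_i * ln(O_i / E_i).
    Cells with O_i = 0 contribute 0 (the O*ln(O) -> 0 limit)."""
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(observed, expected) if o > 0)
```

When every observed count equals its expectation the statistic is exactly zero; under the null hypothesis it is asymptotically chi-square distributed with the usual degrees of freedom.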
Maximum-likelihood estimation of the entropy of an attractor
Schouten, J.C.; Takens, F.; van den Bleek, C.M.
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the
Attitude towards, and likelihood of, complaining in the banking ...
African Journals Online (AJOL)
Attitude towards, and likelihood of, complaining in the banking, domestic airline and restaurant industries. D.J. Petzer & P.G. Mostert. Abstract: It is imperative that service organisations implement effective service recovery strategies when customers experience a service failure, since unresolved service failures can result ...
A modification of the restricted maximum likelihood method in ...
African Journals Online (AJOL)
The existing Restricted Maximum Likelihood Method of obtaining variance component estimates in generalized linear models with random effects is a complicated procedure requiring the value of the parameter it is intended to estimate. This paper addresses this problem by providing a modification to the existing Restricted ...
Cases in which ancestral maximum likelihood will be confusingly misleading.
Handelman, Tomer; Chor, Benny
2017-05-07
Ancestral maximum likelihood (AML) is a phylogenetic tree reconstruction criterion that "lies between" maximum parsimony (MP) and maximum likelihood (ML). ML has long been known to be statistically consistent. On the other hand, Felsenstein (1978) showed that MP is statistically inconsistent, and even positively misleading: there are cases where the parsimony criterion, applied to data generated according to one tree topology, will be optimized on a different tree topology. The question of whether AML is statistically consistent has been open for a long time. Mossel et al. (2009) showed that AML can "shrink" short tree edges, resulting in a star tree with no internal resolution, which yields a better AML score than the original (resolved) model. This result implies that AML is statistically inconsistent, but not that it is positively misleading, because the star tree is compatible with any other topology. We show that AML is confusingly misleading: for some simple four-taxon (resolved) tree, the ancestral likelihood optimization criterion is maximized on an incorrect (resolved) tree topology, as well as on a star tree (both with specific edge lengths), while the tree with the original, correct topology has strictly lower ancestral likelihood. Interestingly, the two short edges in the incorrect, resolved tree topology are of length zero and are not adjacent, so this resolved tree is in fact a simple path. While for MP the underlying phenomenon can be described as long edge attraction, it turns out that here we have long edge repulsion. Copyright © 2017. Published by Elsevier Ltd.
Likelihood-Based Confidence Intervals in Exploratory Factor Analysis
Oort, Frans J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…
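A one-parameter analogue of the likelihood-based intervals discussed above is obtained by inverting the likelihood ratio test: the interval contains every parameter value whose LR statistic stays below the chi-square critical value. This binomial sketch is an illustration of that construction only, not the factor-analytic computation from the article:

```python
import math

def profile_ci_binomial(k, n, crit=3.841, step=1e-4):
    """Likelihood-based CI for a binomial proportion: all p whose LR
    statistic 2*(loglik(p_hat) - loglik(p)) stays below the
    chi-square(1) 95% critical value 3.841 (requires 0 < k < n)."""
    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1.0 - p)
    p_hat = k / n
    inside = [p for p in (i * step for i in range(1, int(1 / step)))
              if 2.0 * (loglik(p_hat) - loglik(p)) <= crit]
    return min(inside), max(inside)
```

Unlike Wald intervals, the resulting interval is not forced to be symmetric about the estimate, which is the same advantage the likelihood-based intervals offer for rotated loadings near boundary values.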
Likelihood-based confidence intervals in exploratory factor analysis
Oort, F.J.
2011-01-01
In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated
Robustness of the approximate likelihood of the protracted speciation model
Simonet, C.; Scherrer, R.; Rego-Costa, A.; Etienne, R. S.
The protracted speciation model presents a realistic and parsimonious explanation for the observed slowdown in lineage accumulation through time, by accounting for the fact that speciation takes time. A method to compute the likelihood for this model given a phylogeny is available and allows
Multilevel maximum likelihood estimation with application to covariance matrices
Czech Academy of Sciences Publication Activity Database
Turčičová, Marie; Mandel, J.; Eben, Kryštof
Published online: 23 January (2018) ISSN 0361-0926 R&D Projects: GA ČR GA13-34856S Institutional support: RVO:67985807 Keywords : Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016
Maximum likelihood decay curve fits by the simplex method
International Nuclear Information System (INIS)
Gregorich, K.E.
1991-01-01
A multicomponent decay curve analysis technique has been developed and incorporated into the decay curve fitting computer code, MLDS (maximum likelihood decay by the simplex method). The fitting criteria are based on the maximum likelihood technique for decay curves made up of time binned events. The probabilities used in the likelihood functions are based on the Poisson distribution, so decay curves constructed from a small number of events are treated correctly. A simple utility is included which allows the use of discrete event times, rather than time-binned data, to make maximum use of the decay information. The search for the maximum in the multidimensional likelihood surface for multi-component fits is performed by the simplex method, which makes the success of the iterative fits extremely insensitive to the initial values of the fit parameters and eliminates the problems of divergence. The simplex method also avoids the problem of programming the partial derivatives of the decay curves with respect to all the variable parameters, which makes the implementation of new types of decay curves straightforward. Any of the decay curve parameters can be fixed or allowed to vary. Asymmetric error limits for each of the free parameters, which do not consider the covariance of the other free parameters, are determined. A procedure is presented for determining the error limits which contain the associated covariances. The curve fitting procedure in MLDS can easily be adapted for fits to other curves with any functional form. (orig.)
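The binned Poisson likelihood that MLDS maximizes can be sketched as follows for a single-component decay. The crude grid scan below stands in for the simplex search (MLDS itself fits multi-component curves with the simplex method and handles discrete event times); the parameter values in the usage are illustrative:

```python
import math

def expected_counts(a0, lam, edges):
    """Expected number of decays per time bin for a single-component
    decay with a0 initial atoms and decay constant lam: the integral
    of a0*lam*exp(-lam*t) over [t1, t2] is a0*(e^-lam*t1 - e^-lam*t2)."""
    return [a0 * (math.exp(-lam * t1) - math.exp(-lam * t2))
            for t1, t2 in zip(edges, edges[1:])]

def poisson_nll(counts, mu):
    """Poisson negative log-likelihood for time-binned events,
    dropping the count-only log-factorial constant."""
    return sum(m - (c * math.log(m) if c > 0 else 0.0)
               for c, m in zip(counts, mu))

def fit_lambda(counts, edges, a0, grid):
    """Crude 1-D grid scan standing in for the simplex search over the
    decay constant."""
    return min(grid,
               key=lambda lam: poisson_nll(counts,
                                           expected_counts(a0, lam, edges)))
```

Because the objective is the exact Poisson likelihood rather than a chi-square sum, bins with very few (or zero) counts are handled correctly, which is the property the abstract emphasizes.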
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
Regularization parameter selection for penalized-likelihood list-mode image reconstruction in PET.
Zhang, Mengxi; Zhou, Jian; Niu, Xiaofeng; Asma, Evren; Wang, Wenli; Qi, Jinyi
2017-06-21
Penalized likelihood (PL) reconstruction has demonstrated potential to improve image quality of positron emission tomography (PET) over the unregularized ordered-subsets expectation-maximization (OSEM) algorithm. However, selecting proper regularization parameters in PL reconstruction has been challenging due to the lack of ground truth and variation of penalty functions. Here we present a method to choose regularization parameters using a cross-validation log-likelihood (CVLL) function. This new method does not require any knowledge of the true image and is directly applicable to list-mode PET data. We performed statistical analysis of the mean and variance of the CVLL. The results show that the CVLL provides an unbiased estimate of the log-likelihood function calculated using the noise-free data. The predicted variance can be used to verify the statistical significance of the difference between CVLL values. The proposed method was validated using simulation studies and also applied to real patient data. The reconstructed images using optimum parameters selected by the proposed method show good image quality visually.
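The idea of scoring candidate regularization strengths by a held-out log-likelihood can be sketched in a deliberately toy setting. A shrunken-mean Gaussian model stands in for list-mode PET reconstruction, and all names and the shrinkage rule are illustrative assumptions, not the paper's estimator:

```python
import math
import random

def holdout_loglik(train, test, lam):
    """Fit a mean shrunk toward zero by regularization strength lam on
    `train`, then score a unit-variance Gaussian log-likelihood on
    `test`; a toy stand-in for the list-mode CVLL."""
    mu = sum(train) / (len(train) + lam)
    return sum(-0.5 * (y - mu) ** 2 - 0.5 * math.log(2.0 * math.pi)
               for y in test)

def select_lam(data, grid, seed=0):
    """Pick the regularization strength maximizing the 2-fold
    cross-validation log-likelihood."""
    rng = random.Random(seed)
    d = list(data)
    rng.shuffle(d)
    half = len(d) // 2
    folds = [(d[:half], d[half:]), (d[half:], d[:half])]
    return max(grid, key=lambda lam: sum(holdout_loglik(tr, te, lam)
                                         for tr, te in folds))
```

The essential design point carries over: the selection criterion is evaluated on data not used for fitting, so no ground-truth image is ever required.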
Shimizu, Hiroyuki; Tsuchiya, Takafumi; Oh-I, Shinsuke; Ohtani, Ken-Ichi; Okada, Shuichi; Mori, Masatomo
2007-05-01
Mazindol, a centrally acting monoamine re-uptake inhibitor, enhances satiety and supports body weight loss, but response to this drug among obese patients is very variable. The possible involvement of the Trp64Arg polymorphism of the β3-adrenergic receptor (ADRB3) gene in the development of severe obesity and weight loss response to anorexigenic drugs has not been established. In the present study, the allelic frequency of the Trp64Arg ADRB3 gene polymorphism was determined in massively obese Japanese outpatients (BMI > 35 kg/m²), and we investigated whether allelic differences may determine the weight loss effect of mazindol. The allelic frequency of Trp64Arg heterozygotes and homozygotes did not differ in severely obese subjects compared to non-obese subjects. Trp64Arg heterozygotes experienced significantly increased weight loss and reduced blood pressure following mazindol administration for 12 weeks. Thus the ADRB3 gene polymorphism is predictive for difficulty in weight reduction with mazindol treatment, but is not related to the development of severe obesity in the Japanese population. © 2007 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also...
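The key property, that only the postulated variance structure enters the estimating equation, can be seen in a one-parameter sketch: the quasi-score for a common mean is U(μ) = Σᵢ (yᵢ − μ) / V(μ), and solving U(μ) = 0 needs only the variance function V, not a full distribution. The quadratic variance function and the value of phi below are illustrative assumptions, not the article's calibration model:

```python
def quasi_score(mu, ys, phi=0.1):
    """Quasi-score U(mu) = sum_i (y_i - mu) / V(mu) with a postulated
    quadratic variance V(mu) = mu + phi*mu^2 (phi is illustrative)."""
    v = mu + phi * mu * mu
    return sum(y - mu for y in ys) / v

def solve_mean(ys, lo=1e-9, hi=1e9, tol=1e-12):
    """Bisection on the quasi-score; U(mu) changes sign exactly once at
    the root (positive for mu below the sample mean, negative above)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if quasi_score(mid, ys) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In this common-mean case the root is the sample mean regardless of V; the variance function starts to matter, as in the article, once regression structure and calibration parameters enter the quasi-score.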
Zaslavsky, Oleg; Zelber-Sagi, Shira; LaCroix, Andrea Z; Brunner, Robert L; Wallace, Robert B; Cochrane, Barbara B; Woods, Nancy F
2017-10-01
We compared the simplified Women's Health Initiative (sWHI) and the standard Cardiovascular Health Study (CHS) frailty phenotypes in predicting falls, hip fracture, and death in older women. Participants are from the WHI Clinical Trial. CHS frailty criteria included weight loss, exhaustion, weakness, slowness, and low physical activity. The sWHI frailty score used two items from the RAND-36 physical function and vitality subscales, one item from the WHI physical activity scale plus the CHS weight loss criteria. Specifically, level of physical function was the capacity to walk one block and scored as severe (2-points), moderate (1-point), or no limitation (0). Vitality was based on feeling tired most or all of the time (1-point) versus less often (0). Low physical activity was walking outside less than twice a week (1-point) versus more often (0). A total score of 3 resulted in a frailty classification, a score of 1 or 2 defined pre-frailty, and 0 indicated nonfrailty. Outcomes were modeled using Cox regression and Harrell C-statistics were used for comparisons. Approximately 5% of the participants were frail based on the CHS or sWHI phenotype. The sWHI frailty phenotype was associated with higher rates of mortality (hazard ratio [HR] = 2.36, p ≤ .001) and falls (HR = 1.45, p = .005). Comparable HRs in CHS-phenotype were 1.97 (p statistics revealed nonsignificant differences in HRs between the CHS and sWHI frailty phenotypes. The sWHI phenotype, which is self-reported and brief, might be practical in settings with limited resources. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Energy Technology Data Exchange (ETDEWEB)
Pan, Hubert Y.; Allen, Pamela K. [Department of Radiation Oncology, University of Texas MD Anderson Cancer, Houston, Texas (United States); Wang, Xin S. [Department of Symptom Research, University of Texas MD Anderson Cancer, Houston, Texas (United States); Chang, Eric L. [Department of Radiation Oncology, University of Texas MD Anderson Cancer, Houston, Texas (United States); Department of Radiation Oncology, USC Norris Cancer Center, Los Angeles, California (United States); Rhines, Laurence D.; Tatsui, Claudio E. [Department of Neurosurgery, University of Texas MD Anderson Cancer, Houston, Texas (United States); Amini, Behrang [Department of Diagnostic Radiology, University of Texas MD Anderson Cancer, Houston, Texas (United States); Wang, Xin A. [Department of Radiation Physics, University of Texas MD Anderson Cancer, Houston, Texas (United States); Tannir, Nizar M. [Department of Genitourinary Medical Oncology, University of Texas MD Anderson Cancer, Houston, Texas (United States); Brown, Paul D. [Department of Radiation Oncology, University of Texas MD Anderson Cancer, Houston, Texas (United States); Ghia, Amol J., E-mail: AJGhia@mdanderson.org [Department of Radiation Oncology, University of Texas MD Anderson Cancer, Houston, Texas (United States)
2014-11-15
Purpose/Objective(s): To perform a secondary analysis of institutional prospective spine stereotactic body radiation therapy (SBRT) trials to investigate posttreatment acute pain flare. Methods and Materials: Medical records for enrolled patients were reviewed. Study protocol included baseline and follow-up surveys with pain assessment by Brief Pain Inventory and documentation of pain medications. Patients were considered evaluable for pain flare if clinical note or follow-up survey was completed within 2 weeks of SBRT. Pain flare was defined as a clinical note indicating increased pain at the treated site or survey showing a 2-point increase in worst pain score, a 25% increase in analgesic intake, or the initiation of steroids. Binary logistic regression was used to determine predictive factors for pain flare occurrence. Results: Of the 210 enrolled patients, 195 (93%) were evaluable for pain flare, including 172 (88%) clinically, 135 (69%) by survey, and 112 (57%) by both methods. Of evaluable patients, 61 (31%) had undergone prior surgery, 57 (29%) had received prior radiation, and 34 (17%) took steroids during treatment, mostly for prior conditions. Pain flare was observed in 44 patients (23%). Median time to pain flare was 5 days (range, 0-20 days) after the start of treatment. On multivariate analysis, the only independent factor associated with pain flare was the number of treatment fractions (odds ratio = 0.66, P=.004). Age, sex, performance status, spine location, number of treated vertebrae, prior radiation, prior surgery, primary tumor histology, baseline pain score, and steroid use were not significant. Conclusions: Acute pain flare after spine SBRT is a relatively common event, for which patients should be counseled. Additional study is needed to determine whether prophylactic or symptomatic intervention is preferred.
Ping, Fan; Li, Zeng-Yi; Lv, Ke; Zhou, Mei-Cen; Dong, Ya-Xiu; Sun, Qi; Li, Yu-Xiu
2017-03-01
To investigate the effect of telomere shortening and other predictive factors of non-alcoholic fatty liver disease (NAFLD) in type 2 diabetes mellitus patients in a 6-year prospective cohort study. A total of 70 type 2 diabetes mellitus (mean age 57.8 ± 6.7 years) patients without NAFLD were included in the study, and 64 of them were successfully followed up 6 years later, excluding four cases with significant alcohol consumption. NAFLD was diagnosed by the hepatorenal ratio obtained by a quantitative ultrasound method using NIH image analysis software. The 39 individuals that developed NAFLD were allocated to group A, and the 21 individuals that did not develop NAFLD were allocated to group B. Fluorescent real-time quantitative polymerase chain reaction was used to measure telomere length. There was no significant difference between the two groups in baseline telomere length; however, at the end of the 6th year, telomere length had become shorter in group A compared with group B. There were significant differences between these two groups in baseline body mass index, waist circumference, systolic blood pressure, glycated hemoglobin and fasting C-peptide level. In addition, the estimated indices of baseline insulin resistance increased in group A. Fasting insulin level, body mass index, systolic blood pressure at baseline and the shortening of telomere length were independent risk factors of NAFLD in type 2 diabetes mellitus patients. Telomere length became shorter in type 2 diabetes mellitus patients who developed NAFLD over the course of 6 years. Type 2 diabetes mellitus patients who developed NAFLD had more severe insulin resistance than those who did not. © 2016 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.
International Nuclear Information System (INIS)
Petterson, J.S.
1988-06-01
The reasons for wanting to document this case study and present the findings are simple. According to USDOE technical risk assessments (and our own initial work on the Hanford socioeconomic study), the likelihood of a major accident involving exposure to radioactive materials in the process of site characterization, construction, operation, and closure of a high-level waste repository is extremely remote. Most would agree, however, that there is a relatively high probability that a minor accident involving radiological contamination will occur sometime during the lifetime of the repository -- for example, during transport, at an MRS site or at the permanent site itself during repacking and deposition. Thus, one of the major concerns of the Yucca Mountain Socioeconomic Study is the potential impact of a relatively minor radiation-related accident. A large number of potential accident scenarios have been under consideration (such as a transportation or other surface accident which results in a significant decline in tourism, the number of conventions, or the selection of Nevada as a retirement residence). The results of the work in Goiania make it clear, however, that such a significant shift in established social patterns and trends is not likely to occur as a direct outcome of a single nuclear-related accident (even, perhaps, a relatively major one), but rather is likely to occur as a result of the enduring social interpretations of such an accident -- that is, as a result of the process of understanding, communicating, and socially sustaining a particular set of associations with respect to the initial incident
Cook, Alan; Gonzalez, Jennifer Reingle; Balasubramanian, Bijal A
2014-12-01
Unintentional injury leads all other causes of death for those 1 to 45 years old. The expense of medical care for injured people is estimated to exceed $406 billion annually. Given this burden on the population, the Centers for Disease Control and Prevention consistently refers to injury prevention as a national priority. We postulated that exposure to crime and the density of alcohol outlets in one's neighborhood will be positively associated with the incidence of hospitalization for and mortality from traumatic injuries, independent of other neighborhood characteristics. We conducted a cross-sectional study with ecological and individual analyses. Patient-level data for traumatic injury, injury severity, and hospital mortality due to traumatic injury in 2010 were gathered from the Dallas-Fort Worth Hospital Council Foundation. Each case of traumatic injury or death was geospatially linked with neighborhood of origin information from the 2010 U.S. Census within Dallas County, Texas. This information was subsequently linked with crime data gathered from 20 local police departments and the Texas Alcoholic Beverage Commission alcohol outlet dataset. The crime data are the Part One crimes reported to the Federal Bureau of Investigation. The proportion of persons 65 years old or older was the strongest predictor of the incidence of hospitalization for traumatic injury (b = 12.64, 95% confidence interval (CI) 8.73 to 16.55). In turn, the incidence of traumatic injury most strongly predicted the severity of traumatic injury (b = 0.008, 95% CI 0.0003 - 0.0012). The tract-level unemployment rate was associated with a 5% increase in the odds of hospital mortality among hospitalized trauma patients. Several neighborhood characteristics were associated with the incidence, severity, and hospital mortality from traumatic injury. However, crime rates and alcohol outlet density carried no such association. Prevention efforts should focus on neighborhood characteristics such
The Jarvis gas release incident
International Nuclear Information System (INIS)
Manocha, J.
1992-01-01
On 26 September, 1991, large volumes of natural gas were observed to be leaking from two water wells in the Town of Jarvis. Gas and water were being ejected from a drilled water well, at which a subsequent gas explosion occurred. Measurements of gas concentrations indicated levels far in excess of the lower flammability limit at several locations. Electrical power and natural gas services were cut off, and residents were evacuated. A state of emergency was declared, and gas was found to be flowing from water wells, around building foundations, and through other fractures in the ground. By 27 September the volumes of gas had reduced substantially, and by 30 September all residents had returned to their homes and the state of emergency was cancelled. The emergency response, possible pathways of natural gas into the aquifer, and public relations are discussed. It is felt that the likelihood of a similar incident occurring in the future is high. 11 figs
Maximum-likelihood methods for array processing based on time-frequency distributions
Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.
1999-11-01
This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for nonstationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multidimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
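For a single noiseless source, conventional ML DOA estimation reduces to a matched beamformer scan over candidate angles; the sketch below shows only that textbook baseline (uniform linear array, invented data), not the paper's time-frequency localization:

```python
import cmath, math

def steering(theta_deg, n_sensors, d=0.5):
    # Uniform linear array steering vector; d = element spacing in wavelengths
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * d * k * math.sin(th))
            for k in range(n_sensors)]

def ml_doa(snapshots, n_sensors):
    """Single-source ML DOA: maximize the beamformer output over a
    1-degree grid (for one source, ML reduces to this matched scan)."""
    def power(theta):
        a = steering(theta, n_sensors)
        return sum(abs(sum(ai.conjugate() * xi for ai, xi in zip(a, x))) ** 2
                   for x in snapshots)
    return max(range(-90, 91), key=power)

# Noiseless snapshots from a single source at 20 degrees
true_doa = 20
a_true = steering(true_doa, 8)
snapshots = [list(a_true) for _ in range(5)]
print(ml_doa(snapshots, 8))  # 20
```

The t-f ML method of the abstract would first restrict the snapshots to a time-frequency region containing fewer sources before such a scan.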
Maximum likelihood analysis of the first KamLAND results
Ianni, A
2003-01-01
A maximum likelihood approach has been used to analyze the first results from KamLAND, emphasizing the application of this method for low statistics samples. The goodness of fit has been determined exploiting a simple Monte Carlo approach in order to test two different null hypotheses. It turns out that with the present statistics the neutrino oscillation hypothesis has a significance of about 90% (the best-fit oscillation parameters from KamLAND are found to be: $\delta m_{12}^2 \sim 7.1 \times 10^{-5}$ eV$^2$ and $\sin^2 \theta_{12} = 0.424/0.576$), while the no-oscillation hypothesis has a significance of about 50%. Through the likelihood ratio the hypothesis of no disappearance is rejected at about 99.9% C.L. with the present data from the positron spectrum. A comparison with other analyses is presented.
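The likelihood-ratio comparison used above can be illustrated on a toy binned Poisson spectrum; all counts below are invented and constant terms of the log-likelihood are dropped:

```python
import math

def poisson_loglik(mu, n):
    # log L = sum_i (n_i log mu_i - mu_i), up to a model-independent constant
    return sum(ni * math.log(mi) - mi for ni, mi in zip(n, mu))

def lr_statistic(mu_null, mu_alt, n):
    """-2 ln(L_null / L_alt); large values disfavor the null hypothesis."""
    return 2.0 * (poisson_loglik(mu_alt, n) - poisson_loglik(mu_null, n))

# Toy spectrum: expectation without disappearance vs. a suppressed
# best-fit expectation, compared with "observed" counts (made-up numbers).
expected_no_osc = [12.0, 15.0, 10.0, 8.0]
best_fit = [7.5, 9.8, 6.4, 5.1]
observed = [8, 9, 7, 5]
t = lr_statistic(expected_no_osc, best_fit, observed)
print(t > 0)  # True: the suppressed model fits these counts better
```

In the actual analysis the statistic would be calibrated with Monte Carlo pseudo-experiments rather than read off directly.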
Composite likelihood and two-stage estimation in family studies
DEFF Research Database (Denmark)
Andersen, Elisabeth Anne Wreford
2004-01-01
In this paper register-based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, combining the approaches of Parner (2001) and Andersen (2003). The method is mainly studied when the families consist of groups of exchangeable members (e.g. siblings) or members at different levels (e.g. parents and children). The advantages of the proposed method are especially clear in this last case where very flexible modelling is possible. The suggested method is also studied in simulations and found to be efficient compared to maximum likelihood. Finally, the suggested method is applied to a family study of deep venous thromboembolism where it is seen that the association between ages...
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Lowell, A. W.; Boggs, S. E.; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C.; Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y.; Jean, P.; von Ballmoos, P.; Lin, C.-H.; Amman, M.
2017-10-01
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ˜21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
A composite likelihood approach for spatially correlated survival data.
Paik, Jane; Ying, Zhiliang
2013-01-01
The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
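The pairwise composite likelihood with an FGM copula can be sketched directly on the copula (uniform-margin) scale; the sampler, the constant dependence parameter, and the grid-search maximizer below are all simplifications for illustration (the paper models the parameter as a function of pairwise distances and solves estimating equations):

```python
import math, random

def fgm_density(u, v, theta):
    # Farlie-Gumbel-Morgenstern copula density, |theta| <= 1
    return 1.0 + theta * (1.0 - 2.0 * u) * (1.0 - 2.0 * v)

def sample_fgm(theta, n, seed=0):
    # Conditional-inversion sampler for FGM pairs on the copula scale
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u, w = rng.random(), rng.random()
        a = theta * (1.0 - 2.0 * u)
        v = 2.0 * w / (1.0 + a + math.sqrt((1.0 + a) ** 2 - 4.0 * a * w))
        pairs.append((u, v))
    return pairs

def composite_loglik(pairs, theta):
    # Pairwise composite log-likelihood: sum of pairwise log-densities
    return sum(math.log(fgm_density(u, v, theta)) for u, v in pairs)

pairs = sample_fgm(theta=0.8, n=2000)
grid = [i / 100.0 for i in range(-99, 100)]
theta_hat = max(grid, key=lambda t: composite_loglik(pairs, t))
print(theta_hat > 0.3)  # positive dependence recovered
```

With survival data the (u, v) pairs would come from estimated marginal survival functions rather than being simulated.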
Planck 2013 results. XV. CMB power spectra and likelihood
DEFF Research Database (Denmark)
Tauber, Jan; Bartlett, J.G.; Bucher, M.
2014-01-01
This paper presents the Planck 2013 likelihood, a complete statistical description of the two-point correlation function of the CMB temperature fluctuations that accounts for all known relevant uncertainties, both instrumental and astrophysical in nature. We use this likelihood to derive our best..., as well as with cross-spectra derived from the 70 GHz Planck map, and find broad agreement in terms of spectrum residuals and cosmological parameters. We further show that the best-fit ΛCDM cosmology is in excellent agreement with preliminary Planck EE and TE polarisation spectra. We find that the standard ΛCDM cosmology is well constrained by Planck from the measurements at ℓ ≤ 1500. One specific example is the spectral index of scalar perturbations, for which we report a 5.4σ deviation from scale invariance, n_s = 1. Increasing the multipole range beyond ℓ = 1500 does not increase our accuracy for the ΛCDM...
GENERALIZATION OF RAYLEIGH MAXIMUM LIKELIHOOD DESPECKLING FILTER USING QUADRILATERAL KERNELS
Directory of Open Access Journals (Sweden)
S. Sridevi
2013-02-01
Full Text Available Speckle noise is the most prevalent noise in clinical ultrasound images. It appears as light and dark spots and renders pixel intensities unreliable. In fetal ultrasound images, edges and local fine details are essential for obstetricians and gynecologists to carry out prenatal diagnosis of congenital heart disease. A robust despeckling filter must therefore be devised to suppress speckle noise efficiently while simultaneously preserving these features. The proposed filter generalizes the Rayleigh maximum likelihood filter by exploiting statistical tools as tuning parameters and uses differently shaped quadrilateral kernels to estimate the noise-free pixel from its neighborhood. The performance of several filters, namely the Median, Kuwahara, Frost, Homogeneous mask and Rayleigh maximum likelihood filters, is compared with that of the proposed filter in terms of PSNR and image profile. The proposed filter surpasses the conventional filters.
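The Rayleigh maximum likelihood idea behind such filters is simple: within a kernel, the ML estimate of the Rayleigh scale is sigma = sqrt(mean(x^2)/2), and that estimate replaces the center pixel. The sketch below uses a plain 3x3 square kernel and a tiny synthetic image; the paper's contribution, the quadrilateral kernel shapes, is not implemented here:

```python
import math

def rayleigh_ml_filter(img):
    """Replace each interior pixel with the Rayleigh ML scale estimate
    sigma = sqrt(mean(x^2)/2) of its 3x3 neighborhood (borders kept)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = math.sqrt(sum(x * x for x in window)
                                  / (2 * len(window)))
    return out

img = [[10.0] * 5 for _ in range(5)]
img[2][2] = 40.0  # an isolated speckle spike
filtered = rayleigh_ml_filter(img)
print(filtered[2][2] < img[2][2])  # True: spike suppressed
```

A real implementation would also rescale the output, since the Rayleigh scale sits below the distribution's mean.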
Determining the likelihood of pauses and surges in global warming
Schurer, Andrew P.; Hegerl, Gabriele C.; Obrochta, Stephen P.
2015-07-01
The recent warming "hiatus" is subject to intense interest, with proposed causes including natural forcing and internal variability. Here we derive samples of all natural and internal variability from observations and a recent proxy reconstruction to investigate the likelihood that these two sources of variability could produce a hiatus or rapid warming in surface temperature. The likelihood is found to be consistent with that calculated previously for models and exhibits a similar spatial pattern, with an Interdecadal Pacific Oscillation-like structure, although with more signal in the Atlantic than in model patterns. The number and length of events increases if natural forcing is also considered, particularly in the models. From the reconstruction it can be seen that large eruptions, such as Mount Tambora in 1815, or clusters of eruptions, may result in a hiatus of over 20 years, a finding supported by model results.
Empirical likelihood-based tests for stochastic ordering
BARMI, HAMMOU EL; MCKEAGUE, IAN W.
2013-01-01
This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
Likelihood inference for a nonstationary fractional autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2010-01-01
This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...
Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation
Rajiv D. Banker
1993-01-01
This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical frontier...
Posteriori Probabilities and Likelihoods Combination for Speech and Speaker Recognition
BenZeghiba, Mohamed Faouzi; Bourlard, Hervé
2004-01-01
This paper investigates a new approach to perform simultaneous speech and speaker recognition. The likelihood estimated by a speaker identification system is combined with the posterior probability estimated by the speech recognizer. So, the joint posterior probability of the pronounced word and the speaker identity is maximized. A comparison study with other standard techniques is carried out in three different applications, (1) closed set speech and speaker identification, (2) open set spee...
Estimating small signals by using maximum likelihood and Poisson statistics
Hannam, M D
1999-01-01
Estimation of small signals from counting experiments with backgrounds larger than signals is solved using maximum likelihood estimation for situations in which both signal and background statistics are Poissonian. Confidence levels are discussed, and Poisson, Gauss and least-squares fitting methods are compared. Efficient algorithms that estimate signal strengths and confidence levels are devised for computer implementation. Examples from simulated data and a low count rate experiment in nuclear physics are given. (author)
Vibrational mode analysis using maximum likelihood and maximum entropy
International Nuclear Information System (INIS)
Redondo, A.; Sinha, D.N.
1993-01-01
A simple algorithm is presented that uses the maximum likelihood and maximum entropy approaches to determine the vibrational modes of elastic bodies. This method assumes that the vibrational frequencies have been previously determined, but the modes to which they correspond are unknown. Although the method is illustrated through the analysis of simulated vibrational modes for a flat rectangular plate, it has broad applicability to any experimental technique in which spectral frequencies can be associated to specific modes by means of a mathematical model
Applying exclusion likelihoods from LHC searches to extended Higgs sectors
Bechtle, Philip; Heinemeyer, Sven; Stål, Oscar; Stefaniak, Tim; Weiglein, Georg
2015-09-01
LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion - as a narrow resonance model - based on the full dataset. In addition to publishing an exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the search and the rate measurements of the SM-like Higgs in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org.
Approximate maximum likelihood estimation for population genetic inference.
Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas
2017-11-27
In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods such as Approximate Bayesian Computation (ABC) can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable efforts into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
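The stochastic-approximation idea above can be shown on a deliberately simple problem where the summary statistic is a sample mean of Poisson draws: a Robbins-Monro iteration moves the parameter along the simulated ascent direction. Everything here (the model, the target value, the step schedule) is an invented toy, far simpler than the population genetic setting:

```python
import math, random

def poisson_draw(lam, rng):
    # Knuth's algorithm; adequate for the modest rates used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
s_obs = 4.0          # "observed" summary statistic (a sample mean)
theta = 1.0          # starting guess for the Poisson rate
for k in range(1, 3001):
    # Simulated summary at the current parameter value
    sim = sum(poisson_draw(theta, rng) for _ in range(20)) / 20.0
    # Robbins-Monro step along the stochastic ascent direction
    theta += (1.0 / k) * (s_obs - sim)
    theta = max(theta, 0.1)   # keep the rate positive
print(3.2 < theta < 4.8)
```

Because E[s] = theta for a Poisson mean, the iteration settles near the value whose simulated summaries match the observed one, which is the essence of the simulated-gradient MLE.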
Maximum likelihood as a common computational framework in tomotherapy
International Nuclear Information System (INIS)
Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.
1998-01-01
Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
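The maximum likelihood estimator referred to above is, for emission-type problems, the familiar MLEM iteration. The sketch below uses an invented 4x3 system matrix and noiseless data, not a tomotherapy geometry:

```python
def mlem_step(lam, A, y):
    """One MLEM update: lam_j <- (lam_j / s_j) * sum_i A_ij * y_i / (A lam)_i,
    where s_j = sum_i A_ij is the sensitivity of element j."""
    m, n = len(A), len(lam)
    proj = [sum(A[i][j] * lam[j] for j in range(n)) for i in range(m)]
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    return [lam[j] / sens[j] * sum(A[i][j] * y[i] / proj[i] for i in range(m))
            for j in range(n)]

# Tiny consistent system: 4 "detector" rows, 3 unknown emission intensities
A = [[1.0, 0.5, 0.0],
     [0.2, 1.0, 0.3],
     [0.0, 0.4, 1.0],
     [0.6, 0.2, 0.8]]
lam_true = [2.0, 1.0, 3.0]
y = [sum(A[i][j] * lam_true[j] for j in range(3)) for i in range(4)]

lam = [1.0, 1.0, 1.0]              # flat nonnegative start
for _ in range(2000):
    lam = mlem_step(lam, A, y)
proj = [sum(A[i][j] * lam[j] for j in range(3)) for i in range(4)]
print(all(abs(p - t) / t < 0.02 for p, t in zip(proj, y)))
```

The same multiplicative update, with different system matrices, is what lets one algorithm serve image reconstruction, dose reconstruction, and planning optimization.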
Maximum likelihood estimation of motor unit firing pattern statistics.
Navallas, Javier; Malanda, Armando; Rodriguez-Falces, Javier
2014-05-01
Estimation of motor unit firing pattern statistics is a valuable method in physiological studies and a key procedure in electromyographic (EMG) decomposition algorithms. However, if any firings within the pattern are undetected or missed during the decomposition process, the estimation procedure can be disrupted. In order to provide an optimal solution, we present a maximum likelihood estimator of EMG firing pattern statistics, taking into account that some firings may be undetected. A model of the inter-discharge interval (IDI) probability density function with missing firings has been employed to derive the maximum likelihood estimator of the mean and standard deviation of the IDIs. Actual calculation of the maximum likelihood solution has been obtained by means of numerical optimization. The proposed estimator has been evaluated and compared to other previously developed algorithms by means of simulation experiments and has been tested on real signals. The new estimator was found to be robust and reliable in diverse conditions: IDI distributions with a high coefficient of variation or considerable skewness. Moreover, the proposed estimator outperforms previous algorithms both in simulated and real conditions.
Predicting occupational lung diseases
Suarthana, E.
2008-01-01
This thesis aims at demonstrating the development, validation, and application of prediction models for occupational lung diseases. Prediction models are developed to estimate an individual’s probability of the presence or future likelihood of occurrence of an outcome (i.e. disease of interest or
Evaluation of Dynamic Coastal Response to Sea-level Rise Modifies Inundation Likelihood
Lentz, Erika E.; Thieler, E. Robert; Plant, Nathaniel G.; Stippa, Sawyer R.; Horton, Radley M.; Gesch, Dean B.
2016-01-01
Sea-level rise (SLR) poses a range of threats to natural and built environments, making assessments of SLR-induced hazards essential for informed decision making. We develop a probabilistic model that evaluates the likelihood that an area will inundate (flood) or dynamically respond (adapt) to SLR. The broad-area applicability of the approach is demonstrated by producing 30x30m resolution predictions for more than 38,000 sq km of diverse coastal landscape in the northeastern United States. Probabilistic SLR projections, coastal elevation and vertical land movement are used to estimate likely future inundation levels. Then, conditioned on future inundation levels and the current land-cover type, we evaluate the likelihood of dynamic response versus inundation. We find that nearly 70% of this coastal landscape has some capacity to respond dynamically to SLR, and we show that inundation models over-predict land likely to submerge. This approach is well suited to guiding coastal resource management decisions that weigh future SLR impacts and uncertainty against ecological targets and economic constraints.
Comparisons of likelihood and machine learning methods of individual classification
Guinand, B.; Topchy, A.; Page, K.S.; Burnham-Curtis, M. K.; Punch, W.F.; Scribner, K.T.
2002-01-01
Classification methods used in machine learning (e.g., artificial neural networks, decision trees, and k-nearest neighbor clustering) are rarely used with population genetic data. We compare different nonparametric machine learning techniques with parametric likelihood estimations commonly employed in population genetics for purposes of assigning individuals to their population of origin (“assignment tests”). Classifier accuracy was compared across simulated data sets representing different levels of population differentiation (low and high FST), number of loci surveyed (5 and 10), and allelic diversity (average of three or eight alleles per locus). Empirical data for the lake trout (Salvelinus namaycush) exhibiting levels of population differentiation comparable to those used in simulations were examined to further evaluate and compare classification methods. Classification error rates associated with artificial neural networks and likelihood estimators were lower for simulated data sets compared to k-nearest neighbor and decision tree classifiers over the entire range of parameters considered. Artificial neural networks only marginally outperformed the likelihood method for simulated data (0–2.8% lower error rates). The relative performance of each machine learning classifier improved relative to likelihood estimators for empirical data sets, suggesting an ability to “learn” and utilize properties of empirical genotypic arrays intrinsic to each population. Likelihood-based estimation methods provide a more accessible option for reliable assignment of individuals to the population of origin, given the intricacies of developing and evaluating artificial neural networks. In recent years, characterization of highly polymorphic molecular markers such as mini- and microsatellites and development of novel methods of analysis have enabled researchers to extend investigations of ecological and evolutionary processes below the population level to the level of
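The likelihood-based assignment test at the core of this comparison is straightforward: score each candidate population by the log-likelihood of the individual's multilocus genotype under that population's allele frequencies, assuming independent loci and alleles. The two-population, two-locus frequencies below are made-up toy values, not data from the study:

```python
import math

# Hypothetical allele frequencies: population -> one dict per locus
freqs = {
    "popA": [{"a1": 0.8, "a2": 0.2}, {"b1": 0.6, "b2": 0.4}],
    "popB": [{"a1": 0.3, "a2": 0.7}, {"b1": 0.2, "b2": 0.8}],
}

def assign(genotype):
    """Return the population maximising the genotype log-likelihood.

    `genotype` is a list of (allele, allele) pairs, one per locus;
    loci and alleles within a locus are treated as independent."""
    scores = {}
    for pop, loci in freqs.items():
        ll = 0.0
        for locus, (a, b) in zip(loci, genotype):
            ll += math.log(locus[a]) + math.log(locus[b])
        scores[pop] = ll
    return max(scores, key=scores.get)

print(assign([("a1", "a1"), ("b1", "b2")]))  # prints popA
```

With more loci and realistic allele counts the same scoring rule applies unchanged; only the frequency tables grow.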
Communicating likelihoods and probabilities in forecasts of volcanic eruptions
Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas
2014-02-01
The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and function in the decision making process. Recipients may also differ in their perception of relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as being less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts. These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in
Numerical Prediction of Green Water Incidents
DEFF Research Database (Denmark)
Nielsen, K. B.; Mayer, Stefan
2004-01-01
Green water loads on moored or sailing ships occur when an incoming wave significantly exceeds the freeboard and water runs onto the deck. In this paper, a Navier-Stokes solver with a free surface capturing scheme (i.e. the VOF model; Hirt and Nichols, 1981) is used to numerically model green water...... loads on a moored FPSO exposed to head sea waves. Two cases are investigated: first, green water on a fixed vessel has been analysed, where the resulting water height on deck, and impact pressure on a deck mounted structure have been computed. These results have been compared to experimental data obtained...
Maximum likelihood estimation of semiparametric mixture component models for competing risks data.
Choi, Sangbum; Huang, Xuelin
2014-09-01
In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.
Applying exclusion likelihoods from LHC searches to extended Higgs sectors
International Nuclear Information System (INIS)
Bechtle, Philip; Heinemeyer, Sven; Staal, Oscar; Stefaniak, Tim; Weiglein, Georg
2015-01-01
LHC searches for non-standard Higgs bosons decaying into tau lepton pairs constitute a sensitive experimental probe for physics beyond the Standard Model (BSM), such as supersymmetry (SUSY). Recently, the limits obtained from these searches have been presented by the CMS collaboration in a nearly model-independent fashion, as a narrow resonance model, based on the full 8 TeV dataset. In addition to publishing a 95% C.L. exclusion limit, the full likelihood information for the narrow resonance model has been released. This provides valuable information that can be incorporated into global BSM fits. We present a simple algorithm that maps an arbitrary model with multiple neutral Higgs bosons onto the narrow resonance model and derives the corresponding value for the exclusion likelihood from the CMS search. This procedure has been implemented into the public computer code HiggsBounds (version 4.2.0 and higher). We validate our implementation by cross-checking against the official CMS exclusion contours in three Higgs benchmark scenarios in the Minimal Supersymmetric Standard Model (MSSM), and find very good agreement. Going beyond validation, we discuss the combined constraints of the ττ search and the rate measurements of the SM-like Higgs at 125 GeV in a recently proposed MSSM benchmark scenario, where the lightest Higgs boson obtains SM-like couplings independently of the decoupling of the heavier Higgs states. Technical details for how to access the likelihood information within HiggsBounds are given in the appendix. The program is available at http://higgsbounds.hepforge.org. (orig.)
Early Course in Obstetrics Increases Likelihood of Practice Including Obstetrics.
Pearson, Jennifer; Westra, Ruth
2016-10-01
The Department of Family Medicine and Community Health Duluth has offered the Obstetrical Longitudinal Course (OBLC) as an elective for first-year medical students since 1999. The objective of the OBLC Impact Survey was to assess the effectiveness of the course over the past 15 years. A Qualtrics survey was emailed to participants enrolled in the course from 1999-2014. Data was compiled for the respondent group as a whole as well as four cohorts based on current level of training/practice. Cross-tabulations with Fisher's exact test were applied and odds ratios calculated for factors affecting likelihood of eventual practice including obstetrics. Participation in the OBLC was successful in increasing exposure, awareness, and comfort in caring for obstetrical patients and feeling more prepared for the OB-GYN Clerkship. A total of 50.5% of course participants felt the OBLC influenced their choice of specialty. For participants who are currently physicians, 51% are practicing family medicine with obstetrics or OB-GYN. Of the cohort of family physicians, 65.2% made the decision whether to include obstetrics in practice during medical school. Odds ratios show the likelihood of practicing obstetrics is higher when participants have completed the OBLC and also are practicing in a rural community. Early exposure to obstetrics, as provided by the OBLC, appears to increase the likelihood of including obstetrics in practice, especially if eventual practice is in a rural community. This course may be a tool to help create a pipeline for future rural family physicians providing obstetrical care.
Likelihood of Tree Topologies with Fossils and Diversification Rate Estimation.
Didier, Gilles; Fau, Marine; Laurin, Michel
2017-11-01
Since the diversification process cannot be directly observed at the human scale, it has to be studied from the information available, namely the extant taxa and the fossil record. In this sense, phylogenetic trees including both extant taxa and fossils are the most complete representations of the diversification process that one can get. Such phylogenetic trees can be reconstructed from molecular and morphological data, to some extent. Among the temporal information of such phylogenetic trees, fossil ages are by far the most precisely known (divergence times are inferences calibrated mostly with fossils). We propose here a method to compute the likelihood of a phylogenetic tree with fossils in which the only considered time information is the fossil ages, and apply it to the estimation of the diversification rates from such data. Since it is required in our computation, we provide a method for determining the probability of a tree topology under the standard diversification model. Testing our approach on simulated data shows that the maximum likelihood rate estimates from the phylogenetic tree topology and the fossil dates are almost as accurate as those obtained by taking into account all the data, including the divergence times. Moreover, they are substantially more accurate than the estimates obtained only from the exact divergence times (without taking into account the fossil record). We also provide an empirical example composed of 50 Permo-Carboniferous eupelycosaur (early synapsid) taxa ranging in age from about 315 Ma (Late Carboniferous) to 270 Ma (shortly after the end of the Early Permian). Our analyses suggest a speciation (cladogenesis, or birth) rate of about 0.1 per lineage and per myr, a marginally lower extinction rate, and a considerable hidden paleobiodiversity of early synapsids. [Extinction rate; fossil ages; maximum likelihood estimation; speciation rate.]. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society
Statistical-likelihood Exo-Planetary Habitability Index (SEPHI)
Rodríguez-Mozos, J. M.; Moya, A.
2017-11-01
A new index, the Statistical-likelihood Exo-Planetary Habitability Index (SEPHI), is presented. It has been developed to cover the current and future features required for a classification scheme disentangling whether any exoplanet discovered is potentially habitable compared with life on Earth. SEPHI uses likelihood functions to estimate the habitability potential. It is defined as the geometric mean of four sub-indexes related to four comparison criteria: Is the planet telluric? Does it have an atmosphere dense enough and a gravity compatible with life? Does it have liquid water on its surface? Does it have a magnetic field shielding its surface from harmful radiation and stellar winds? SEPHI can be estimated with only seven physical characteristics: planetary mass, planetary radius, planetary orbital period, stellar mass, stellar radius, stellar effective temperature and planetary system age. We have applied SEPHI to all the planets in the Exoplanet Encyclopaedia using a Monte Carlo method. Kepler-1229b, Kepler-186f and Kepler-442b have the largest SEPHI values assuming certain physical descriptions. Kepler-1229b is the most unexpected planet in this privileged position since no previous study pointed to this planet as a potentially interesting and habitable one. In addition, most of the tidally locked Earth-like planets present a weak magnetic field, incompatible with habitability potential. We must stress that our results are linked to the physics used in this study. Any change in the physics used implies only an updating of the likelihood functions. We have developed a web application allowing the online estimation of SEPHI (http://sephi.azurewebsites.net/).
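The aggregation rule stated above, the geometric mean of the four likelihood-based sub-indexes, is simple to reproduce. The sub-index values below are arbitrary illustrative numbers, not outputs of the paper's likelihood functions:

```python
def sephi(sub_indexes):
    """SEPHI as the geometric mean of four sub-indexes in [0, 1]:
    telluric nature, atmosphere/gravity, surface liquid water,
    and magnetic shielding."""
    assert len(sub_indexes) == 4
    prod = 1.0
    for s in sub_indexes:
        prod *= s
    return prod ** 0.25

print(round(sephi([0.9, 0.8, 0.7, 0.6]), 3))  # prints 0.742
```

A zero in any single criterion drives the whole index to zero, which is the point of choosing a geometric rather than an arithmetic mean: a planet failing one habitability criterion cannot compensate with the others.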
Asymptotic formulae for likelihood-based tests of new physics
Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer
2011-02-01
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
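For the special case of a Poisson counting experiment with expected signal s and background b, the asymptotic discovery test statistic reduces to a closed form, and evaluating it on the Asimov data set (the observed count n replaced by its expectation s + b) gives the median significance directly. A minimal sketch of that special case:

```python
import math

def discovery_significance(n, b):
    """Asymptotic discovery significance Z = sqrt(q0) for a counting
    experiment, with q0 = 2*(n*ln(n/b) - (n - b)); valid for n > b."""
    q0 = 2.0 * (n * math.log(n / b) - (n - b))
    return math.sqrt(q0)

# Asimov data set: replace the observed count by its expectation under s+b
s, b = 10.0, 100.0
print(round(discovery_significance(s + b, b), 3))  # prints 0.984
```

Note that this is slightly below the familiar s/sqrt(b) = 1.0 approximation, which it reproduces in the limit s much smaller than b.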
Likelihood-Based Inference in Nonlinear Error-Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...
Improved Likelihood Function in Particle-based IR Eye Tracking
DEFF Research Database (Denmark)
Satria, R.; Sorensen, J.; Hammoud, R.
2005-01-01
In this paper we propose a log likelihood-ratio function of foreground and background models used in a particle filter to track the eye region in dark-bright pupil image sequences. This model fuses information from both dark and bright pupil images and their difference image into one model. Our...... enhanced tracker overcomes the issues of prior selection of static thresholds during the detection of feature observations in the bright-dark difference images. The auto-initialization process is performed using a cascaded classifier trained with AdaBoost and adapted to IR eye images. Experiments show good...
Similar tests and the standardized log likelihood ratio statistic
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1986-01-01
When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. In contrast, there is a 'primitive......' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n -3...
GPU accelerated likelihoods for stereo-based articulated tracking
DEFF Research Database (Denmark)
Friborg, Rune Møllegaard; Hauberg, Søren; Erleben, Kenny
2010-01-01
For many years articulated tracking has been an active research topic in the computer vision community. While working solutions have been suggested, computational time is still problematic. We present a GPU implementation of a ray-casting based likelihood model that is orders of magnitude faster...... than a traditional CPU implementation. We explain the non-intuitive steps required to attain an optimized GPU implementation, where the dominant part is to hide the memory latency effectively. Benchmarks show that computations which previously required several minutes, are now performed in a few seconds....
Maximum-likelihood method for numerical inversion of Mellin transform
International Nuclear Information System (INIS)
Iqbal, M.
1997-01-01
A method is described for inverting the Mellin transform which uses an expansion in Laguerre polynomials and converts the Mellin transform to Laplace transform, then the maximum-likelihood regularization method is used to recover the original function of the Mellin transform. The performance of the method is illustrated by the inversion of the test functions available in the literature (J. Inst. Math. Appl., 20 (1977) 73; Math. Comput., 53 (1989) 589). Effectiveness of the method is shown by results obtained through demonstration by means of tables and diagrams
Nonparametric likelihood based estimation of linear filters for point processes
DEFF Research Database (Denmark)
Hansen, Niels Richard
2015-01-01
We consider models for multivariate point processes where the intensity is given nonparametrically in terms of functions in a reproducing kernel Hilbert space. The likelihood function involves a time integral and is consequently not given in terms of a finite number of kernel evaluations. The main...... the implementation relies crucially on the use of sparse matrices. As an illustration we consider neuron network modeling, and we use this example to investigate how the computational costs of the approximations depend on the resolution of the time discretization. The implementation is available in the R package...
Australian food life style segments and elaboration likelihood differences
DEFF Research Database (Denmark)
Brunsø, Karen; Reid, Mike
As the global food marketing environment becomes more competitive, the international and comparative perspective of consumers' attitudes and behaviours becomes more important for both practitioners and academics. This research employs the Food-Related Life Style (FRL) instrument in Australia...... in order to 1) determine Australian Life Style Segments and compare these with their European counterparts, and to 2) explore differences in elaboration likelihood among the Australian segments, e.g. consumers' interest and motivation to perceive product related communication. The results provide new...
CCD data processor for maximum likelihood feature classification
Benz, H. F.; Kelly, W. L.; Husson, C.; Culotta, P. W.; Snyder, W. E.
1980-01-01
The paper describes an advanced technology development which utilizes a high speed analog/binary CCD correlator to perform the matrix multiplications necessary to implement onboard feature classification. The matrix manipulation module uses the maximum likelihood classification algorithm assuming a Gaussian probability density function. The module will process 16 element multispectral vectors at rates in excess of 500 thousand multispectral vector elements per second. System design considerations for the optimum use of this module are discussed, test results from initial device fabrication runs are presented, and the performance in typical processing applications is described
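The per-vector computation such a module implements is the Gaussian maximum likelihood discriminant: for each class, evaluate the log prior plus the Gaussian log-likelihood of the input vector and pick the maximum. The sketch below uses diagonal covariances and invented two-band class statistics for brevity; the hardware described handles 16-element multispectral vectors via matrix multiplication:

```python
import math

def gaussian_ml_classify(x, classes):
    """Return the class label maximising log prior + Gaussian log-likelihood.

    `classes` maps label -> (mean vector, diagonal variances, prior);
    a diagonal covariance is an illustrative simplification."""
    best_label, best_score = None, None
    for label, (mu, var, prior) in classes.items():
        score = math.log(prior)
        for xi, mi, vi in zip(x, mu, var):
            score += -0.5 * math.log(2.0 * math.pi * vi) - (xi - mi) ** 2 / (2.0 * vi)
        if best_score is None or score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented two-band reflectance statistics for two land-cover classes
classes = {
    "water":      ([20.0, 10.0], [4.0, 4.0], 0.5),
    "vegetation": ([60.0, 80.0], [9.0, 9.0], 0.5),
}
print(gaussian_ml_classify([22.0, 12.0], classes))  # prints water
```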
Oestrus Detection in Dairy Cows Using Likelihood Ratio Tests
DEFF Research Database (Denmark)
Jónsson, Ragnar Ingi; Björgvinssin, Trausti; Blanke, Mogens
2008-01-01
were identified for the ensemble and for the individual cows. A diurnal filter was adapted to remove the daily variation of the individual. Change detection algorithms were designed for the actual probability densities, which were Rayleigh distributed with individual parameters for each cow....... A generalized likelihood ratio algorithm was derived for the compensated activity signal and the detection algorithm was tested on 2323 days of activity, which contained 42 oestruses on 12 cows in total. The application of statistical change detection methods is a new approach for detecting oestrus in dairy cows...... and the results are shown to outperform earlier approaches with respect to combined statistics of false alarms and missed detections...
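A generalized likelihood ratio test of the kind described, for a change in the scale of Rayleigh-distributed activity, replaces the unknown post-change parameter by its maximum likelihood estimate within the tested window. The baseline scale, window length, and sample generator below are illustrative, not the paper's fitted values:

```python
import math
import random

def rayleigh_glr(window, sigma0_sq):
    """GLR statistic for a shift in the Rayleigh scale parameter.

    Rayleigh log-likelihood: sum(log x_i) - n*log(sigma^2) - S/(2*sigma^2)
    with S = sum(x_i^2); the alternative's MLE is sigma1^2 = S/(2n), so
    GLR = n*log(sigma0^2/sigma1^2) + S/(2*sigma0^2) - n, always >= 0."""
    n = len(window)
    s = sum(x * x for x in window)
    sigma1_sq = s / (2.0 * n)
    return n * math.log(sigma0_sq / sigma1_sq) + s / (2.0 * sigma0_sq) - n

def rayleigh_sample(sigma, n):
    # inverse-CDF sampling; 1 - U keeps the argument of log in (0, 1]
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
            for _ in range(n)]

random.seed(0)
glr_quiet = rayleigh_glr(rayleigh_sample(1.0, 50), sigma0_sq=1.0)    # baseline activity
glr_oestrus = rayleigh_glr(rayleigh_sample(2.0, 50), sigma0_sq=1.0)  # elevated activity
print(glr_quiet < glr_oestrus)  # prints True
```

An alarm would be raised when the statistic exceeds a threshold chosen to balance false alarms against missed detections.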
Vector Antenna and Maximum Likelihood Imaging for Radio Astronomy
2016-03-05
Maximum Likelihood Imaging for Radio Astronomy. Mary Knapp, Frank Robey, Ryan Volz, Frank Lind, Alan Fenn, Alex Morris, Mark Silver, Sarah Klein (haystack.mit.edu). Abstract: Radio astronomy using frequencies less than ~100 MHz provides a window into non-thermal processes in objects ranging from planets...observational astronomy. Ground-based observatories including LOFAR [1], LWA [2], [3], MWA [4], and the proposed SKA-Low [5], [6] are improving access to
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Directory of Open Access Journals (Sweden)
Leonid I. Perlovsky
2013-01-01
Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Zhang, Zhu-ming; Prineas, Ronald J; Eaton, Charles B
2010-07-01
Electrocardiographic (ECG) Q- and ST-T-wave abnormalities predict coronary heart disease (CHD) and total mortality. No comparison has been made of the classification of these abnormalities by the 2 most widely used ECG coding systems for epidemiologic studies: the Minnesota Code (MC) and the Novacode (NC). We evaluated 12-lead electrocardiograms from 64,597 participants (49 to 79 years old, 82% non-Hispanic white) in the Women's Health Initiative clinical trial in 1993 to 1998, with a maximum of 11 years of follow-up. We used MC and NC criteria to identify Q-wave, ST-segment, and T-wave abnormalities for comparison. In total, 3,322 participants (5.1%) died during an average 8-year follow-up, and 1,314 had incident CHD in the baseline cardiovascular disease-free group. Independently, ECG myocardial infarction criteria by the MC or NC were generally equivalent and were strong predictors for CHD death and total mortality (hazard ratio 1.62, 95% confidence interval 1.05 to 2.51 for CHD death; hazard ratio 1.36, 95% confidence interval 1.09 to 1.71 for total mortality) in a multivariable analytic model. Electrocardiograms with major ST-T abnormalities by the MC or NC coding system were stronger in predicting CHD deaths and total mortality than was the presence of Q waves alone. In conclusion, the ECG classification systems for myocardial infarction/ischemia abnormalities from the MC and NC are valuable and useful in clinical trials and epidemiologic studies. ST-T abnormalities are stronger predictors for CHD events and total mortality than isolated Q-wave abnormalities. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Yun-Seong Cho
2017-01-01
Full Text Available We address the performance analysis of the maximum likelihood (ML direction-of-arrival (DOA estimation algorithm in the case of azimuth/elevation estimation of two incident signals using the uniform circular array (UCA. Based on the Taylor series expansion and approximation, we get explicit expressions of the root mean square errors (RMSEs of the azimuths and elevations. The validity of the derived expressions is shown by comparing the analytic results with the simulation results. The derivation in this paper is further verified by illustrating the consistency of the analytic results with the Cramer-Rao lower bound (CRLB.
Cançado, André L F; Duarte, Anderson R; Duczmal, Luiz H; Ferreira, Sabino J; Fonseca, Carlos M; Gontijo, Eliane C D M
2010-10-29
Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistics and the geometric penalty function. We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compared different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. We show that, compared to the other single-objective algorithms, multi-objective scans present better performance, regarding power, sensitivity and positive predictive value. The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi
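The quantity being penalized in all of these variants is Kulldorff's Poisson log-likelihood ratio for a candidate zone; the penalty and cohesion functions themselves are not reproduced here. A minimal sketch of the base statistic, with made-up counts:

```python
import math

def kulldorff_llr(c, n, C, N):
    """Kulldorff's Poisson LLR: c cases among n at-risk inside the zone,
    C cases among N at-risk overall; returns 0 unless the zone's rate
    exceeds the overall rate (high-rate scan)."""
    expected = n * C / N
    if c == 0 or c <= expected:
        return 0.0
    inside = c * math.log(c / expected)
    outside = (C - c) * math.log((C - c) / (C - expected)) if C > c else 0.0
    return inside + outside

# Toy zone: 20 of the 100 state-wide cases fall on 5% of the population
print(round(kulldorff_llr(20, 500, 100, 10000), 2))  # prints 13.98
```

Penalized variants multiply (or otherwise combine) this LLR with a shape-regularity or cohesion score before ranking candidate zones, while the multi-objective scans keep the two objectives separate.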
DarkBit: a GAMBIT module for computing dark matter observables and likelihoods
Bringmann, Torsten; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Kahlhoefer, Felix; Kvellestad, Anders; Putze, Antje; Savage, Christopher; Scott, Pat; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-12-01
We introduce DarkBit, an advanced software code for computing dark matter constraints on various extensions to the Standard Model of particle physics, comprising both new native code and interfaces to external packages. This release includes a dedicated signal yield calculator for gamma-ray observations, which significantly extends current tools by implementing a cascade-decay Monte Carlo, as well as a dedicated likelihood calculator for current and future experiments (gamLike). This provides a general solution for studying complex particle physics models that predict dark matter annihilation to a multitude of final states. We also supply a direct detection package that models a large range of direct detection experiments (DDCalc), and that provides the corresponding likelihoods for arbitrary combinations of spin-independent and spin-dependent scattering processes. Finally, we provide custom relic density routines along with interfaces to DarkSUSY, micrOMEGAs, and the neutrino telescope likelihood package nulike. DarkBit is written in the framework of the Global And Modular Beyond the Standard Model Inference Tool (GAMBIT), providing seamless integration into a comprehensive statistical fitting framework that allows users to explore new models with both particle and astrophysics constraints, and a consistent treatment of systematic uncertainties. In this paper we describe its main functionality, provide a guide to getting started quickly, and show illustrative examples for results obtained with DarkBit (both as a stand-alone tool and as a GAMBIT module). This includes a quantitative comparison between two of the main dark matter codes (DarkSUSY and micrOMEGAs), and application of DarkBit's advanced direct and indirect detection routines to a simple effective dark matter model.
Directory of Open Access Journals (Sweden)
Fonseca Carlos M
2010-10-01
Full Text Available Abstract Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistic have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach uses a multi-objective algorithm to maximize two objectives: the spatial scan statistic and the geometric penalty function. Results and Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection-nodes cohesion function. A disconnection node is defined as a region within a cluster whose removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted from the immense set of possible irregularly shaped candidate clusters. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of the attainment function is used. In this paper we compare different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection-nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized-likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance regarding power, sensitivity and positive predictive value. The multi-objective non-connectivity scan is faster and better suited for the
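The penalized-likelihood idea in this record can be made concrete with a minimal sketch (not the authors' implementation): Kulldorff's Poisson log-likelihood ratio for a candidate zone, multiplied by a geometric compactness penalty of the form 4πA/P². All function names and the toy counts below are invented for illustration.

```python
import math

def poisson_scan_llr(c, e, C):
    """Kulldorff log-likelihood ratio for a candidate zone.

    c: observed cases inside the zone
    e: expected cases inside the zone under the null
    C: total cases on the whole map
    """
    if c <= e:          # only elevated-risk zones are of interest
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

def compactness(area, perimeter):
    """Geometric penalty in (0, 1]: 1 for a circle, smaller for irregular shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

def penalized_llr(c, e, C, area, perimeter):
    """Penalized statistic: irregular zones need stronger evidence to score high."""
    return poisson_scan_llr(c, e, C) * compactness(area, perimeter)

# toy candidate: 30 of 200 total cases observed in a zone where 15 are expected
print(penalized_llr(30, 15.0, 200, area=4.0, perimeter=10.0))
```

In the multi-objective variant described above, the statistic and the penalty would instead be kept as two separate objectives rather than multiplied together.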
Effects of parameter estimation on maximum-likelihood bootstrap analysis.
Ripplinger, Jennifer; Abdo, Zaid; Sullivan, Jack
2010-08-01
Bipartition support in maximum-likelihood (ML) analysis is most commonly assessed using the nonparametric bootstrap. Although bootstrap replicates should theoretically be analyzed in the same manner as the original data, model selection is almost never conducted for bootstrap replicates, substitution-model parameters are often fixed to their maximum-likelihood estimates (MLEs) for the empirical data, and bootstrap replicates may be subjected to less rigorous heuristic search strategies than the original data set. Even though this approach may increase computational tractability, it may also lead to the recovery of suboptimal tree topologies and affect bootstrap values. However, since well-supported bipartitions are often recovered regardless of method, use of a less intensive bootstrap procedure may not significantly affect the results. In this study, we investigate the impact of parameter estimation (i.e., assessment of substitution-model parameters and tree topology) on ML bootstrap analysis. We find that while forgoing model selection and/or setting substitution-model parameters to their empirical MLEs may lead to significantly different bootstrap values, it probably would not change their biological interpretation. Similarly, even though the use of reduced search methods often results in significant differences among bootstrap values, only omitting branch swapping is likely to change any biological inferences drawn from the data. Copyright 2010 Elsevier Inc. All rights reserved.
Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets
Litvinenko, Alexander
2017-11-01
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used to estimate the unknown parameters, such as the covariance length, variance, and smoothness parameter of a Matérn covariance function, by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large, dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate the method on a synthetic example in which the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
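The estimation task described here, maximizing a Gaussian log-likelihood over Matérn parameters, can be sketched for a small dense problem without the H-matrix machinery that makes the large-n case tractable. This is an illustrative toy, not HLIBCov: it fixes the smoothness at ν = 3/2 (which has a closed-form kernel), uses a dense Cholesky, and all names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def matern32(d, sigma2, ell):
    """Matérn covariance with smoothness nu = 3/2 (closed form, no Bessel call)."""
    a = np.sqrt(3.0) * d / ell
    return sigma2 * (1.0 + a) * np.exp(-a)

# synthetic locations on the unit square and a field drawn from the true model
n = 200
X = rng.uniform(size=(n, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
K_true = matern32(D, sigma2=1.0, ell=0.3) + 1e-8 * np.eye(n)
z = np.linalg.cholesky(K_true) @ rng.standard_normal(n)

def neg_loglik(theta):
    """Gaussian negative log-likelihood (up to a constant), dense O(n^3) version."""
    sigma2, ell = np.exp(theta)              # optimize on the log scale
    K = matern32(D, sigma2, ell) + 1e-8 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, z)
    return np.sum(np.log(np.diag(L))) + 0.5 * alpha @ alpha

res = minimize(neg_loglik, x0=np.log([0.5, 0.1]), method="Nelder-Mead")
sigma2_hat, ell_hat = np.exp(res.x)
print(sigma2_hat, ell_hat)
```

The cubic cost of the Cholesky step in `neg_loglik` is exactly what the H-matrix approximation replaces with log-linear work.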
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density map for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function ln[p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Likelihood-Based Inference of B Cell Clonal Families.
Directory of Open Access Journals (Sweden)
Duncan K Ralph
2016-10-01
Full Text Available The human immune system depends on a highly diverse collection of antibody-making B cells. B cell receptor sequence diversity is generated by a random recombination process called "rearrangement," which forms progenitor B cells, followed by a Darwinian process of lineage diversification and selection called "affinity maturation." The resulting receptors can be sequenced in high throughput for research and diagnostics. Such a collection of sequences contains a mixture of various lineages, each of which may be quite numerous, or may consist of only a single member. As a step to understanding the process and result of this diversification, one may wish to reconstruct lineage membership, i.e., to cluster sampled sequences according to which came from the same rearrangement events. We call this clustering problem "clonal family inference." In this paper we describe and validate a likelihood-based framework for clonal family inference based on a multi-hidden Markov Model (multi-HMM) framework for B cell receptor sequences. We describe an agglomerative algorithm to find a maximum likelihood clustering, two approximate algorithms with various trade-offs of speed versus accuracy, and a third, fast algorithm for finding specific lineages. We show that under simulation these algorithms greatly improve upon existing clonal family inference methods, and that they also give significantly different clusters than previous methods when applied to two real data sets.
Safe semi-supervised learning based on weighted likelihood.
Kawakita, Masanori; Takeuchi, Jun'ichi
2014-05-01
We are interested in developing a safe semi-supervised learning method that works in any situation. Semi-supervised learning postulates that n′ unlabeled data are available in addition to n labeled data. However, almost all previous semi-supervised methods require additional assumptions (beyond the availability of unlabeled data) to improve on supervised learning. If such assumptions are not met, the methods can perform worse than supervised learning. Sokolovska, Cappé, and Yvon (2008) proposed a semi-supervised method based on a weighted likelihood approach. They proved that this method asymptotically never performs worse than supervised learning (i.e., it is safe) without any assumption. Their method is attractive because it is easy to implement and is potentially general. Moreover, it is deeply related to a certain statistical paradox. However, the method of Sokolovska et al. (2008) assumes a very limited situation: classification, discrete covariates, n′ → ∞, and a maximum likelihood estimator. In this paper, we extend their method by modifying the weight. We prove that our proposal is safe in a significantly wider range of situations as long as n ≤ n′. Further, we give a geometrical interpretation of the proof of safety through the relationship with the above-mentioned statistical paradox. Finally, we show that the above proposal is asymptotically safe even when n′
Targeted Maximum Likelihood Estimation for Causal Inference in Observational Studies.
Schuler, Megan S; Rose, Sherri
2017-01-01
Estimation of causal effects using observational data continues to grow in popularity in the epidemiologic literature. While many applications of causal effect estimation use propensity score methods or G-computation, targeted maximum likelihood estimation (TMLE) is a well-established alternative method with desirable statistical properties. TMLE is a doubly robust maximum-likelihood-based approach that includes a secondary "targeting" step that optimizes the bias-variance tradeoff for the target parameter. Under standard causal assumptions, estimates can be interpreted as causal effects. Because TMLE has not been as widely implemented in epidemiologic research, we aim to provide an accessible presentation of TMLE for applied researchers. We give step-by-step instructions for using TMLE to estimate the average treatment effect in the context of an observational study. We discuss conceptual similarities and differences between TMLE and 2 common estimation approaches (G-computation and inverse probability weighting) and present findings on their relative performance using simulated data. Our simulation study compares methods under parametric regression misspecification; our results highlight TMLE's property of double robustness. Additionally, we discuss best practices for TMLE implementation, particularly the use of ensembled machine learning algorithms. Our simulation study demonstrates all methods using super learning, highlighting that incorporation of machine learning may outperform parametric regression in observational data settings. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
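A bare-bones version of the TMLE steps described above (initial outcome regression, propensity score, clever covariate, one-parameter fluctuation, substitution estimate of the average treatment effect) can be sketched on simulated data with correctly specified parametric models. This is an illustrative sketch, not the authors' procedure; in practice the initial fits would use ensembled machine learning (super learning) rather than the simple logistic regressions below, and all names and coefficients here are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

rng = np.random.default_rng(1)
n = 20000

# simulated observational data: confounder W, treatment A, binary outcome Y
W = rng.standard_normal(n)
A = rng.binomial(1, expit(0.5 * W))
Y = rng.binomial(1, expit(0.2 + 1.0 * A + 1.0 * W))

def fit_logistic(X, y):
    """Plain maximum-likelihood logistic regression via Newton iterations."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(25):
        p = expit(X1 @ beta)
        grad = X1.T @ (y - p)
        H = X1.T @ (X1 * (p * (1 - p))[:, None])
        beta += np.linalg.solve(H, grad)
    return beta

# Step 1: initial outcome regression Q(A, W)
bq = fit_logistic(np.column_stack([A, W]), Y)
Q1 = expit(bq[0] + bq[1] * 1 + bq[2] * W)    # predicted P(Y=1 | A=1, W)
Q0 = expit(bq[0] + bq[1] * 0 + bq[2] * W)
QA = np.where(A == 1, Q1, Q0)

# Step 2: propensity score g(W) = P(A=1 | W) and the "clever covariate"
bg = fit_logistic(W[:, None], A)
g = expit(bg[0] + bg[1] * W)
Hc = A / g - (1 - A) / (1 - g)

# Step 3: targeting step, a one-parameter logistic fluctuation with offset logit(Q)
def nll(eps):
    p = expit(logit(QA) + eps * Hc)
    return -np.mean(Y * np.log(p) + (1 - Y) * np.log(1 - p))

eps = minimize_scalar(nll, bounds=(-1, 1), method="bounded").x

# Step 4: targeted substitution estimate of the average treatment effect
Q1s = expit(logit(Q1) + eps / g)
Q0s = expit(logit(Q0) - eps / (1 - g))
ate = np.mean(Q1s - Q0s)
print(round(ate, 3))    # doubly robust estimate of the ATE
```

The targeting step is what distinguishes TMLE from plain G-computation: the fluctuation solves the efficient influence function estimating equation, which yields the double-robustness property discussed above.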
Menyoal Elaboration Likelihood Model (ELM dan Teori Retorika
Directory of Open Access Journals (Sweden)
Yudi Perbawaningsih
2012-06-01
Full Text Available Abstract: Persuasion is a communication process to establish or change attitudes, which can be understood through the theory of Rhetoric and the Elaboration Likelihood Model (ELM). This study elaborates these theories in a public lecture series intended to persuade students in choosing their concentration of study. Using a survey method, the study finds that, in terms of persuasion effectiveness, it is not quite relevant to separate the message from its source: the quality of the source is determined by the quality of the message, and vice versa. Separating the persuasion process into the two routes described by ELM theory would therefore not be relevant.
tmle : An R Package for Targeted Maximum Likelihood Estimation
Directory of Open Access Journals (Sweden)
Susan Gruber
2012-11-01
Full Text Available Targeted maximum likelihood estimation (TMLE) is a general approach for constructing an efficient, double-robust, semi-parametric substitution estimator of a causal effect parameter or statistical association measure. tmle is a recently developed R package that implements TMLE of the effect of a binary treatment at a single point in time on an outcome of interest, controlling for user-supplied covariates. Available parameters include the additive treatment effect, relative risk, odds ratio, and the controlled direct effect of a binary treatment controlling for a binary intermediate variable on the pathway from treatment to the outcome. Estimation of the parameters of a marginal structural model is also available. The package allows outcome data with missingness, and experimental units that contribute repeated records of the point-treatment data structure, thereby allowing the analysis of longitudinal data structures. Relevant factors of the likelihood may be modeled or fit data-adaptively according to user specifications, or passed in from an external estimation procedure. Effect estimates, variances, p values, and 95% confidence intervals are provided by the software.
Likelihood inference for a fractionally cointegrated vector autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2012-01-01
We consider model-based inference in a fractionally cointegrated (or cofractional) vector autoregressive model with a restricted constant term, based on the Gaussian likelihood conditional on initial values. The model nests the I(d) VAR model. We give conditions on the parameters such that the process X_{t} is fractional of order d and cofractional of order d-b; that is, there exist vectors β for which β'X_{t} is fractional of order d-b, and no other fractionality order is possible. We define the statistical model by 0 < b ≤ d, but conduct inference when the true values satisfy b0 ≥ 1/2 and d0-b0 ... process in the parameters when errors are i.i.d. with suitable moment conditions and initial values are bounded. When the limit is deterministic, this implies uniform convergence in probability of the conditional likelihood function. If the true value b0 > 1/2, we prove that the limit distribution of (β...
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.
Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens
2016-01-01
In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.
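The core operation described above, profiling the likelihood by stepping one parameter along a grid while re-optimizing all others, can be sketched in a few lines. This toy example (an exponential-decay model with invented data; not Data2Dynamics or dMod/cOde) shows how a finite profile-based confidence interval indicates an identifiable parameter; a flat or one-sided profile would instead flag the parameter as a reduction candidate.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# toy model y = A * exp(-k * t) + noise, parameters theta = (A, k)
t = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.7 * t) + 0.05 * rng.standard_normal(t.size)

def nll(theta):
    """Gaussian negative log-likelihood with known noise sd 0.05 (constant dropped)."""
    A, k = theta
    r = y - A * np.exp(-k * t)
    return 0.5 * np.sum((r / 0.05) ** 2)

fit = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
nll_min = fit.fun

# profile likelihood for k: fix k on a grid, re-optimize the nuisance parameter A
ks = np.linspace(0.5, 0.9, 81)
profile = []
for k in ks:
    inner = minimize(lambda th: nll([th[0], k]), x0=[fit.x[0]], method="Nelder-Mead")
    profile.append(inner.fun - nll_min)
profile = np.array(profile)

# pointwise 95% likelihood-ratio confidence region: profile below chi2(1, 0.95) / 2
inside = ks[profile < 3.84 / 2]
print(inside.min(), inside.max())   # a finite interval: k is identifiable
```

For a structurally non-identifiable parameter, `inside` would extend to the edge of any grid, which is the signature the reduction strategies in the paper act on.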
Evaluation of predictive models for delayed graft function of deceased kidney transplantation.
Zhang, Huanxi; Zheng, Linli; Qin, Shuhang; Liu, Longshan; Yuan, Xiaopeng; Fu, Qian; Li, Jun; Deng, Ronghai; Deng, Suxiong; Yu, Fangchao; He, Xiaoshun; Wang, Changxi
2018-01-05
This study aimed to evaluate the predictive power of five available delayed graft function (DGF) prediction models for kidney transplants in the Chinese population. A total of 711 deceased-donor renal transplant cases from the China Donation after Citizen's Death Program at our center between February 2007 and August 2016 were analyzed using the five predictive models (Irish 2010, Irish 2003, Chaphal 2014, Zaza 2015, Jeldres 2009). Among the five, the Irish 2010 model performed best in the Chinese population, with an area under the receiver operating characteristic (ROC) curve of 0.737. The Hosmer-Lemeshow goodness-of-fit test showed strong agreement between the DGF risk calculated by the Irish 2010 model and the observed DGF incidence (p = 0.887). For clinical use of the Irish 2010 model, the optimal upper cut-off was set to 0.5, giving the best positive likelihood ratio, while the lower cut-off was set to 0.1, giving the best negative likelihood ratio. In the subgroup of donors aged ≤ 5 years, however, the observed DGF incidence was significantly higher than the DGF risk calculated by the Irish 2010 model (27% vs. 9%). The Irish 2010 model thus has the best predictive power for DGF risk in the Chinese population among the five models, but it may not be suitable for allograft recipients whose donors were aged ≤ 5 years.
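The positive and negative likelihood ratios used above to choose the 0.5 and 0.1 cut-offs are simple functions of sensitivity and specificity: LR+ = sens/(1 − spec) and LR− = (1 − sens)/spec. A sketch on synthetic, calibrated risks (hypothetical data, not the study's cohort):

```python
import numpy as np

def likelihood_ratios(risk, outcome, cutoff):
    """Positive and negative likelihood ratios of the test 'risk >= cutoff'."""
    pred = risk >= cutoff
    tp = np.sum(pred & (outcome == 1))
    fn = np.sum(~pred & (outcome == 1))
    fp = np.sum(pred & (outcome == 0))
    tn = np.sum(~pred & (outcome == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens / (1 - spec), (1 - sens) / spec

# synthetic calibrated risks: each outcome is drawn with probability equal to the risk
rng = np.random.default_rng(3)
risk = rng.beta(2, 5, size=5000)
outcome = rng.binomial(1, risk)

for cutoff in (0.1, 0.5):
    lr_pos, lr_neg = likelihood_ratios(risk, outcome, cutoff)
    print(f"cutoff={cutoff}: LR+={lr_pos:.2f}, LR-={lr_neg:.2f}")
```

Scanning `cutoff` over a grid and picking the values that maximize LR+ (upper cut-off) and minimize LR− (lower cut-off) reproduces the kind of two-threshold rule described in the abstract.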
Predictors of likelihood of speaking up about safety concerns in labour and delivery.
Lyndon, Audrey; Sexton, J Bryan; Simpson, Kathleen Rice; Rosenstein, Alan; Lee, Kathryn A; Wachter, Robert M
2012-09-01
Despite widespread emphasis on promoting 'assertive communication' by caregivers as essential to patient-safety-improvement efforts, little is known about when and how clinicians speak up to address safety concerns. In this cross-sectional study, the authors use a new measure of speaking up to begin exploring this issue in maternity care. The authors developed a scenario-based measure of clinicians' assessment of potential harm and likelihood of speaking up in response to perceived harm. The authors embedded this scale in a survey with measures of safety climate, teamwork climate, disruptive behaviour, work stress, and personality traits of bravery and assertiveness. The survey was distributed to all registered nurses and obstetricians practising in two US Labour & Delivery units. The response rate was 54% (125 of 230 potential respondents). Respondents were experienced clinicians (13.7±11 years in specialty). A higher perception of harm, respondent role, specialty experience and site predicted the likelihood of speaking up when controlling for bravery and assertiveness. Physicians rated potential harm in common clinical scenarios lower than nurses did (7.5 vs 8.4 on a 2-10 scale; p ...) ... climate scores. Differing assessments of potential harms inherent in everyday practice may be a target for teamwork intervention in maternity care.
Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.
2013-10-01
At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on a joint analysis of the dry- and wet-state projection data, we reconstruct a residual neutron attenuation image with a penalized-likelihood method using an edge-preserving Huber penalty function, whose two parameters control how well jumps in the reconstruction are preserved and how strongly noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven cross-validation method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the penalized-likelihood reconstruction is visually sharper than one yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross-validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
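The reconstruction idea, a Poisson likelihood combined with an edge-preserving Huber penalty on differences between neighboring values, can be sketched in one dimension. This is a toy illustration with invented data and regularization parameters, not the facility's reconstruction code; the Huber penalty is quadratic for small differences (smoothing noise) and linear for large ones (preserving jumps).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# piecewise-constant true signal observed through Poisson counts
truth = np.concatenate([np.full(40, 5.0), np.full(40, 20.0)])
counts = rng.poisson(truth)

def huber(t, delta):
    """Quadratic for |t| <= delta, linear beyond: edge-preserving roughness cost."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def objective(u, lam=2.0, delta=1.0):
    """Poisson negative log-likelihood plus Huber penalty on neighbor differences."""
    lam_img = np.exp(u)                       # positivity via log parametrization
    nll = np.sum(lam_img - counts * u)        # Poisson NLL up to a constant
    penalty = lam * np.sum(huber(np.diff(u), delta))
    return nll + penalty

res = minimize(objective, x0=np.log(counts + 1.0), method="L-BFGS-B")
recon = np.exp(res.x)
print(recon[:5].round(1), recon[-5:].round(1))
```

The two knobs `lam` and `delta` play the role of the two penalty parameters discussed above; the paper's contribution is choosing them by cross-validation rather than by hand.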
Thompson, William C; Newman, Eryn J
2015-08-01
Forensic scientists have come under increasing pressure to quantify the strength of their evidence, but it is not clear which of several possible formats for presenting quantitative conclusions will be easiest for lay people, such as jurors, to understand. This experiment examined the way that people recruited from Amazon's Mechanical Turk (n = 541) responded to 2 types of forensic evidence (a DNA comparison and a shoeprint comparison) when an expert explained the strength of this evidence in 3 different ways: using random match probabilities (RMPs), likelihood ratios (LRs), or verbal equivalents of likelihood ratios (VEs). We found that verdicts were sensitive to the strength of DNA evidence regardless of how the expert explained it, but verdicts were sensitive to the strength of shoeprint evidence only when the expert used RMPs. The weight given to DNA evidence was consistent with the predictions of a Bayesian network model that incorporated the perceived risk of a false match from 3 causes (coincidence, a laboratory error, and a frame-up), but shoeprint evidence was undervalued relative to the same Bayesian model. Fallacious interpretations of the expert's testimony (consistent with the source probability error and the defense attorney's fallacy) were common and were associated with the weight given to the evidence and verdicts. The findings indicate that perceptions of forensic science evidence are shaped by prior beliefs and expectations as well as expert testimony and consequently that the best way to characterize and explain forensic evidence may vary across forensic disciplines. (c) 2015 APA, all rights reserved.
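The relationship between the presentation formats rests on Bayes' theorem in odds form: under a simple "same source vs. coincidence" model, a random match probability corresponds to a likelihood ratio of 1/RMP, which multiplies the prior odds to give posterior odds. A minimal sketch (the numbers are illustrative, not taken from the experiment):

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

# a random match probability of 1 in a million corresponds (under the
# simple 'same source vs. coincidence' model) to LR = 1 / RMP
rmp = Fraction(1, 1_000_000)
lr = 1 / rmp

# a skeptical prior of 1:10,000 that the defendant is the source
post = posterior_odds(Fraction(1, 10_000), lr)
prob = post / (1 + post)
print(post, float(prob))    # posterior odds 100, i.e. probability ≈ 0.990
```

The source probability error mentioned above consists of skipping the prior entirely and reading 1 − RMP directly as the probability of guilt, which this calculation shows can be badly wrong when the prior odds are low.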
Assessing Individual Weather Risk-Taking and Its Role in Modeling Likelihood of Hurricane Evacuation
Stewart, A. E.
2017-12-01
This research focuses upon measuring an individual's level of perceived risk of different severe and extreme weather conditions using a new self-report measure, the Weather Risk-Taking Scale (WRTS). For 32 severe and extreme situations in which people could perform an unsafe behavior (e.g., remaining outside with lightning striking close by, driving over roadways covered with water, not evacuating ahead of an approaching hurricane), people rated: (1) their likelihood of performing the behavior, (2) the perceived risk of performing the behavior, (3) the expected benefits of performing the behavior, and (4) whether the behavior has actually been performed in the past. Initial development research with the measure, using 246 undergraduate students, examined its psychometric properties and found that it was internally consistent (Cronbach's α ranged from .87 to .93 for the four scales) and that the scales possessed good temporal (test-retest) reliability (r's ranged from .84 to .91). A second regression study involving 86 undergraduate students found that taking weather risks was associated with having taken similar risks in one's past and with the personality trait of sensation-seeking. Being more attentive to the weather and perceiving its risks when it became extreme was associated with lower likelihoods of taking weather risks (overall regression model, adjusted R² = 0.60). A third study involving 334 people examined the contributions of weather risk perceptions and risk-taking in modeling the self-reported likelihood of complying with a recommended evacuation ahead of a hurricane. Here, higher perceptions of hurricane risks and lower perceived benefits of risk-taking, along with fear of severe weather and hurricane personal self-efficacy ratings, were all statistically significant contributors to the likelihood of evacuating ahead of a hurricane. Psychological rootedness and attachment to one's home also tend to predict lack of evacuation. This research highlights the
Kim, Sunghee; Seo, Dong-Jun; Riazi, Hamideh; Shin, Changmin
2014-11-01
An ensemble data assimilation (DA) procedure is developed and evaluated for the Hydrologic Simulation Program - Fortran (HSPF), a widely used watershed water quality model. The procedure aims at improving the accuracy of short-range water quality prediction by updating the model initial conditions (IC) based on real-time observations of hydrologic and water quality variables. The observations assimilated include streamflow, biochemical oxygen demand (BOD), dissolved oxygen (DO), chlorophyll a (CHL-a), nitrate (NO3), phosphate (PO4) and water temperature (TW). The DA procedure uses the maximum likelihood ensemble filter (MLEF), which is capable of handling both nonlinear model dynamics and nonlinear observation equations, in a fixed-lag smoother formulation. For evaluation, the DA procedure was applied to the Kumho Catchment of the Nakdong River Basin in the Republic of Korea. A set of performance measures was used to evaluate analysis and prediction of streamflow and water quality variables. To remove systematic errors in the model simulation originating from structural and parametric errors, a parsimonious bias correction procedure is incorporated into the observation equation. The results show that the DA procedure substantially improves predictive skill for most variables; reduction in root mean square error ranges from 11% to 60% for Day-1 through 3 predictions for all observed variables except DO. It is seen that MLEF handles highly nonlinear hydrologic and biochemical observation equations very well, and that it is an effective DA technique for water quality forecasting.
Bonnes, Stephanie
2016-01-01
Intimate partner violence is a social and public health problem that is prevalent across the world. In many societies, power differentials in relationships, often supported by social norms that promote gender inequality, lead to incidents of intimate partner violence. Among other factors, both a woman's years of education and educational differences between a woman and her partner have been shown to affect her likelihood of experiencing intimate partner abuse. Using the 2010 Malawian Demographic and Health Survey data to analyze intimate partner violence among 3,893 married Malawian women and their husbands, this article focuses on understanding the effect of educational differences between husband and wife on the likelihood of physical and emotional abuse within a marriage. The results from logistic regression models show that a woman's level of education is a significant predictor of her likelihood of experiencing intimate partner violence by her current husband, but that this effect is contingent on her husband's level of education. This study demonstrates the need to educate men alongside women in Malawi to help decrease women's risk of physical and emotional intimate partner violence.
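The contingent effect reported here, a woman's education mattering differently depending on her husband's, is what an interaction term in a logistic regression captures. A sketch on simulated data (all variable names, coefficients, and the data-generating process are invented for illustration; this is not the DHS analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 4000

# hypothetical data: wife's and husband's years of education and an outcome
# indicator whose risk depends on both main effects and their interaction
educ_w = rng.integers(0, 16, n)
educ_h = rng.integers(0, 16, n)
logit_p = -0.5 - 0.10 * educ_w + 0.02 * educ_h + 0.01 * educ_w * educ_h
p = 1 / (1 + np.exp(-logit_p))
df = pd.DataFrame({
    "abuse": rng.binomial(1, p),
    "educ_w": educ_w,
    "educ_h": educ_h,
})

# the formula 'educ_w * educ_h' expands to both main effects plus their product
model = smf.logit("abuse ~ educ_w * educ_h", data=df).fit(disp=0)
print(model.params["educ_w:educ_h"])   # estimate of the interaction coefficient
```

A non-zero interaction coefficient means the (log-odds) effect of the wife's education changes with the husband's education, which is the "contingent" pattern the abstract describes.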
Montgomery County of Maryland — This dataset contains the monthly summary data indicating incidents that occurred in each fire station response area. The summary data is the incident count broken down by...
Police Incident Reports Written
Town of Chapel Hill, North Carolina — This table contains incident reports filed with the Chapel Hill Police Department. Multiple incidents may have been reported at the same time. The most serious...
CSIR Research Space (South Africa)
Kok, S
2012-07-01
Full Text Available This study reports on the asymptotic behavior of the maximum likelihood function, encountered when constructing Kriging approximations using the Gaussian correlation function. Of specific interest is a maximum likelihood function that decreases...
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure-based technique uses the fact that the NBI signal is sparse compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data-aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
Likelihood Approximation With Hierarchical Matrices For Large Spatial Datasets
Litvinenko, Alexander
2017-09-03
We use available measurements to estimate the unknown parameters (variance, smoothness parameter, and covariance length) of a covariance function by maximizing the joint Gaussian log-likelihood function. To overcome cubic complexity in the linear algebra, we approximate the discretized covariance function in the hierarchical (H-) matrix format. The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations. The H-matrix technique allows us to work with general covariance matrices in an efficient way, since H-matrices can approximate inhomogeneous covariance functions, with a fairly general mesh that is not necessarily axes-parallel, and neither the covariance matrix itself nor its inverse have to be sparse. We demonstrate our method with Monte Carlo simulations and an application to soil moisture data. The C, C++ codes and data are freely available.
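The joint Gaussian log-likelihood this record maximises can be written down directly. A minimal dense-algebra sketch follows, using a Cholesky factorisation at O(n^3) cost, which is exactly the quantity the H-matrix format approximates at near-linear cost; the exponential covariance, unit variance, and correlation length 0.3 are illustrative assumptions:

```python
import numpy as np

def gaussian_loglik(y, C):
    """Dense reference for the joint Gaussian log-likelihood of a zero-mean
    field: -0.5 * (n log 2*pi + log det C + y^T C^{-1} y), computed stably
    via the Cholesky factor of the covariance matrix C."""
    n = y.size
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, y)                  # L^{-1} y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))      # log det C
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + alpha @ alpha)

# Hypothetical 1-D example: exponential covariance on 50 locations.
x = np.linspace(0.0, 1.0, 50)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)
rng = np.random.default_rng(0)
y = np.linalg.cholesky(C) @ rng.standard_normal(50)
ll = gaussian_loglik(y, C)
```

Maximising this function over the covariance parameters (variance, smoothness, correlation length) is the estimation problem; the H-matrix approximation replaces the dense Cholesky with a compressed one so that n can grow to spatial-data scales.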
Music genre classification via likelihood fusion from multiple feature models
Shiu, Yu; Kuo, C.-C. J.
2005-01-01
Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to produce a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
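The second-stage fusion described in this record can be sketched as a weighted sum of per-feature log-likelihoods followed by an argmax. The feature models, scores, and equal weighting below are illustrative assumptions, not the paper's trained GMM/HMM models:

```python
import numpy as np

def fuse_genre_loglikelihoods(ll_per_feature, weights=None):
    """Late-fusion sketch: sum (optionally weighted) per-feature log-likelihoods
    for each genre, then take the argmax as the hard decision (second stage)."""
    ll = np.asarray(ll_per_feature, dtype=float)   # shape (n_features, n_genres)
    w = np.ones(ll.shape[0]) if weights is None else np.asarray(weights, dtype=float)
    fused = w @ ll                                  # shape (n_genres,)
    return int(np.argmax(fused)), fused

# Hypothetical soft scores: three feature models (timbre, rhythm, temporal
# variation) scoring four genres.
soft_scores = [[-10.2, -12.0, -11.5, -13.1],   # timbre model
               [-9.8,  -9.9,  -12.4, -11.0],   # rhythm model
               [-11.1, -10.5, -10.9, -12.2]]   # temporal-variation model
genre, fused = fuse_genre_loglikelihoods(soft_scores)
```

Summing log-likelihoods corresponds to treating the feature models as conditionally independent given the genre; unequal weights would let more reliable feature models dominate the decision.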
H.264 SVC Complexity Reduction Based on Likelihood Mode Decision
Directory of Open Access Journals (Sweden)
L. Balaji
2015-01-01
Full Text Available H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on a range of electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. These scalings include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method.
H.264 SVC Complexity Reduction Based on Likelihood Mode Decision.
Balaji, L; Thyagharajan, K K
2015-01-01
H.264 Advanced Video Coding (AVC) was extended to Scalable Video Coding (SVC). SVC runs on a range of electronic devices such as personal computers, HDTV, SDTV, IPTV, and full-HDTV, in which users demand various scalings of the same content. These scalings include resolution, frame rate, quality, heterogeneous networks, bandwidth, and so forth. Scaling consumes more encoding time and computational complexity during mode selection. In this paper, to reduce encoding time and computational complexity, a fast mode decision algorithm based on likelihood mode decision (LMD) is proposed. LMD is evaluated in both temporal and spatial scaling. From the results, we conclude that LMD performs well compared to previous fast mode decision algorithms. The comparison parameters are time, PSNR, and bit rate. LMD achieves a time saving of 66.65% with a 0.05% loss in PSNR and a 0.17% increase in bit rate compared with the full search method.
Salient Point and Scale Detection by Minimum Likelihood
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; van Dorst, Pieter
2007-01-01
We propose a novel approach for detection of salient image points and estimation of their intrinsic scales based on the fractional Brownian image model. Under this model images are realisations of a Gaussian random process on the plane. We define salient points as points that have a locally unique...... image structure. Such points are usually sparsely distributed in images and carry important information about the image content. Locality is defined in terms of the measurement scale of the filters used to describe the image structure. Here we use partial derivatives of the image function defined using...... linear scale space theory. We propose to detect salient points and their intrinsic scale by detecting points in scale-space that locally minimise the likelihood under the model....
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
Maximum likelihood estimation of phase-type distributions
DEFF Research Database (Denmark)
Esparza, Luz Judith R
for both univariate and multivariate cases. Methods like the EM algorithm and Markov chain Monte Carlo are applied for this purpose. Furthermore, this thesis provides explicit formulae for computing the Fisher information matrix for discrete and continuous phase-type distributions, which is needed to find......This work is concerned with the statistical inference of phase-type distributions and the analysis of distributions with rational Laplace transform, known as matrix-exponential distributions. The thesis is focused on the estimation of the maximum likelihood parameters of phase-type distributions...... confidence regions for their estimated parameters. Finally, a new general class of distributions, called bilateral matrix-exponential distributions, is defined. These distributions have the entire real line as domain and can be used, for instance, for modelling. In addition, this class of distributions...
Calibration of two complex ecosystem models with different likelihood functions
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can result if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
Maximum Likelihood Blood Velocity Estimator Incorporating Properties of Flow Physics
DEFF Research Database (Denmark)
Schlaikjer, Malene; Jensen, Jørgen Arendt
2004-01-01
)-data under investigation. The flow physic properties are exploited in the second term, as the range of velocity values investigated in the cross-correlation analysis are compared to the velocity estimates in the temporal and spatial neighborhood of the signal segment under investigation. The new estimator...... has been compared to the cross-correlation (CC) estimator and the previously developed maximum likelihood estimator (MLE). The results show that the CMLE can handle a larger velocity search range and is capable of estimating even low velocity levels from tissue motion. The CC and the MLE produce...... for the CC and the MLE. When the velocity search range is set to twice the limit of the CC and the MLE, the number of incorrect velocity estimates are 0, 19.1, and 7.2% for the CMLE, CC, and MLE, respectively. The ability to handle a larger search range and estimating low velocity levels was confirmed...
Incident Information Management Tool
Pejovic, Vladimir
2015-01-01
Flaws of current incident information management at CMS and CERN are discussed. A new data model for a future incident database is proposed and briefly described. A recently developed draft version of a GIS-based tool for incident tracking is presented.
Targeted maximum likelihood estimation for a binary treatment: A tutorial.
Luque-Fernandez, Miguel Angel; Schomaker, Michael; Rachet, Bernard; Schnitzer, Mireille E
2018-04-23
When estimating the average effect of a binary treatment (or exposure) on an outcome, methods that incorporate propensity scores, the G-formula, or targeted maximum likelihood estimation (TMLE) are preferred over naïve regression approaches, which are biased under misspecification of a parametric outcome model. In contrast, propensity score methods require the correct specification of an exposure model. Double-robust methods only require correct specification of either the outcome or the exposure model. Targeted maximum likelihood estimation is a semiparametric double-robust method that improves the chances of correct model specification by allowing for flexible estimation using (nonparametric) machine-learning methods. It therefore requires weaker assumptions than its competitors. We provide a step-by-step guided implementation of TMLE and illustrate it in a realistic scenario based on cancer epidemiology where assumptions about correct model specification and positivity (i.e., when a study participant had 0 probability of receiving the treatment) are nearly violated. This article provides a concise and reproducible educational introduction to TMLE for a binary outcome and exposure. The reader should gain sufficient understanding of TMLE from this introductory tutorial to be able to apply the method in practice. Extensive R-code is provided in easy-to-read boxes throughout the article for replicability. Stata users will find a testing implementation of TMLE and additional material in the Appendix S1 and at the following GitHub repository: https://github.com/migariane/SIM-TMLE-tutorial. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Likelihood ratio tests in rare variant detection for continuous phenotypes.
Zeng, Ping; Zhao, Yang; Liu, Jin; Liu, Liya; Zhang, Liwei; Wang, Ting; Huang, Shuiping; Chen, Feng
2014-09-01
It is believed that rare variants play an important role in human phenotypes; however, the detection of rare variants is extremely challenging due to their very low minor allele frequency. In this paper, the likelihood ratio test (LRT) and restricted likelihood ratio test (ReLRT) are proposed to test the association of rare variants based on the linear mixed effects model, where a group of rare variants are treated as random effects. Like the sequence kernel association test (SKAT), a state-of-the-art method for rare variant detection, LRT and ReLRT can effectively overcome the problem of directionality of effect inherent in the burden test in practice. By taking full advantage of the spectral decomposition, exact finite sample null distributions for LRT and ReLRT are obtained by simulation. We perform extensive numerical studies to evaluate the performance of LRT and ReLRT, and compare them to the burden test, SKAT and SKAT-O. The simulations show that LRT and ReLRT can correctly control the type I error, and the controls are robust to the weights chosen and the number of rare variants under study. LRT and ReLRT behave similarly to the burden test when all the causal rare variants share the same direction of effect, and outperform SKAT across various situations. When both positive and negative effects exist, LRT and ReLRT suffer only small power reductions compared to the other two competing methods; under this case, an additional finding from our simulations is that SKAT-O is no longer the optimal test, and its power is even lower than that of SKAT. The exome sequencing SNP data from Genetic Analysis Workshop 17 were employed to illustrate the proposed methods, and interesting results are described. © 2014 John Wiley & Sons Ltd/University College London.
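The likelihood ratio principle underlying this record can be sketched with a textbook nested-model test. The example below tests a normal mean with the variance profiled out; it is a generic illustration only, since the paper's LRT/ReLRT test a variance component in a linear mixed model and use an exact simulated null rather than this chi-square reference:

```python
import numpy as np
from scipy import stats

def lrt_normal_mean(x, mu0=0.0):
    """Likelihood ratio test sketch: H0: mu = mu0 vs H1: mu free,
    normal model with unknown variance (variance profiled out by its MLE).
    Returns the LR statistic 2*(loglik_alt - loglik_null) and a chi-square
    p-value with 1 degree of freedom."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s2_alt = np.mean((x - x.mean()) ** 2)       # variance MLE under H1
    s2_null = np.mean((x - mu0) ** 2)           # variance MLE under H0
    lr = n * (np.log(s2_null) - np.log(s2_alt))
    p_value = stats.chi2.sf(lr, df=1)
    return lr, p_value
```

Because the sample mean minimises the residual variance, s2_null >= s2_alt and the statistic is always non-negative; it is zero exactly when mu0 equals the sample mean.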
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
Directory of Open Access Journals (Sweden)
Guilherme Bernardino da Cunha
2010-10-01
Full Text Available INTRODUCTION: Malaria is endemic in the Brazilian Legal Amazon, with different risks for each region. The Municipality of Cantá, in the State of Roraima, presented one of the highest annual parasite indices in Brazil for the entire study period, with a value always greater than 50. The present study aimed to use an artificial neural network to predict the incidence of malaria in this municipality, in order to assist health coordinators in planning and resource management. METHODS: Data were collected from the Ministry of Health website, SIVEP-Malária, for 2003 to 2009. An artificial neural network was structured with three neurons in the input layer, two hidden layers, and an output layer with one neuron. The activation function was the sigmoid. Training used the backpropagation method, with a learning rate of 0.05 and momentum of 0.01. The stopping criterion was reaching 20,000 cycles or a target error of 0.001. Data from 2003 to 2008 were used for training and validation, and the results were compared with those of a logistic regression model. RESULTS: For all predicted periods, the artificial neural networks achieved a lower mean squared error and absolute error than the regression model for 2009. CONCLUSIONS: The artificial neural network proved suitable for a malaria prediction system in the studied municipality, yielding predictions with small absolute errors when compared with the logistic regression model and with the observed values.
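The architecture and training setup in this record (3 input neurons, two hidden layers, one sigmoid output, backpropagation with learning rate 0.05 and momentum 0.01) can be sketched in a few lines; the hidden-layer widths are not stated in the abstract, so the width of 4 below is an assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """Sketch of the abstract's network: sizes (3, h, h, 1) with sigmoid units,
    trained online by backpropagation with momentum. Hidden width h=4 is a
    hypothetical choice, not taken from the study."""
    def __init__(self, sizes=(3, 4, 4, 1), lr=0.05, momentum=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes, sizes[1:])]
        self.b = [np.zeros(b) for b in sizes[1:]]
        self.vW = [np.zeros_like(w) for w in self.W]   # momentum buffers
        self.vb = [np.zeros_like(b) for b in self.b]
        self.lr, self.mu = lr, momentum

    def forward(self, x):
        acts = [np.asarray(x, dtype=float)]
        for W, b in zip(self.W, self.b):
            acts.append(sigmoid(acts[-1] @ W + b))
        return acts

    def train_step(self, x, y):
        acts = self.forward(x)
        # output-layer error for squared loss through the sigmoid
        delta = (acts[-1] - y) * acts[-1] * (1.0 - acts[-1])
        for i in reversed(range(len(self.W))):
            grad_W = np.outer(acts[i], delta)
            delta_next = (delta @ self.W[i].T) * acts[i] * (1.0 - acts[i])
            self.vW[i] = self.mu * self.vW[i] - self.lr * grad_W
            self.vb[i] = self.mu * self.vb[i] - self.lr * delta
            self.W[i] += self.vW[i]
            self.b[i] += self.vb[i]
            delta = delta_next
```

The momentum term reuses a fraction of the previous update, which smooths the online gradient steps; the study's stopping rule (20,000 cycles or a target error of 0.001) would wrap the `train_step` loop.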
Estimation for Non-Gaussian Locally Stationary Processes with Empirical Likelihood Method
Directory of Open Access Journals (Sweden)
Hiroaki Ogata
2012-01-01
Full Text Available An application of the empirical likelihood method to non-Gaussian locally stationary processes is presented. Based on the central limit theorem for locally stationary processes, we give the asymptotic distributions of the maximum empirical likelihood estimator and the empirical likelihood ratio statistics, respectively. It is shown that the empirical likelihood method enables us to make inferences on various important indices in a time series analysis. Furthermore, we give a numerical study and investigate a finite sample property.
AZALIA: an A to Z Assessment of the Likelihood of Insider Attack
Energy Technology Data Exchange (ETDEWEB)
Bishop, Matt; Gates, Carrie; Frincke, Deborah A.; Greitzer, Frank L.
2009-05-12
Recent surveys indicate that the "financial impact and operating losses due to insider intrusions are increasing". Within the government, insider abuse by those with access to sensitive or classified material can be particularly damaging. Further, the detection of such abuse is becoming more difficult due to other influences, such as out-sourcing, social networking and mobile computing. This paper focuses on a key aspect of our enterprise-wide architecture: a risk assessment based on predictions of the likelihood that a specific user poses an increased risk of behaving in a manner that is inconsistent with the organization's stated goals and interests. We present a high-level architectural description for an enterprise-level insider threat product and we describe psychosocial factors and associated data needs to recognize possible insider threats.
Number of Siblings During Childhood and the Likelihood of Divorce in Adulthood.
Bobbitt-Zeher, Donna; Downey, Douglas B; Merry, Joseph
2016-11-01
Despite fertility decline across economically developed countries, relatively little is known about the social consequences of children being raised with fewer siblings. Much research suggests that growing up with fewer siblings is probably positive, as children tend to do better in school when sibship size is small. Less scholarship, however, has explored how growing up with few siblings influences children's ability to get along with peers and develop long-term meaningful relationships. If siblings serve as important social practice partners during childhood, individuals with few or no siblings may struggle to develop successful social lives later in adulthood. With data from the General Social Surveys 1972-2012, we explore this possibility by testing whether sibship size during childhood predicts the probability of divorce in adulthood. We find that, among those who ever marry, each additional sibling is associated with a three percent decline in the likelihood of divorce, net of covariates.
A short proof that phylogenetic tree reconstruction by maximum likelihood is hard.
Roch, Sebastien
2006-01-01
Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.
A Short Proof that Phylogenetic Tree Reconstruction by Maximum Likelihood is Hard
Roch, S.
2005-01-01
Maximum likelihood is one of the most widely used techniques to infer evolutionary histories. Although it is thought to be intractable, a proof of its hardness has been lacking. Here, we give a short proof that computing the maximum likelihood tree is NP-hard by exploiting a connection between likelihood and parsimony observed by Tuffley and Steel.
Rizzo, Roberto Emanuele; Healy, David; De Siena, Luca
2017-04-01
The success of any predictive model is largely dependent on the accuracy with which its parameters are known. When characterising fracture networks in rocks, one of the main issues is accurately scaling the parameters governing the distribution of fracture attributes. Optimal characterisation and analysis of fracture lengths and apertures are fundamental to estimate bulk permeability and therefore fluid flow, especially for rocks with low primary porosity where most of the flow takes place within fractures. The main objective of this work is to demonstrate a more accurate statistical approach to increase the utility, meaningfulness, and reliability of data from fractured outcrop analogues. We collected outcrop data from a fractured upper Miocene biosiliceous mudstone formation (California, USA), which exhibits seepage of bitumen-rich fluids through the fractures. The dataset was analysed using Maximum Likelihood Estimators to extract the underlying scaling parameters, and we found a log-normal distribution to be the best representative statistic for both fracture lengths and apertures in the study area. This result can be related to a characteristic length scale, probably the bedding within the sedimentary succession. Finding the best statistical distribution governing a dataset is of critical importance when predicting the tendency of fracture attributes towards small and large scales. The application of Maximum Likelihood Estimators allowed us first to identify the best statistical distribution for fracture attributes measured at outcrop (specifically, length and aperture); second, we used the calculated scaling parameters to generate synthetic fracture networks, which by design are more likely to resemble the distribution and spatial organisation observed at outcrop. Finally, we employed the derived distributions for a 2D estimation of the bulk permeability tensor, yielding consistent values of anisotropic permeability for highly fractured rock masses.
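The log-normal fit this record arrives at has a closed-form maximum likelihood solution; a minimal sketch (the synthetic data and parameter values are illustrative, not the study's measurements):

```python
import numpy as np

def lognormal_mle(lengths):
    """Closed-form MLE for a log-normal fit to fracture lengths or apertures:
    mu and sigma are the mean and (1/n) standard deviation of the log-data.
    Returns the estimates and the maximised log-likelihood, which allows
    comparison against other candidate distributions."""
    logs = np.log(np.asarray(lengths, dtype=float))
    n = logs.size
    mu = logs.mean()
    sigma = logs.std(ddof=0)                      # MLE uses 1/n, not 1/(n-1)
    loglik = (-np.sum(logs) - n * np.log(sigma * np.sqrt(2.0 * np.pi))
              - np.sum((logs - mu) ** 2) / (2.0 * sigma ** 2))
    return mu, sigma, loglik
```

Comparing maximised log-likelihoods (or information criteria built from them) across candidate distributions such as power-law, exponential, and log-normal is the standard way to select the scaling law before generating synthetic networks.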
Directory of Open Access Journals (Sweden)
Wenyu Lv
2014-04-01
Full Text Available Undoubtedly, an accident involving gas is one of the greatest disasters that can occur in a coalmine, so being able to predict when such an accident might occur is essential for loss prevention and the reduction of safety hazards. However, traditional methods of gas safety prediction are hindered by the multi-objective and non-continuous nature of the problem. The coalmine gas prediction model based on multi-sensor data fusion technology (CGPM-MSDFT) was established through analysis of accidents involving gas, using an artificial neural network to fuse multi-sensor data, an improved algorithm to train the network, and an early stopping method to resolve the over-fitting problem. The network test and field application results show that this model can provide a new direction for research into predicting the likelihood of a gas-related incident within a coalmine. It has broad application prospects in coal mining.
Smidt, M.L.; Strobbe, L.J.; Groenewoud, J.M.M.; Wilt, G.J. van der; Zee, K.J. van; Wobbes, Th.
2007-01-01
BACKGROUND: In approximately 40% of the breast cancer patients with sentinel lymph node (SLN) metastases, additional nodal metastases are detected in the completion axillary lymph node dissection (cALND). The MSKCC nomogram can help to quantify a patient's individual risk for non-SLN metastases with
Predicting the likelihood of QA failure using treatment plan accuracy metrics
Kairn, T.; Crowe, S. B.; Kenny, J.; Knight, R. T.; Trapp, J. V.
2014-03-01
This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics, and investigate their usefulness as predictors of quality assurance (QA) success and failure. A small sample of 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared to the gamma pass rates from the QA measurements and correlations were investigated. The mean field area factor provided a threshold field size (5 cm2, equivalent to a 2.2 × 2.2 cm2 square field), below which all beams failed the QA tests. The small aperture score provided a useful predictor of plan failure, when averaged over all beams, despite being weakly correlated with gamma pass rates for individual beams. By contrast, the closed leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise time spent on QA assessments of treatments that are unlikely to pass.
Likelihood ratio model for classification of forensic evidence
Energy Technology Data Exchange (ETDEWEB)
Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)
2009-05-29
One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allowed us to include information about between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this
Likelihood ratio model for classification of forensic evidence
International Nuclear Information System (INIS)
Zadora, G.; Neocleous, T.
2009-01-01
One of the problems in the analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and/or within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allowed us to include information about between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RIb) and after (RIa) the annealing process, in the form dRI = log10|RIa - RIb|. This model gave better results than the model with only between-object variability considered. In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other
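The ratio LR = p(E|H1)/p(E|H2) can be illustrated in one dimension with kernel density estimates of each hypothesis's distribution. This is a deliberately reduced sketch: the paper's models are multivariate and separate between- and within-object variability, while the refractive-index-like numbers below are hypothetical:

```python
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio(e, samples_h1, samples_h2):
    """Univariate forensic likelihood ratio sketch: estimate p(.|H1) and
    p(.|H2) by Gaussian kernel density, then evaluate their ratio at the
    evidence value e. LR > 1 supports H1, LR < 1 supports H2."""
    f1 = gaussian_kde(samples_h1)
    f2 = gaussian_kde(samples_h2)
    return float(f1(e)[0] / f2(e)[0])

# Hypothetical refractive-index-like measurements for two glass use types.
rng = np.random.default_rng(3)
windows = rng.normal(1.518, 0.001, 200)      # H1: window glass
containers = rng.normal(1.521, 0.001, 200)   # H2: container glass
lr = likelihood_ratio(1.5181, windows, containers)
```

The magnitude of the LR, not a hard threshold, is what a forensic examiner reports; values far from 1 in either direction carry the evidential weight.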
A simulation study of likelihood inference procedures in rayleigh distribution with censored data
International Nuclear Information System (INIS)
Baklizi, S. A.; Baker, H. M.
2001-01-01
Inference procedures based on the likelihood function are considered for the one-parameter Rayleigh distribution with type 1 and type 2 censored data. Using simulation techniques, the finite sample performances of the maximum likelihood estimator and the large sample likelihood interval estimation procedures based on the Wald, the Rao, and the likelihood ratio statistics are investigated. It appears that the maximum likelihood estimator is unbiased. The approximate variance estimates obtained from the asymptotic normal distribution of the maximum likelihood estimator are accurate under type 2 censored data, while they tend to be smaller than the actual variances for type 1 censored data of small size. It also appears that interval estimation based on the Wald and Rao statistics requires much larger sample sizes than interval estimation based on the likelihood ratio statistic to attain reasonable accuracy. (authors). 15 refs., 4 tabs
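For complete (uncensored) samples, the Rayleigh maximum likelihood estimator discussed above has a closed form; a small sketch, assuming the standard parameterisation f(x; σ) = (x/σ²)exp(−x²/2σ²). Censored-data inference, as in the paper, requires numerical maximisation instead:

```python
import math
import random

def rayleigh_mle(xs):
    """Closed-form MLE of the Rayleigh scale parameter sigma for a
    complete sample: sigma_hat^2 = sum(x_i^2) / (2 n)."""
    return math.sqrt(sum(x * x for x in xs) / (2 * len(xs)))

# Check on simulated complete data with true sigma = 2.0
# (inverse-CDF sampling: X = sigma * sqrt(-2 ln U)):
random.seed(1)
sample = [2.0 * math.sqrt(-2.0 * math.log(random.random()))
          for _ in range(20000)]
estimate = rayleigh_mle(sample)  # close to 2.0 for large n
```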
Bronchoaspiration: incidence, consequences and management.
Beck-Schimmer, Beatrice; Bonvini, John M
2011-02-01
Aspiration is defined as the inhalation of oropharyngeal or gastric contents into the lower respiratory tract. Upon injury, epithelial cells and alveolar macrophages secrete chemical mediators, attracting and activating neutrophils, which in turn release proteases and reactive oxygen species, degrading the alveolocapillary unit. Aspiration can lead to a range of diseases such as infectious pneumonia, chemical pneumonitis or respiratory distress syndrome with significant morbidity and mortality. It occurs in approximately 3-10 per 10 000 operations with an increased incidence in obstetric and paediatric anaesthesia. Patients are most at risk during induction of anaesthesia and extubation, in particular in emergency situations. The likelihood of significant aspiration can be reduced by fasting, pharmacological intervention and correct anaesthetic management using a rapid sequence induction. Treatment of acid aspiration is by suctioning after witnessed aspiration; antibiotics are indicated in patients with aspiration pneumonia only. Steroids are not proven to improve outcome or reduce mortality. Patients with acute lung injury requiring mechanical ventilation should be ventilated using lung protective strategies with low tidal volumes and low plateau pressure values, attempting to limit peak lung distension and end-expiratory collapse.
Dark matter CMB constraints and likelihoods for poor particle physicists
Energy Technology Data Exchange (ETDEWEB)
Cline, James M.; Scott, Pat, E-mail: jcline@physics.mcgill.ca, E-mail: patscott@physics.mcgill.ca [Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 (Canada)
2013-03-01
The cosmic microwave background provides constraints on the annihilation and decay of light dark matter at redshifts between 100 and 1000, the strength of which depends upon the fraction of energy ending up in the form of electrons and photons. The resulting constraints are usually presented for a limited selection of annihilation and decay channels. Here we provide constraints on the annihilation cross section and decay rate, at discrete values of the dark matter mass m_χ, for all the annihilation and decay channels whose secondary spectra have been computed using PYTHIA in arXiv:1012.4515 ("PPPC 4 DM ID: a poor particle physicist cookbook for dark matter indirect detection"), namely e, μ, τ, V → e, V → μ, V → τ, u, d, s, c, b, t, γ, g, W, Z and h. By interpolating in mass, these can be used to find the CMB constraints and likelihood functions from WMAP7 and Planck for a wide range of dark matter models, including those with annihilation or decay into a linear combination of different channels.
Physical activity may decrease the likelihood of children developing constipation.
Seidenfaden, Sandra; Ormarsson, Orri Thor; Lund, Sigrun H; Bjornsson, Einar S
2018-01-01
Childhood constipation is common. We evaluated children diagnosed with constipation, who were referred to an Icelandic paediatric emergency department, and determined the effect of lifestyle factors on its aetiology. The parents of children who were diagnosed with constipation and participated in a phase IIB clinical trial on laxative suppositories answered an online questionnaire about their children's lifestyle and constipation in March-April 2013. The parents of nonconstipated children that visited the paediatric department of Landspitali University Hospital or an Icelandic outpatient clinic answered the same questionnaire. We analysed responses regarding 190 children aged one year to 18 years: 60 with constipation and 130 without. We found that 40% of the constipated children had recurrent symptoms, 27% had to seek medical attention more than once and 33% received medication per rectum. The 47 of 130 control group subjects aged 10-18 were much more likely to exercise more than three times a week (72%) and for more than an hour (62%) than the 26 of 60 constipated children of the same age (42% and 35%, respectively). Constipation risk factors varied with age and many children diagnosed with constipation had recurrent symptoms. Physical activity may affect the likelihood of developing constipation in older children. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
Frome, E.L.; DuFrain, R.J.
1986-01-01
In vitro dose-response curves are used to describe the relation between chromosome aberrations and radiation dose for human lymphocytes. The lymphocytes are exposed to low-LET radiation, and the resulting dicentric chromosome aberrations follow the Poisson distribution. The expected yield depends on both the magnitude and the temporal distribution of the dose. A general dose-response model that describes this relation has been presented by Kellerer and Rossi (1972, Current Topics on Radiation Research Quarterly 8, 85-158; 1978, Radiation Research 75, 471-488) using the theory of dual radiation action. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting dose-time-response models are intrinsically nonlinear in the parameters. A general-purpose maximum likelihood estimation procedure is described, and estimation for the nonlinear models is illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
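As a hedged illustration of Poisson maximum likelihood for dose-response data: the one-parameter linear model λ_i = α·d_i admits a closed-form MLE, whereas the nonlinear dose-time models in the paper require iterative fitting. The doses and yields below are made up for illustration:

```python
import math

def poisson_loglik(alpha, doses, yields):
    """Poisson log-likelihood for the one-parameter model
    lambda_i = alpha * d_i (constant terms dropped)."""
    return sum(y * math.log(alpha * d) - alpha * d
               for d, y in zip(doses, yields))

def mle_alpha(doses, yields):
    """Setting d/d_alpha loglik = sum(y_i/alpha - d_i) = 0 gives the
    closed-form MLE alpha_hat = sum(y_i) / sum(d_i)."""
    return sum(yields) / sum(doses)

doses = [0.5, 1.0, 2.0, 4.0]   # Gy (illustrative)
yields = [3, 7, 11, 22]        # dicentric counts (illustrative)
a_hat = mle_alpha(doses, yields)
# The closed-form estimate should dominate nearby parameter values:
assert poisson_loglik(a_hat, doses, yields) >= \
       poisson_loglik(a_hat * 1.1, doses, yields)
```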
Stable isotope analysis of white paints and likelihood ratios.
Farmer, N; Meier-Augenstein, W; Lucy, D
2009-06-01
Architectural paints are commonly found as trace evidence at scenes of crime. Currently the most widely used technique for the analysis of architectural paints is Fourier Transformed Infra-Red Spectroscopy (FTIR). There are, however, limitations to the forensic analysis of white paints, and the ability to discriminate between samples. Isotope ratio mass spectrometry (IRMS) has been investigated as a potential tool for the analysis of architectural white paints, where no preparation of samples prior to analysis is required. When stable isotope profiles (SIPs) are compared, there appears to be no relationship between paints from the same manufacturer, or between paints of the same type. Unlike existing techniques, IRMS does not differentiate resin samples solely on the basis of modifier or oil-type, but exploits additional factors linked to samples such as geo-location where oils added to alkyd formulations were grown. In combination with the use of likelihood ratios, IRMS shows potential, with a false positive rate of 2.6% from a total of 1275 comparisons.
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are versatile optical transmitters in nature. Through the direct modulation (DM), the intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by the direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters by suppressing or simply ignoring the other modulation. Nevertheless, through the digital coherent detection, simultaneous intensity and angle modulations (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with the maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability to significantly enhance the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
Quantifying uncertainty, variability and likelihood for ordinary differential equation models
LENUS (Irish Health Repository)
Weisse, Andrea Y
2010-10-28
Background: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results: The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions: While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate the performance and accuracy of the approach and its limitations.
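A minimal sketch of the method of characteristics described above, for the scalar ODE dx/dt = −a·x: along a characteristic the density obeys dρ/dt = −ρ·∂f/∂x = a·ρ, so the original ODE is extended by one extra dimension and integrated (here with explicit Euler; the parameters are illustrative):

```python
import math

def evolve_density(x0, rho0, a, t, steps=10000):
    """Integrate the extended system for dx/dt = -a*x:
        dx/dt   = -a*x          (the original ODE)
        drho/dt = -rho*df/dx = a*rho   (density along the characteristic)
    using explicit Euler."""
    x, rho = x0, rho0
    dt = t / steps
    for _ in range(steps):
        x += dt * (-a * x)
        rho += dt * (a * rho)
    return x, rho

x_t, rho_t = evolve_density(x0=1.0, rho0=0.5, a=0.8, t=2.0)
# Analytic solution for comparison: x(t) = exp(-a t), rho(t) = rho0 * exp(a t)
```

Note how the low-probability tail (small ρ) is reached directly by following a single characteristic, rather than by sampling many Monte Carlo trajectories.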
Family Characteristics Associated with Likelihood of Varicella Vaccination.
Weinmann, Sheila; Mullooly, John P; Drew, Lois; Chun, Colleen S
2016-01-01
The introduction of the varicella vaccine as a routine pediatric immunization in the US, in 1995, provided an opportunity to assess factors associated with uptake of new vaccines in the member population of the Kaiser Permanente Northwest (KPNW) Health Plan. Identify factors associated with varicella vaccination in the KPNW population in the first five years after varicella vaccine was introduced. A retrospective cohort of children under age 13 years between June 1995 and December 1999, without a history of varicella disease was identified using KPNW automated data. Membership records were linked to vaccine databases. Cox regression was used to estimate likelihood of varicella vaccination during the study period in relation to age, sex, primary clinician's specialty, and Medicaid eligibility. For a subset whose parents answered a behavioral health survey, additional demographic and behavioral characteristics were evaluated. Varicella vaccination. We identified 88,646 children under age 13 years without a history of varicella; 22% were vaccinated during the study period. Varicella vaccination was more likely among children who were born after 1995, were not Medicaid recipients, or had pediatricians as primary clinicians. In the survey-linked cohort, positively associated family characteristics included smaller family size; higher socioeconomic status; and parents who were older, were college graduates, reported excellent health, and received influenza vaccination. Understanding predictors of early varicella vaccine-era vaccine acceptance may help in planning for introduction of new vaccines to routine schedules.
Efficient algorithms for maximum likelihood decoding in the surface code
Bravyi, Sergey; Suchara, Martin; Vargo, Alexander
2014-09-01
We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.
Affective mapping: An activation likelihood estimation (ALE) meta-analysis.
Kirby, Lauren A J; Robinson, Jennifer L
2017-11-01
Functional neuroimaging has the spatial resolution to explain the neural basis of emotions. Activation likelihood estimation (ALE), as opposed to traditional qualitative meta-analysis, quantifies convergence of activation across studies within affective categories. Others have used ALE to investigate a broad range of emotions, but without the convenience of the BrainMap database. We used the BrainMap database and analysis resources to run separate meta-analyses on coordinates reported for anger, anxiety, disgust, fear, happiness, humor, and sadness. Resultant ALE maps were compared to determine areas of convergence between emotions, as well as to identify affect-specific networks. Five out of the seven emotions demonstrated consistent activation within the amygdala, whereas all emotions consistently activated the right inferior frontal gyrus, which has been implicated as an integration hub for affective and cognitive processes. These data provide the framework for models of affect-specific networks, as well as emotional processing hubs, which can be used for future studies of functional or effective connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
Kinnear, John; Jackson, Ruth
2017-07-01
Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases that has been shown to affect the accuracy of judgements is that of representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians made judgements of statistical probability, respectively. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ² (1, n=254) = 54.45, p < 0.001), with the representativeness information reducing judgements of statistical probability. One of the causes of this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
The Likelihood of Experiencing Relative Poverty over the Life Course
Rank, Mark R.; Hirschl, Thomas A.
2015-01-01
Research on poverty in the United States has largely consisted of examining cross-sectional levels of absolute poverty. In this analysis, we focus on understanding relative poverty within a life course context. Specifically, we analyze the likelihood of individuals falling below the 20th percentile and the 10th percentile of the income distribution between the ages of 25 and 60. A series of life tables are constructed using the nationally representative Panel Study of Income Dynamics data set. This includes panel data from 1968 through 2011. Results indicate that the prevalence of relative poverty is quite high. Between the ages of 25 and 60, 61.8 percent of the population will experience a year below the 20th percentile, and 42.1 percent will experience a year below the 10th percentile. Characteristics associated with experiencing these levels of poverty include those who are younger, nonwhite, female, not married, with 12 years or less of education, or who have a work disability. PMID:26200781
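The life-table logic behind such cumulative estimates can be sketched as follows: the probability of ever experiencing the event over the age range is one minus the product of the annual probabilities of avoiding it. The hazards below are made up for illustration, not taken from the PSID analysis:

```python
def cumulative_risk(annual_hazards):
    """Life-table style cumulative probability of ever experiencing an
    event: 1 - product over ages of (1 - h_t)."""
    p_never = 1.0
    for h in annual_hazards:
        p_never *= 1.0 - h
    return 1.0 - p_never

# Illustrative constant annual hazard over the 36 ages from 25 to 60:
hazards = [0.026] * 36
risk = cumulative_risk(hazards)  # roughly 0.61
```

Even a modest annual hazard compounds into a high lifetime prevalence, which is the qualitative point of the life-course approach.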
Clear: Composition of Likelihoods for Evolve and Resequence Experiments.
Iranmehr, Arya; Akbari, Ali; Schlötterer, Christian; Bafna, Vineet
2017-06-01
The advent of next generation sequencing technologies has made whole-genome and whole-population sampling possible, even for eukaryotes with large genomes. With this development, experimental evolution studies can be designed to observe molecular evolution "in action" via evolve-and-resequence (E&R) experiments. Among other applications, E&R studies can be used to locate the genes and variants responsible for genetic adaptation. Most existing methods for time-series data analysis assume large population sizes, accurate allele frequency estimates, or wide time spans. These assumptions do not hold in many E&R studies. In this article, we propose a method-composition of likelihoods for evolve-and-resequence experiments (Clear)-to identify signatures of selection in small population E&R experiments. Clear takes whole-genome sequences of pools of individuals as input, and properly addresses heterogeneous ascertainment bias resulting from uneven coverage. Clear also provides unbiased estimates of model parameters, including population size, selection strength, and dominance, while being computationally efficient. Extensive simulations show that Clear achieves higher power in detecting and localizing selection over a wide range of parameters, and is robust to variation of coverage. We applied the Clear statistic to multiple E&R experiments, including data from a study of adaptation of Drosophila melanogaster to alternating temperatures and a study of outcrossing yeast populations, and identified multiple regions under selection with genome-wide significance. Copyright © 2017 by the Genetics Society of America.
Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels
Directory of Open Access Journals (Sweden)
Gabriel N. Maggio
2015-01-01
Full Text Available The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We show that the space compression of the nonlinear channel is an instrumental property of the ST-WMF-MLSD which results in a major reduction of the implementation complexity in intensity modulation and direct detection (IM/DD) fiber optic systems. Moreover, we assess the performance of ST-WMF-MLSD in IM/DD optical systems with chromatic dispersion (CD) and polarization mode dispersion (PMD). Numerical results for a 10 Gb/s, 700 km IM/DD fiber-optic link with 50 ps differential group delay (DGD) show that the number of states of the VD in ST-WMF-MLSD can be reduced ~4 times compared to an oversampled MLSD. Finally, we analyze the impact of imperfect channel estimation on the performance of the ST-WMF-MLSD. Our results show that the performance degradation caused by channel estimation inaccuracies is low and similar to that of existing MLSD schemes (~0.2 dB).
Maximum likelihood estimation for cytogenetic dose-response curves
International Nuclear Information System (INIS)
Frome, E.L; DuFrain, R.J.
1983-10-01
In vitro dose-response curves are used to describe the relation between the yield of dicentric chromosome aberrations and radiation dose for human lymphocytes. The dicentric yields follow the Poisson distribution, and the expected yield depends on both the magnitude and the temporal distribution of the dose for low LET radiation. A general dose-response model that describes this relation has been obtained by Kellerer and Rossi using the theory of dual radiation action. The yield of elementary lesions is κ[γd + g(t, τ)d²], where t is the time and d is dose. The coefficient of the d² term is determined by the recovery function and the temporal mode of irradiation. Two special cases of practical interest are split-dose and continuous exposure experiments, and the resulting models are intrinsically nonlinear in the parameters. A general purpose maximum likelihood estimation procedure is described and illustrated with numerical examples from both experimental designs. Poisson regression analysis is used for estimation, hypothesis testing, and regression diagnostics. Results are discussed in the context of exposure assessment procedures for both acute and chronic human radiation exposure
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Emma L Snary
Full Text Available The genus Henipavirus includes Hendra virus (HeV and Nipah virus (NiV, for which fruit bats (particularly those of the genus Pteropus are considered to be the wildlife reservoir. The recognition of henipaviruses occurring across a wider geographic and host range suggests the possibility of the virus entering the United Kingdom (UK. To estimate the likelihood of henipaviruses entering the UK, a qualitative release assessment was undertaken. To facilitate the release assessment, the world was divided into four zones according to location of outbreaks of henipaviruses, isolation of henipaviruses, proximity to other countries where incidents of henipaviruses have occurred and the distribution of Pteropus spp. fruit bats. From this release assessment, the key findings are that the importation of fruit from Zone 1 and 2 and bat bushmeat from Zone 1 each have a Low annual probability of release of henipaviruses into the UK. Similarly, the importation of bat meat from Zone 2, horses and companion animals from Zone 1 and people travelling from Zone 1 and entering the UK was estimated to pose a Very Low probability of release. The annual probability of release for all other release routes was assessed to be Negligible. It is recommended that the release assessment be periodically re-assessed to reflect changes in knowledge and circumstances over time.
Alzahrani, Majed
2016-03-10
Disclosed are various embodiments for a prediction application to predict a stuck pipe. A linear regression model is generated from hook load readings at corresponding bit depths. A current hook load reading at a current bit depth is compared with a normal hook load reading from the linear regression model. A current hook load greater than a normal hook load for a given bit depth indicates the likelihood of a stuck pipe.
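A hedged sketch of the comparison described in the claim: fit hook load against bit depth by ordinary least squares, then flag a current reading whose hook load exceeds the predicted normal value. The units, readings, and the `margin` parameter are illustrative assumptions, not taken from the disclosure:

```python
def fit_line(depths, hook_loads):
    """Ordinary least-squares fit: hook_load = b0 + b1 * depth."""
    n = len(depths)
    mx = sum(depths) / n
    my = sum(hook_loads) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(depths, hook_loads))
    sxx = sum((x - mx) ** 2 for x in depths)
    b1 = sxy / sxx
    return my - b1 * mx, b1

def stuck_pipe_likely(b0, b1, depth, hook_load, margin=0.0):
    """Flag a reading whose hook load exceeds the normal (predicted)
    value for that bit depth, indicating a possible stuck pipe."""
    return hook_load > b0 + b1 * depth + margin

# Illustrative readings: hook load (klbf) rising linearly with depth (ft)
depths = [1000, 2000, 3000, 4000, 5000]
loads = [110, 120, 130, 140, 150]
b0, b1 = fit_line(depths, loads)
print(stuck_pipe_likely(b0, b1, 6000, 185))  # reading well above the trend
```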
Incident Duration Modeling Using Flexible Parametric Hazard-Based Models
Directory of Open Access Journals (Sweden)
Ruimin Li
2014-01-01
Full Text Available Assessing and prioritizing the duration time and effects of traffic incidents on major roads present significant challenges for road network managers. This study examines the effect of numerous factors associated with various types of incidents on their duration and proposes an incident duration prediction model. Several parametric accelerated failure time hazard-based models were examined, including Weibull, log-logistic, log-normal, and generalized gamma, as well as all models with gamma heterogeneity and flexible parametric hazard-based models with degrees of freedom ranging from one to ten, by analyzing a traffic incident dataset obtained from the Incident Reporting and Dispatching System in Beijing in 2008. Results show that different factors significantly affect the different incident time phases, and the best-fitting distributions differ among phases. Given the best hazard-based models of each incident time phase, the prediction results are reasonable for most incidents. The results of this study can aid traffic incident management agencies not only in implementing strategies that would reduce incident duration, and thus reduce congestion, secondary incidents, and the associated human and economic losses, but also in effectively predicting incident duration time.
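For reference, the Weibull member of the accelerated failure time family examined above has simple closed forms for its survival and hazard functions; a small sketch with illustrative parameters (not estimates from the Beijing dataset):

```python
import math

def weibull_survival(t, scale, shape):
    """S(t) = exp(-(t/scale)^shape): probability the incident lasts
    longer than t."""
    return math.exp(-((t / scale) ** shape))

def weibull_hazard(t, scale, shape):
    """h(t) = (shape/scale) * (t/scale)^(shape-1); the hazard rises
    over time when shape > 1 and falls when shape < 1."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def median_duration(scale, shape):
    """Solve S(t) = 0.5 for t: the median incident duration."""
    return scale * math.log(2.0) ** (1.0 / shape)

# Illustrative parameters: scale 30 min, shape 1.5 (rising hazard)
m = median_duration(30.0, 1.5)
assert abs(weibull_survival(m, 30.0, 1.5) - 0.5) < 1e-9
```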
Directory of Open Access Journals (Sweden)
César da Silva Chagas
2013-04-01
Full Text Available Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has, however, been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), and indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI), derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparing the results with 126 points of reference showed that the resulting ANN map (73.81 %) was superior to the MLC map (57.94 %). The main errors when using the two classifiers were caused by: (a) the geological heterogeneity of the area coupled with problems related to the geological map; (b) the depth of lithic contact and/or rock exposure; and (c) problems with the environmental correlation model used, due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data by an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease by which terrain attributes can be obtained.
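A minimal sketch of the maximum likelihood classification step, assuming (for simplicity) a diagonal-covariance Gaussian per class: each pixel's attribute vector is assigned to the class whose fitted Gaussian gives it the highest log-likelihood. The training vectors below are made up, not from the study:

```python
import math

def fit_class(samples):
    """Per-feature mean and variance for one class (diagonal Gaussian)."""
    n = len(samples)
    d = len(samples[0])
    means = [sum(s[j] for s in samples) / n for j in range(d)]
    variances = [sum((s[j] - means[j]) ** 2 for s in samples) / n
                 for j in range(d)]
    return means, variances

def log_likelihood(x, means, variances):
    """Diagonal-Gaussian log-density of attribute vector x."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xj - m) ** 2 / v)
               for xj, m, v in zip(x, means, variances))

def classify(x, models):
    """Maximum likelihood classification: pick the class with the
    highest log-likelihood for x."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

# Illustrative training pixels: (elevation, NDVI) per soil class
models = {
    "ClassA": fit_class([(100, 0.20), (110, 0.25), (105, 0.22)]),
    "ClassB": fit_class([(300, 0.60), (310, 0.65), (305, 0.62)]),
}
print(classify((108, 0.23), models))  # -> ClassA
```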
Predictors of Likelihood of Speaking Up about Safety Concerns in Labour and Delivery
Lyndon, Audrey; Sexton, J. Bryan; Simpson, Kathleen Rice; Rosenstein, Alan; Lee, Kathryn A.; Wachter, Robert M.
2011-01-01
Background Despite widespread emphasis on promoting “assertive communication” by caregivers as essential to patient safety improvement efforts, fairly little is known about when and how clinicians speak up to address safety concerns. In this cross-sectional study we use a new measure of speaking up to begin exploring this issue in maternity care. Methods We developed a scenario-based measure of clinicians’ assessments of potential harm and their likelihood of speaking up in response to perceived harm. We embedded this scale in a survey with measures of safety climate, teamwork climate, disruptive behaviour, work stress, and personality traits of bravery and assertiveness. The survey was distributed to all registered nurses and obstetricians practicing in two US Labour & Delivery units. Results The response rate was 54% (125 of 230 potential respondents). Respondents were experienced clinicians (13.7 ± 11 years in specialty). Higher perception of harm, respondent role, specialty experience, and site predicted likelihood of speaking up when controlling for bravery and assertiveness. Physicians rated potential harm in common clinical scenarios lower than nurses did (7.5 vs. 8.4 on 2–10 scale; p<0.001). Some participants (12%) indicated they were unlikely to speak up despite perceiving high potential for harm in certain situations. Discussion This exploratory study found nurses and physicians differed in their harm ratings, and harm rating was a predictor of speaking up. This may partially explain persistent discrepancies between physicians and nurses in teamwork climate scores. Differing assessments of potential harms inherent in everyday practice may be a target for teamwork intervention in maternity care. PMID:22927492
DREAM3: network inference using dynamic context likelihood of relatedness and the inferelator.
Directory of Open Access Journals (Sweden)
Aviv Madar
2010-03-01
Full Text Available Many current works aiming to learn regulatory networks from systems biology data must balance model complexity with respect to data availability and quality. Methods that learn regulatory associations based on unit-less metrics, such as Mutual Information, are attractive in that they scale well and reduce the number of free parameters (model complexity) per interaction to a minimum. In contrast, methods for learning regulatory networks based on explicit dynamical models are more complex and scale less gracefully, but are attractive as they may allow direct prediction of transcriptional dynamics and resolve the directionality of many regulatory interactions. We aim to investigate whether scalable information-based methods (like the Context Likelihood of Relatedness method) and more explicit dynamical models (like Inferelator 1.0) prove synergistic when combined. We test a pipeline where a novel modification of the Context Likelihood of Relatedness (mixed-CLR), modified to use time series data, is first used to define likely regulatory interactions, and then Inferelator 1.0 is used for final model selection and to build an explicit dynamical model. Our method ranked 2nd out of 22 in the DREAM3 100-gene in silico networks challenge. Mixed-CLR and Inferelator 1.0 are complementary, demonstrating a large performance gain relative to any single tested method, with precision being especially high at low recall values. Partitioning the provided data set into four groups (knock-down, knock-out, time-series, and combined) revealed that using comprehensive knock-out data alone provides optimal performance. Inferelator 1.0 proved particularly powerful at resolving the directionality of regulatory interactions, i.e. "who regulates who" (approximately of identified true positives were correctly resolved). Performance drops for high in-degree genes, i.e. as the number of regulators per target gene increases, but not with out-degree, i.e. performance is not affected by
System Issues Leading to "Found-on-Floor" Incidents: A Multi-Incident Analysis.
Shaw, James; Bastawrous, Marina; Burns, Susan; McKay, Sandra
2016-11-02
Although attention to patient safety issues in the home care setting is growing, few studies have highlighted health system-level concerns that contribute to patient safety incidents in the home. Found-on-floor (FOF) incidents are a key patient safety issue that is unique to the home care setting and highlights a number of opportunities for system-level improvements to drive enhanced patient safety. We completed a multi-incident analysis of FOF incidents documented in the electronic record system of a home health care agency in Toronto, Canada, for the course of 1 year between January 2012 and February 2013. Length of stay (LOS) was identified as the cross-cutting theme, illustrating the following 3 key issues: (1) in the short LOS group, a lack of information continuity led to missed fall risk information by home care professionals; (2) in the medium LOS group, a lack of personal support worker/carer training in fall prevention led to inadequate fall prevention activity; and (3) in the long LOS group, a lack of accountability policy at a system level led to a lack of fall risk assessment follow-up. Our study suggests that considering LOS in the home care sector helps expose key system-level issues enabling safety incidents such as FOF to occur. Our multi-incident analysis identified a number of opportunities for system-level changes that might improve fall prevention practice and reduce the likelihood of FOF incidents in the home. Specifically, investment in electronic health records that are functional across the continuum of care, further research and understanding of the training and skills of personal support workers, and enhanced incentives or more punitive approaches (depending on the circumstances) to ensure accountability in home safety will strengthen the home care sector and help prevent FOF incidents among older people.
De novo likelihood-based measures for comparing genome assemblies.
Ghodsi, Mohammadreza; Hill, Christopher M; Astrovskaya, Irina; Lin, Henry; Sommer, Dan D; Koren, Sergey; Pop, Mihai
2013-08-22
The current revolution in genomics has been made possible by software tools called genome assemblers, which stitch together DNA fragments "read" by sequencing machines into complete or nearly complete genome sequences. Despite decades of research in this field and the development of dozens of genome assemblers, assessing and comparing the quality of assembled genome sequences still relies on the availability of independently determined standards, such as manually curated genome sequences, or independently produced mapping data. These "gold standards" can be expensive to produce and may only cover a small fraction of the genome, which limits their applicability to newly generated genome sequences. Here we introduce a de novo probabilistic measure of assembly quality which allows for an objective comparison of multiple assemblies generated from the same set of reads. We define the quality of a sequence produced by an assembler as the conditional probability of observing the sequenced reads from the assembled sequence. A key property of our metric is that the true genome sequence maximizes the score, unlike other commonly used metrics. We demonstrate that our de novo score can be computed quickly and accurately in a practical setting even for large datasets, by estimating the score from a relatively small sample of the reads. To demonstrate the benefits of our score, we measure the quality of the assemblies generated in the GAGE and Assemblathon 1 assembly "bake-offs" with our metric. Even without knowledge of the true reference sequence, our de novo metric closely matches the reference-based evaluation metrics used in the studies and outperforms other de novo metrics traditionally used to measure assembly quality (such as N50). Finally, we highlight the application of our score to optimize assembly parameters used in genome assemblers, which enables better assemblies to be produced, even without prior knowledge of the genome being assembled. Likelihood
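The scoring idea above — an assembly is good insofar as it makes the observed reads probable — can be sketched with a toy ungapped-alignment error model. The per-base error rate, the uniform alignment prior, and the example sequences below are illustrative assumptions, not the paper's actual model, which also handles gaps, paired ends and read sampling:

```python
import math

def read_log_likelihood(read, assembly, err=0.02):
    """Log-probability of observing `read` from `assembly`, summing a simple
    per-base error model over every ungapped alignment position."""
    n, m = len(assembly), len(read)
    total = 0.0
    for start in range(n - m + 1):
        mismatches = sum(a != b for a, b in zip(assembly[start:start + m], read))
        # P(read | start position): each mismatch costs err/3, each match 1 - err
        total += (err / 3) ** mismatches * (1 - err) ** (m - mismatches)
    # alignment positions are taken as a priori equally likely
    return math.log(total / (n - m + 1)) if total > 0 else float("-inf")

def assembly_score(reads, assembly):
    """Average per-read log-likelihood; the true genome maximizes its expectation."""
    return sum(read_log_likelihood(r, assembly) for r in reads) / len(reads)

truth = "ACGTACGGACGT"
reads = [truth[i:i + 6] for i in range(0, 7, 2)]   # error-free toy reads
good = assembly_score(reads, truth)
bad = assembly_score(reads, "ACGTACGTACGT")        # one base differs from truth
assert good > bad
```

Note that a single read may still prefer a wrong assembly (here the periodic "bad" sequence matches one read in two places); it is the average over the whole read set that ranks the true sequence highest.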
Subpixel resolution in maximum-likelihood image restoration
Conchello, Jose-Angel; McNally, James G.
1997-04-01
A number of algorithms have been developed for three-dimensional (3D) deconvolution of fluorescence microscopical images. These algorithms use a mathematical-physics model for the process of image formation and try to estimate the specimen function, i.e. the distribution of fluorescent dye in the specimen. To keep the algorithms tractable and the computational load practical, the algorithms rely on simplifying assumptions, and the extent to which these assumptions approximate the actual process of image formation and recording has a strong effect on the capabilities of the algorithms. The process of image formation is a continuous-space process, but the algorithms must be implemented using a discrete-space approximation to this process and render a sampled specimen function. A commonly used assumption is that there is one pixel in the specimen for each pixel in the recorded image and that the pixel size in the recorded image is small compared to the size of the diffraction-limited spot or Airy disk, a condition necessary to satisfy the Nyquist sampling criterion. Modern CCD cameras, however, have large wells that integrate into a single pixel an area of the image that is significantly larger than the Airy disk. We derived a maximum-likelihood-based algorithm to accommodate these large CCD pixel sizes. In this algorithm we assume that each pixel in the recorded image integrates several pixels that satisfy the Nyquist criterion. The algorithm then attempts to estimate the specimen function at a resolution better than that allowed by the CCD camera. Preliminary results of this sub-pixel resolution algorithm are encouraging.
Maximum likelihood random galaxy catalogues and luminosity function estimation
Cole, Shaun
2011-09-01
We present a new algorithm to generate a random (unclustered) version of a magnitude-limited observational galaxy redshift catalogue. It takes into account both galaxy evolution and the perturbing effects of large-scale structure. The key to the algorithm is a maximum likelihood (ML) method for jointly estimating both the luminosity function (LF) and the overdensity as a function of redshift. The random catalogue algorithm then works by cloning each galaxy in the original catalogue, with the number of clones determined by the ML solution. Each of these cloned galaxies is then assigned a random redshift uniformly distributed over the accessible survey volume, taking account of the survey magnitude limit(s) and, optionally, both luminosity and number density evolution. The resulting random catalogues, which can be employed in traditional estimates of galaxy clustering, make fuller use of the information available in the original catalogue and hence are superior to simply fitting a functional form to the observed redshift distribution. They are particularly well suited to studies of the dependence of galaxy clustering on galaxy properties as each galaxy in the random catalogue has the same list of attributes as measured for the galaxies in the genuine catalogue. The derivation of the joint overdensity and LF estimator reveals the limit in which the ML estimate reduces to the standard 1/Vmax LF estimate, namely when one makes the prior assumption that there are no fluctuations in the radial overdensity. The new ML estimator can be viewed as a generalization of the 1/Vmax estimate in which Vmax is replaced by a density-corrected Vdc,max.
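The cloning step can be sketched in a few lines. Everything below is a simplifying assumption for illustration — a Euclidean flux limit stands in for the survey magnitude limit, a fixed clone count replaces the ML-derived one, and the field names are invented:

```python
import math
import random

def d_max(lum, flux_limit):
    """Toy Euclidean horizon: distance at which luminosity `lum` drops to the
    survey flux limit (a stand-in for the survey magnitude limit)."""
    return math.sqrt(lum / (4 * math.pi * flux_limit))

def random_catalogue(catalogue, flux_limit, n_clones=10, seed=1):
    """Clone each observed galaxy and scatter the clones uniformly over the
    volume in which it stays detectable (uniform in distance cubed)."""
    rng = random.Random(seed)
    randoms = []
    for gal in catalogue:                  # gal: dict of galaxy attributes
        horizon = d_max(gal["lum"], flux_limit)
        for _ in range(n_clones):
            clone = dict(gal)              # clones keep every measured attribute
            clone["dist"] = horizon * rng.random() ** (1 / 3)
            randoms.append(clone)
    return randoms

cat = [{"lum": 1.0, "colour": "red"}, {"lum": 4.0, "colour": "blue"}]
rand = random_catalogue(cat, flux_limit=0.01)
assert len(rand) == 10 * len(cat)
assert all(r["dist"] <= d_max(r["lum"], 0.01) for r in rand)
```

Because each clone copies its parent's full attribute list, clustering statistics can later be computed for any galaxy subsample with a matched random sample, which is the property the abstract emphasizes.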
Smoking increases the likelihood of Helicobacter pylori treatment failure.
Itskoviz, David; Boltin, Doron; Leibovitzh, Haim; Tsadok Perets, Tsachi; Comaneshter, Doron; Cohen, Arnon; Niv, Yaron; Levi, Zohar
2017-07-01
Data regarding the impact of smoking on the success of Helicobacter pylori (H. pylori) eradication are conflicting, partially due to the fact that sociodemographic status is associated with both smoking and H. pylori treatment success. We aimed to assess the effect of smoking on H. pylori eradication rates after controlling for sociodemographic confounders. Included were subjects aged 15 years or older with a first-time positive 13C-urea breath test (13C-UBT) between 2007 and 2014, who underwent a second 13C-UBT after receiving clarithromycin-based triple therapy. Data regarding age, gender, socioeconomic status (SES), smoking (current smokers or "never smoked"), and drug use were extracted from the Clalit health maintenance organization database. Out of 120,914 subjects with a positive first-time 13C-UBT, 50,836 (42.0%) underwent a second 13C-UBT test. After excluding former smokers, 48,130 subjects remained who were eligible for analysis. The mean age was 44.3 ± 18.2 years; 69.2% were female, 87.8% were Jewish and 12.2% Arab, and 25.5% were current smokers. The overall eradication failure rate was 33.3%: 34.8% in current smokers and 32.8% in subjects who never smoked. In a multivariate analysis, eradication failure was positively associated with current smoking (odds ratio [OR] 1.15, 95% CI 1.10-1.20). Smoking was found to significantly increase the likelihood of unsuccessful first-line treatment for H. pylori infection. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Cosmological Parameters from CMB Maps without Likelihood Approximation
Racine, B.; Jewell, J. B.; Eriksen, H. K.; Wehus, I. K.
2016-03-01
We propose an efficient Bayesian Markov chain Monte Carlo (MCMC) algorithm for estimating cosmological parameters from cosmic microwave background (CMB) data without the use of likelihood approximations. It builds on a previously developed Gibbs sampling framework that allows for exploration of the joint CMB sky signal and power spectrum posterior, P(s, C_ℓ | d), and addresses a long-standing problem of efficient parameter estimation simultaneously in regimes of high and low signal-to-noise ratio. To achieve this, our new algorithm introduces a joint Markov chain move in which both the signal map and power spectrum are synchronously modified, by rescaling the map according to the proposed power spectrum before evaluating the Metropolis-Hastings accept probability. Such a move was already introduced by Jewell et al., who used it to explore low signal-to-noise posteriors. However, they also found that the same algorithm is inefficient in the high signal-to-noise regime, since a brute-force rescaling operation does not account for phase information. This problem is mitigated in the new algorithm by subtracting the Wiener filter mean field from the proposed map prior to rescaling, leaving high signal-to-noise information invariant in the joint step, and effectively only rescaling the low signal-to-noise component. To explore the full posterior, the new joint move is then interleaved with a standard conditional Gibbs move for the sky map. We apply our new algorithm to simplified simulations for which we can evaluate the exact posterior to study both its accuracy and its performance, and find good agreement with the exact posterior; marginal means agree to ≲0.006σ and standard deviations to better than ~3%. The Markov chain correlation length is of the same order of magnitude as those obtained by other standard samplers in the field.
Acute incidents during anaesthesia
African Journals Online (AJOL)
Incidents can occur during induction, maintenance and emergence from anaesthesia. The following acute critical incidents are discussed in this article: • Anaphylaxis. • Aspiration …
Radiological incidents in radiotherapy
International Nuclear Information System (INIS)
Hobzova, L.; Novotny, J.
2008-01-01
In many countries a system for reporting radiological incidents to the national regulatory body exists, and providers of radiotherapy treatment are obliged to report all major and/or, in some countries, all incidents occurring in the institution. The State Office for Nuclear Safety (SONS) has provided systematic guidance for radiotherapy departments since 1997 by requiring the inclusion of radiation safety problems in the Quality assurance manual, which is the basic document for obtaining a SONS license for handling sources of ionizing radiation. For that purpose SONS also issued the recommendation 'Introduction of QA system for important sources in radiotherapy-radiological incidents', in which radiological incidents are defined and basic guidance is given for their classification (category A, B, C, D), investigation and reporting. At regular intervals SONS, in co-operation with radiotherapy centers, surveys all radiological incidents occurring in institutions and presents the information obtained in synoptic communications (2003 Motolske dny, 2005 Novy Jicin). This presentation is another summary report of radiological incidents that occurred in our radiotherapy institutions during the last 3 years. Emphasis is given not only to the survey and statistics, but also to analysis of the causes of the radiological incidents and to their detection and prevention. Analyses of incidents in radiotherapy have led to a much broader understanding of incident causation. Information about an error should be shared by all radiotherapy centers as early as possible, during or after investigation. Learning from incidents, errors and near misses should be part of improving the QA system in institutions. Generally, it is recommended that all radiotherapy facilities participate in the reporting, analyzing and learning system to facilitate the dissemination of knowledge throughout the whole country and so prevent errors in radiotherapy. (authors)
Monte Carlo Simulation to Estimate Likelihood of Direct Lightning Strikes
Mata, Carlos; Medelius, Pedro
2008-01-01
A software tool has been designed to quantify the lightning exposure of the stack at launch-site pads under different configurations. To predict lightning strikes to generic structures, the model uses leaders whose origins (in the x-y plane) are obtained from a 2D random, normal distribution.
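A minimal sketch of the sampling step described above — leader origins drawn from a 2-D normal distribution, counted against a simple attachment criterion. The attachment rule and all parameter values are hypothetical stand-ins, not the tool's actual model:

```python
import random

def strike_fraction(n_leaders, sigma, half_width, attract_radius, seed=42):
    """Fraction of simulated downward leaders, with (x, y) origins drawn from
    a 2-D normal distribution, that attach to a square structure of the given
    half-width (the attachment rule and all numbers are hypothetical)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_leaders):
        x, y = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        if max(abs(x), abs(y)) <= half_width + attract_radius:
            hits += 1
    return hits / n_leaders

f_small = strike_fraction(20000, sigma=100.0, half_width=5.0, attract_radius=10.0)
f_large = strike_fraction(20000, sigma=100.0, half_width=20.0, attract_radius=10.0)
assert 0.0 < f_small < f_large < 1.0   # a bigger structure is struck more often
```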
Additive nonlinear biomass equations: A likelihood-based approach
David L. R. Affleck; Ulises Dieguez-Aranda
2016-01-01
Since Parresol's (Can. J. For. Res. 31:865-878, 2001) seminal article on the topic, it has become standard to develop nonlinear tree biomass equations to ensure compatibility among total and component predictions and to fit these equations using multistep generalized least-squares methods. In particular, many studies have specified equations for total tree...
Prachayakul, Varayu; Aswakul, Pitulak; Bhunthumkomol, Patommatat; Deesomsak, Morakod
2014-09-26
stones, and can prevent the unnecessary use of ERCP. This study found that use of clinical criteria alone might not provide a good prediction of the presence of CBD stones, even in patients who fulfill the criteria for a high likelihood of choledocholithiasis.
Castruccio, Stefano
2016-01-01
In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of points is a very challenging problem and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
High-order Composite Likelihood Inference for Max-Stable Distributions and Processes
Castruccio, Stefano
2015-09-29
In multivariate or spatial extremes, inference for max-stable processes observed at a large collection of locations is a very challenging problem in computational statistics, and current approaches typically rely on less expensive composite likelihoods constructed from small subsets of data. In this work, we explore the limits of modern state-of-the-art computational facilities to perform full likelihood inference and to efficiently evaluate high-order composite likelihoods. With extensive simulations, we assess the loss of information of composite likelihood estimators with respect to a full likelihood approach for some widely-used multivariate or spatial extreme models, we discuss how to choose composite likelihood truncation to improve the efficiency, and we also provide recommendations for practitioners. This article has supplementary material online.
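The trade-off the two records above study — full likelihood versus cheaper composite likelihoods built from small subsets of the data — can be illustrated on a far simpler model than a max-stable process. The sketch below fits the correlation of an exchangeable Gaussian vector by maximizing a pairwise (second-order composite) log-likelihood; the model, sample size and grid search are illustrative assumptions:

```python
import math
import random

rng = random.Random(3)
rho_true, d, n = 0.5, 4, 1000
# exchangeable Gaussian vector: X_j = sqrt(rho)*Z0 + sqrt(1-rho)*Z_j
data = []
for _ in range(n):
    z0 = rng.gauss(0.0, 1.0)
    data.append([math.sqrt(rho_true) * z0
                 + math.sqrt(1.0 - rho_true) * rng.gauss(0.0, 1.0)
                 for _ in range(d)])

def pair_logpdf(x, y, rho):
    # standard bivariate normal log-density with correlation rho
    q = (x * x + y * y - 2.0 * rho * x * y) / (1.0 - rho * rho)
    return -math.log(2.0 * math.pi) - 0.5 * math.log(1.0 - rho * rho) - 0.5 * q

def composite_ll(rho):
    # second-order composite likelihood: sum over all coordinate pairs
    return sum(pair_logpdf(row[j], row[k], rho)
               for row in data for j in range(d) for k in range(j + 1, d))

grid = [i / 100 for i in range(1, 100)]
rho_hat = max(grid, key=composite_ll)
assert abs(rho_hat - rho_true) < 0.1   # pairwise CL recovers the correlation
```

The pairwise objective only ever evaluates bivariate densities, which is exactly why composite likelihoods stay tractable when the joint density (here trivial, for max-stable processes prohibitively expensive) cannot be evaluated at all locations jointly.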
Maximum Likelihood Estimation and Inference With Examples in R, SAS and ADMB
Millar, Russell B
2011-01-01
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statis
Critical incident stress management.
Lim, J J; Childs, J; Gonsalves, K
2000-10-01
Recent studies have indicated that implementation of the CISM Program has reduced the cost of workers' compensation claims for stress-related conditions and the number of lost work days (Ott, 1997; Western Management Consultants, 1996). Occupational health professionals need to be ready to develop and implement a comprehensive critical incident stress management process in anticipation of a major event. The ability to organize, lead, or administer critical incident stress debriefings for affected employees is a key role for the occupational health professional. Familiarity with these concepts and the ability to identify a critical incident enhance value to the business by mitigating the stress and its impact on the workplace. Critical incident stress management systems have the potential for decreasing stress and restoring employees to normal life function--a win/win situation for both the employees and the organization.
Marine Animal Incident Database
National Oceanic and Atmospheric Administration, Department of Commerce — Large whale stranding, death, ship strike and entanglement incidents are all recorded to monitor the health of each population and track anthropogenic factors that...
Police Incident Blotter (Archive)
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — The Police Blotter Archive contains crime incident data after it has been validated and processed to meet Uniform Crime Reporting (UCR) standards, published on a...
2011 Japanese Nuclear Incident
EPA’s RadNet system monitored the environmental radiation levels in the United States and parts of the Pacific following the Japanese Nuclear Incident. Learn about EPA’s response and view historical laboratory data and news releases.
Information Security Incident Management
Directory of Open Access Journals (Sweden)
D. I. Persanov
2010-03-01
Full Text Available The present report highlights the points of information security incident management in an enterprise. Some aspects of the incident and event classification are given. The author presents his view of the process scheme over the monitoring and processing information security events. Also, the report determines a few critical points of the listed process and gives the practical recommendations over its development and optimization.
Syrjänen, Stina; Naud, Paulo; Sarian, Luis; Derchain, Sophie; Roteli-Martins, Cecilia; Longatto-Filho, Adhernar; Tatti, Silvio; Branca, Margerita; Erzen, Mojca; Hammes, Luciano S; Costa, Silvano; Syrjänen, Kari
2010-02-01
To assess whether the potentially high-risk (HR) human papillomavirus (HPV)-related up-regulation of 14-3-3sigma (stratifin) has implications in the outcome of HPV infections or cervical intraepithelial neoplasia (CIN) lesions, cervical biopsy specimens from 225 women in the Latin American Screening Study were analyzed for 14-3-3sigma expression using immunohistochemical analysis. We assessed its associations with CIN grade and HR HPV at baseline and value in predicting outcomes of HR-HPV infections and the development of incident CIN 1+ and CIN 2+. Expression of 14-3-3sigma increased in parallel with the lesion grade. Up-regulation was also significantly related to HR-HPV detection (P = .004; odds ratio, 2.71; 95% confidence interval, 1.37-5.35) and showed a linear relationship to HR-HPV loads (P = .003). 14-3-3sigma expression was of no value in predicting the outcomes (incident, persistent, clearance) of HR-HPV infections or incident CIN 1+ and CIN 2+. 14-3-3sigma is not inactivated in cervical carcinoma and CIN but is up-regulated on transition from CIN 2 to CIN 3. Its normal functions in controlling G(1)/S and G(2)/M checkpoints are being bypassed by HR HPV.
Krings, Franciska; Facchin, Stephanie
2009-01-01
This study demonstrated relations between men's perceptions of organizational justice and increased sexual harassment proclivities. Respondents reported a higher likelihood of sexually harassing under conditions of low interactional justice, suggesting that sexual harassment likelihood may increase as a response to perceived injustice. Moreover, the…
Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.
2016-01-01
This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
Sampling variability in forensic likelihood-ratio computation: A simulation study
Ali, Tauseef; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Meuwly, Didier
2015-01-01
Recently, in the forensic biometric community, there is growing interest in computing a metric called "likelihood-ratio" when a pair of biometric specimens is compared using a biometric recognition system. Generally, a biometric recognition system outputs a score and therefore a likelihood-ratio
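A score-based likelihood ratio of the kind discussed above can be sketched as the ratio of the comparison score's density under the same-source and different-source hypotheses. The Gaussian score models and all parameter values below are hypothetical, for illustration only:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def score_lr(score, same_mu, same_sigma, diff_mu, diff_sigma):
    """Score-based likelihood ratio: density of the comparison score under the
    same-source model divided by its density under the different-source model
    (both modelled here, purely for illustration, as fitted Gaussians)."""
    return normal_pdf(score, same_mu, same_sigma) / normal_pdf(score, diff_mu, diff_sigma)

# hypothetical background-score models: same-source scores high, different-source low
params = dict(same_mu=0.8, same_sigma=0.1, diff_mu=0.2, diff_sigma=0.1)
lr_strong = score_lr(0.9, **params)
lr_weak = score_lr(0.3, **params)
assert lr_strong > 1.0 > lr_weak   # LR > 1 supports the same-source hypothesis
```

The sampling variability the title refers to enters through the background data used to fit the two score models; refitting them on resampled data would change the reported LR, which is what the simulation study quantifies.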
Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.
Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.
1996-01-01
Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Fast inference in generalized linear models via expected log-likelihoods
Ramirez, Alexandro D.; Paninski, Liam
2015-01-01
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum EL estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally-challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina. PMID:23832289
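For a Poisson GLM with Gaussian covariates the substitution has a closed form, since E[exp(θx)] = exp(θ²/2) when x ~ N(0, 1). The sketch below (a one-dimensional model with simulated data; the sample size and grid search are illustrative choices, not the paper's setup) maximizes both the exact and the expected log-likelihood and recovers the same parameter:

```python
import math
import random

rng = random.Random(0)
theta_true = 0.5
# 1-D Poisson GLM: y ~ Poisson(exp(theta * x)), covariate x ~ N(0, 1)
xs = [rng.gauss(0.0, 1.0) for _ in range(5000)]
ys = []
for x in xs:
    lam = math.exp(theta_true * x)
    u, k, p = rng.random(), 0, math.exp(-lam)   # Poisson draw by CDF inversion
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    ys.append(k)

def exact_ll(theta):
    # per-observation log-likelihood, dropping the theta-free log(y!) term
    return sum(y * theta * x - math.exp(theta * x) for x, y in zip(xs, ys)) / len(xs)

def expected_ll(theta):
    # replace the empirical mean of exp(theta*x) by E[exp(theta*x)] = exp(theta^2/2)
    return (sum(y * theta * x for x, y in zip(xs, ys)) / len(xs)
            - math.exp(theta * theta / 2.0))

grid = [i / 100 for i in range(0, 101)]
theta_exact = max(grid, key=exact_ll)
theta_el = max(grid, key=expected_ll)
assert abs(theta_exact - theta_true) < 0.1
assert abs(theta_el - theta_true) < 0.1
```

The computational gain is visible even in this toy: `expected_ll` needs one `exp` per grid point, whereas `exact_ll` needs one per observation per grid point.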
Item Parameter Estimation via Marginal Maximum Likelihood and an EM Algorithm: A Didactic.
Harwell, Michael R.; And Others
1988-01-01
The Bock and Aitkin Marginal Maximum Likelihood/EM (MML/EM) approach to item parameter estimation is an alternative to the classical joint maximum likelihood procedure of item response theory. This paper provides the essential mathematical details of a MML/EM solution and shows its use in obtaining consistent item parameter estimates. (TJH)
Xi, Li; Shah, Manas; Trout, Bernhardt L
2013-04-04
Diffusion of small molecules in amorphous polymers is known to follow a form of so-called hopping motion: penetrant molecules are trapped in microscopic cavities for extended time periods; diffusion is made possible by rare but fast jumps between neighboring cavities. Existing understanding of the hopping mechanism is based on the inspection of molecular images during individual molecular-dynamics trajectories. We focus on the diffusion of water molecules in a hydrophilic polymer below its glass transition temperature. The transition path ensemble of one hopping event is sampled with aimless shooting, a type of transition path sampling technique. In these trajectories, configurations of both the penetrant and the polymer change during the transition. Statistical analysis of the ensemble using likelihood maximization leads to a reaction coordinate of the transition, whose key components include the penetrant configuration and distances between penetrant-host atom pairs that have strong electrostatic interactions. Polymer motions do not contribute directly to the reaction coordinate. This result points toward a transition mechanism dominated by the penetrant movement. Molecular insights from this study can benefit the development of computational tools that better predict material transport properties, facilitating the design of new materials, including polymers with engineered drying properties.
Yang, Z
1994-09-01
Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum, or near-optimum fit by the model to the data, and also an acceptable approximation to the continuous distribution. The second method, called "fixed-rates model", classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated. Analyses of the data sets suggest that this method can produce reasonable results, but it seems to share some properties of a least-squares pairwise comparison; for example, interior branch lengths in nonbest trees are often found to be zero. The computational requirements of the two methods are comparable to that of Felsenstein's (1981, J Mol Evol 17:368-376) model, which assumes a single rate for all the sites.
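Yang computes the category rates analytically via incomplete gamma functions; the idea itself — k equal-probability categories, each represented by its mean rate, with the overall mean rate fixed at 1 — can be approximated with nothing but the standard library by Monte Carlo. The sample size, seed and shape parameter below are illustrative:

```python
import random

def discrete_gamma_rates(alpha, k=4, n=200000, seed=7):
    """Monte Carlo approximation of the k equal-probability rate categories of
    a gamma distribution scaled to mean rate 1: sort the draws, split them
    into k equal-size bins, and represent each category by its bin mean."""
    rng = random.Random(seed)
    # random.gammavariate(shape, scale); scale = 1/alpha gives mean alpha*(1/alpha) = 1
    draws = sorted(rng.gammavariate(alpha, 1.0 / alpha) for _ in range(n))
    size = n // k
    return [sum(draws[i * size:(i + 1) * size]) / size for i in range(k)]

rates = discrete_gamma_rates(alpha=0.5, k=4)
assert rates == sorted(rates)            # categories ordered slow to fast
assert abs(sum(rates) / 4 - 1.0) < 0.02  # category means average to ~1
```

Under each tree topology, the site likelihood is then averaged over these k rates with weight 1/k each, which is what makes four categories "sufficient" in the abstract's sense.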
Kim, S.; Riazi, H.; Shin, C.; Seo, D.
2013-12-01
Due to the large dimensionality of the state vector and the sparsity of observations, the initial conditions (IC) of water quality models are subject to large uncertainties. To reduce the IC uncertainties in operational water quality forecasting, an ensemble data assimilation (DA) procedure for the Hydrologic Simulation Program - Fortran (HSPF) model has been developed and evaluated for the Kumho River Subcatchment of the Nakdong River Basin in Korea. The procedure, referred to herein as MLEF-HSPF, uses the maximum likelihood ensemble filter (MLEF), which combines the strengths of variational assimilation (VAR) and the ensemble Kalman filter (EnKF). The control variables involved in the DA procedure include the bias correction factors for mean areal precipitation and mean areal potential evaporation, the hydrologic state variables, and the water quality state variables such as water temperature, dissolved oxygen (DO), biochemical oxygen demand (BOD), ammonium (NH4), nitrate (NO3), phosphate (PO4) and chlorophyll a (CHL-a). Due to the very large dimensionality of the inverse problem, accurately specifying the parameters for the DA procedure is a challenge. Systematic sensitivity analysis is carried out to identify the optimal parameter settings. To evaluate the robustness of MLEF-HSPF, we use multiple subcatchments of the Nakdong River Basin. In the evaluation, we focus on the performance of MLEF-HSPF in predicting extreme water quality events.
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C [Sante Fe, NM
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
Likelihood-mapping: a simple method to visualize phylogenetic content of a sequence alignment.
Strimmer, K; von Haeseler, A
1997-06-24
We introduce a graphical method, likelihood-mapping, to visualize the phylogenetic content of a set of aligned sequences. The method is based on an analysis of the maximum likelihoods for the three fully resolved tree topologies that can be computed for four sequences. The three likelihoods are represented as one point inside an equilateral triangle. The triangle is partitioned in different regions. One region represents star-like evolution, three regions represent a well-resolved phylogeny, and three regions reflect the situation where it is difficult to distinguish between two of the three trees. The location of the likelihoods in the triangle defines the mode of sequence evolution. If n sequences are analyzed, then the likelihoods for each subset of four sequences are mapped onto the triangle. The resulting distribution of points shows whether the data are suitable for a phylogenetic reconstruction or not.
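The geometric construction can be sketched directly: normalize the three topology likelihoods into weights and use them as barycentric coordinates of a point in an equilateral triangle (the corner placement and the example log-likelihood values are illustrative choices):

```python
import math

def likelihood_map_point(log_likelihoods):
    """Normalize the log-likelihoods of the three resolved quartet topologies
    into weights and map them, as barycentric coordinates, to a point inside
    a unit equilateral triangle with corners (0,0), (1,0), (0.5, sqrt(3)/2)."""
    m = max(log_likelihoods)
    w = [math.exp(l - m) for l in log_likelihoods]   # subtract max: no underflow
    s = sum(w)
    p1, p2, p3 = (v / s for v in w)
    x = p2 + 0.5 * p3
    y = p3 * math.sqrt(3) / 2
    return (p1, p2, p3), (x, y)

# one topology clearly best -> the point lands near that topology's corner
(p, (x, y)) = likelihood_map_point([-100.0, -110.0, -112.0])
assert p[0] > 0.99 and x < 0.01 and y < 0.01

# all three topologies equal -> centre of the triangle (star-like region)
(p, (x, y)) = likelihood_map_point([-50.0, -50.0, -50.0])
assert abs(x - 0.5) < 1e-9 and abs(y - math.sqrt(3) / 6) < 1e-9
```

Plotting one such point per sampled quartet gives the triangle diagram the abstract describes: mass at the corners indicates resolvable phylogenetic signal, mass at the centre star-like evolution.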
Statistical modelling of survival data with random effects: h-likelihood approach
Ha, Il Do; Lee, Youngjo
2017-01-01
This book provides a groundbreaking introduction to likelihood inference for correlated survival data via the hierarchical (or h-) likelihood in order to obtain the (marginal) likelihood and to address the computational difficulties in inferences and extensions. The approach presented in the book overcomes shortcomings in traditional likelihood-based methods for clustered survival data, such as intractable integration. The text includes technical materials such as derivations and proofs in each chapter, as well as recently developed software programs in R (“frailtyHL”), while the real-world data examples together with the R package “frailtyHL” on CRAN provide readers with useful hands-on tools. Reviewing new developments since the introduction of the h-likelihood to survival analysis (methods for interval estimation of the individual frailty and for variable selection of the fixed effects in the general class of frailty models) and guiding future directions, the book is of interest to research...
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...
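The single-step maximum-likelihood criterion referred to above can be illustrated for a scalar state-space model: the Kalman filter yields one-step prediction errors and innovation variances, and their Gaussian log-likelihood is the quantity maximised over the model parameters. A sketch under simplifying scalar assumptions (the function name and toy model are our own):

```python
import numpy as np

def kalman_loglik(y, a, c, q, r, x0=0.0, p0=1.0):
    """Gaussian log-likelihood of y under the scalar state-space model
    x_{k+1} = a*x_k + w_k (var q), y_k = c*x_k + v_k (var r), computed
    via the Kalman filter's prediction-error decomposition."""
    x, p, ll = x0, p0, 0.0
    for yk in y:
        e = yk - c * x                    # one-step prediction error
        s = c * c * p + r                 # innovation variance
        ll += -0.5 * (np.log(2.0 * np.pi * s) + e * e / s)
        k = p * c / s                     # Kalman gain
        x = a * (x + k * e)               # measurement update, then predict
        p = a * a * (p - k * c * p) + q   # predicted error variance
    return ll

rng = np.random.default_rng(0)
y = rng.normal(size=200)  # white noise: matches a=0, q=0, c=1, r=1
```

Evaluating `kalman_loglik` on a grid (or with a numerical optimiser) over the model parameters gives the ML prediction-error estimate; the true noise variance should score higher than misspecified ones.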
General practitioner reported incidence of Lyme carditis in the Netherlands.
Hofhuis, A; Arend, S M; Davids, C J; Tukkie, R; van Pelt, W
2015-11-01
Between 1994 and 2009, incidence rates of general practitioner (GP) consultations for tick bites and erythema migrans, the most common early manifestation of Lyme borreliosis, have increased substantially in the Netherlands. The current article aims to estimate and validate the incidence of GP-reported Lyme carditis in the Netherlands. We sent a questionnaire to all GPs in the Netherlands on clinical diagnoses of Lyme borreliosis in 2009 and 2010. To validate and adjust the obtained incidence rate, medical records of cases of Lyme carditis reported by GPs in this incidence survey were reviewed and categorised according to likelihood of the diagnosis of Lyme carditis. Lyme carditis occurred in 0.2 % of all patients with GP-reported Lyme borreliosis. The adjusted annual incidence was six GP-reported cases of Lyme carditis per 10 million inhabitants, i.e. approximately ten cases per year in 2009 and 2010. We report the first incidence estimate for Lyme carditis in the Netherlands, validated by a systematic review of the medical records. Although Lyme carditis is an uncommon manifestation of Lyme borreliosis, physicians need to be aware of this diagnosis, in particular in countries where the incidence of Lyme borreliosis has increased during the past decades.
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient R package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities.
Radiation incidents in dentistry
International Nuclear Information System (INIS)
Lovelock, D.J.
1996-01-01
Most dental practitioners act as their own radiographer and radiologist, unlike their medical colleagues. Virtually all dental surgeons have a dental X-ray machine for intraoral radiography available to them and 40% of dental practices have equipment for dental panoramic tomography. Because of the low energy of X-ray equipment used in dentistry, radiation incidents tend to be less serious than those associated with other aspects of patient care. Details of 47 known incidents are given. The advent of the 1985 and 1988 Ionising Radiation Regulations has made dental surgeons more aware of the hazards of radiation. These regulations, and general health and safety legislation, have led to a few dental surgeons facing legal action. Because of the publicity associated with these court cases, it is expected that there will be a decrease in radiation incidents arising from the practice of dentistry. (author)
POPE: post optimization posterior evaluation of likelihood free models.
Meeds, Edward; Chiang, Michael; Lee, Mary; Cinquin, Olivier; Lowengrub, John; Welling, Max
2015-08-20
In many domains, scientists build complex simulators of natural phenomena that encode their hypotheses about the underlying processes. These simulators can be deterministic or stochastic, fast or slow, constrained or unconstrained, and so on. Optimizing the simulators with respect to a set of parameter values is common practice, resulting in a single parameter setting that minimizes an objective subject to constraints. We propose algorithms for post optimization posterior evaluation (POPE) of simulators. The algorithms compute and visualize all simulations that can generate results of the same or better quality than the optimum, subject to constraints. These optimization posteriors are desirable for a number of reasons among which are easy interpretability, automatic parameter sensitivity and correlation analysis, and posterior predictive analysis. Our algorithms are simple extensions to an existing simulation-based inference framework called approximate Bayesian computation. POPE is applied to two biological simulators: a fast and stochastic simulator of stem-cell cycling and a slow and deterministic simulator of tumor growth patterns. POPE allows the scientist to explore and understand the role that constraints, both on the input and the output, have on the optimization posterior. As a Bayesian inference procedure, POPE provides a rigorous framework for the analysis of the uncertainty of an optimal simulation parameter setting.
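POPE is described as an extension of approximate Bayesian computation; the plain ABC rejection scheme it builds on can be sketched as follows. The simulator, prior and tolerance below are toy assumptions of ours, not the paper's biological simulators.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, eps, n_draws=20000):
    """Plain ABC rejection: keep parameter draws whose simulated
    summary statistic lies within eps of the observed one."""
    rng = np.random.default_rng(6)
    kept = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            kept.append(theta)
    return np.array(kept)

# Toy simulator: the mean of 20 N(theta, 1) draws; observed summary 1.0
post = abc_rejection(
    observed=1.0,
    simulate=lambda th, rng: rng.normal(th, 1.0, size=20).mean(),
    prior_draw=lambda rng: rng.uniform(-5.0, 5.0),
    eps=0.1,
)
```

The accepted draws approximate the posterior; POPE's twist is to run an analogous accept/reject logic around an optimum, keeping simulations of equal or better objective value subject to constraints.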
Segmented polynomials for incidence rate estimation from prevalence data.
Mahiané, Severin Guy; Laeyendecker, Oliver
2017-01-30
The study considers the problem of estimating the incidence of a non-remissible infection (or disease) with possibly differential mortality, using data from one or several cross-sectional prevalence surveys. Fitting segmented polynomial models is proposed to estimate incidence as a function of age, using the maximum likelihood method. The approach allows an automatic search for the optimal position of knots, and model selection is performed using the Akaike information criterion. The method is applied to simulated data and to estimate HIV incidence among men in Zimbabwe using data from both the NIMH Project Accept (HPTN 043) and the Zimbabwe Demographic and Health Surveys (2005-2006). Copyright © 2016 John Wiley & Sons, Ltd.
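The model-selection step, maximum-likelihood fits compared by AIC, can be illustrated with a deliberately simplified variant that selects a polynomial degree rather than searching knot positions. The function name and simulated data are our own:

```python
import numpy as np

def fit_poly_aic(x, y, max_degree=6):
    """Pick a polynomial degree for y ~ poly(x) by AIC under Gaussian
    errors: a simplified stand-in for the paper's ML fit with an
    automatic search over knot positions."""
    n = len(x)
    best = None
    for d in range(1, max_degree + 1):
        coef = np.polyfit(x, y, d)
        resid = y - np.polyval(coef, x)
        sigma2 = np.mean(resid ** 2)
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        aic = 2 * (d + 2) - 2 * loglik    # d+1 coefficients + variance
        if best is None or aic < best[0]:
            best = (aic, d, coef)
    return best[1], best[2]

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = 1 + 2 * x - 3 * x ** 2 + rng.normal(scale=0.05, size=x.size)
degree, coef = fit_poly_aic(x, y)
```

AIC trades the likelihood gain of extra terms against a complexity penalty, which is exactly how the paper chooses among candidate segmented models.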
The fine-tuning cost of the likelihood in SUSY models
Ghilencea, D M
2013-01-01
In SUSY models, the fine tuning of the electroweak (EW) scale with respect to their parameters gamma_i={m_0, m_{1/2}, mu_0, A_0, B_0,...} and the maximal likelihood L to fit the experimental data are usually regarded as two different problems. We show that, if one regards the EW minimum conditions as constraints that fix the EW scale, this commonly held view is not correct and that the likelihood contains all the information about fine-tuning. In this case we show that the corrected likelihood is equal to the ratio L/Delta of the usual likelihood L and the traditional fine tuning measure Delta of the EW scale. A similar result is obtained for the integrated likelihood over the set {gamma_i}, that can be written as a surface integral of the ratio L/Delta, with the surface in gamma_i space determined by the EW minimum constraints. As a result, a large likelihood actually demands a large ratio L/Delta or equivalently, a small chi^2_{new}=chi^2_{old}+2*ln(Delta). This shows the fine-tuning cost to the likelihood ...
Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen
2018-03-01
Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ˜104 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
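The "one number per parameter" compression can be illustrated with a score-style linear compression of Gaussian data. This toy sketch is not the DELFI pipeline itself; the function, model and all names are our own assumptions.

```python
import numpy as np

def score_compress(d, mu, dmu_dtheta, cov_inv):
    """Compress a data vector to one number per parameter: the score
    of a Gaussian likelihood evaluated at a fiducial point, in the
    spirit of the massive data compression used for likelihood-free
    inference."""
    return dmu_dtheta @ cov_inv @ (d - mu)

# Toy model: d ~ N(theta * template, I); one parameter -> one number
rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 4 * np.pi, 500))
theta_true = 1.3
d = theta_true * template + rng.normal(size=template.size)
t = score_compress(d, mu=0.0 * template, dmu_dtheta=template,
                   cov_inv=np.eye(template.size))
# For this linear model, the score statistic recovers the ML estimate
theta_hat = t / (template @ template)
```

The 500-dimensional data vector is reduced to the single summary `t` with no loss of information about the parameter in this linear-Gaussian toy case, which is the property that makes the subsequent density-estimation step tractable.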
Siddiqui, Md Zakaria; Donato, Ronald
2017-01-01
To investigate the extent to which individual-level as well as macro-level contextual factors influence the likelihood of underweight across adult sub-populations in India. Population-based cross-sectional survey included in India's National Family Health Survey conducted in 2005-06. We disaggregated into eight sub-populations. Multistage nationally representative household survey covering 99% of India's population. The survey covered 124 385 females aged 15-49 years and 74 369 males aged 15-54 years. A social gradient in underweight exists in India. Even after allowing for wealth status, differences in the predicted probability of underweight persisted based upon rurality, age/maturity and gender. We found individual-level education lowered the likelihood of underweight for males, but no statistical association for females. Paradoxically, rural young (15-24 years) females from more educated villages had a higher likelihood of underweight relative to those in less educated villages; but for rural mature (>24 years) females the opposite was the case. Christians had a significantly lower likelihood of underweight relative to other socio-religious groups (OR=0.53-0.80). Higher state-level inequality increased the likelihood of underweight across most population groups, while neighbourhood inequality exhibited a similar relationship for the rural young population subgroups only. Individual states/neighbourhoods accounted for 5-9% of the variation in the prediction of underweight. We found that rural young females represent a particularly highly vulnerable sub-population. Economic growth alone is unlikely to reduce the burden of malnutrition in India; accordingly, policy makers need to address the broader social determinants that contribute to higher underweight prevalence in specific demographic subgroups.
Incidents in nuclear installations
International Nuclear Information System (INIS)
Franzen, L.F.; Wienhold, W.
1976-09-01
With reference to the Ministry's incident list for the period 1971-74, Prof. Bechert raised numerous questions and assertions in a letter to the Government. The letter is quoted in full. Inadequate conclusions drawn by Prof. Bechert in connection with quotations from daily newspapers and other documents are corrected. (HP)
Fire Incident Reporting Manual
1984-02-01
the result of an incident that requires (or should require) treatment by a practitioner of medicine, a registered emergency medical technician, or a...
(Remainder of the record is reporting-form residue; legible categories include unannounced aircraft emergencies prior to take-off or after landing, fuel operations, and arresting gear/barrier engagements.)
Yokoyama, Yukihiro; Mizuno, Takashi; Sugawara, Gen; Asahara, Takashi; Nomoto, Koji; Igami, Tsuyoshi; Ebata, Tomoki; Nagino, Masato
2017-10-01
To investigate the association between preoperative fecal organic acid concentrations and the incidence of postoperative infectious complications in patients undergoing major hepatectomy with extrahepatic bile duct resection for biliary malignancies. The fecal samples of 44 patients were collected before they underwent hepatectomy with bile duct resection for biliary malignancies. The concentrations of fecal organic acids, including acetic acid, butyric acid, and lactic acid, and of representative fecal bacteria were measured. The perioperative clinical characteristics and the concentrations of fecal organic acids were compared between patients with and without postoperative infectious complications. Among the 44 patients, 13 (30%) developed postoperative infectious complications. Patient age and intraoperative bleeding were significantly greater in patients with postoperative infectious complications than in those without. The concentrations of fecal acetic acid and butyric acid were significantly lower, whereas the concentration of fecal lactic acid tended to be greater, in the patients with postoperative infectious complications. The calculated gap (the concentration of fecal acetic acid plus butyric acid minus lactic acid) was smaller in the patients with postoperative infectious complications (median 43.5 vs 76.1 μmol/g of feces, P = .011). Multivariate analysis revealed that a low acetic acid plus butyric acid minus lactic acid gap, that is, a fecal organic acid profile with low acetic acid, low butyric acid, and high lactic acid, had a clinically important impact on the incidence of postoperative infectious complications in patients undergoing major hepatectomy with extrahepatic bile duct resection. Copyright © 2017. Published by Elsevier Inc.
Meteorological effects on the incidence of pneumococcal bacteremia in Denmark
DEFF Research Database (Denmark)
Tvedebrink, Torben; Lundbye-Christensen, Søren; Thomsen, Reimar W.
We performed an 8-year longitudinal population-based ecological study in a Danish county to examine whether preceding changes in meteorological parameters, including temperature, relative humidity, precipitation, and wind velocity, predicted variations in pneumococcal bacteremia (PB) incidence.
The incidence of induced abortion in Malawi.
Levandowski, Brooke A; Mhango, Chisale; Kuchingale, Edgar; Lunguzi, Juliana; Katengeza, Hans; Gebreselassie, Hailemichael; Singh, Susheela
2013-06-01
Abortion is legally restricted in Malawi, and no data are available on the incidence of the procedure. The Abortion Incidence Complications Methodology was used to estimate levels of induced abortion in Malawi in 2009. Data on provision of postabortion care were collected from 166 public, nongovernmental and private health facilities, and estimates of the likelihood that women who have abortions experience complications and seek care were obtained from 56 key informants. Data from these surveys and from the 2010 Malawi Demographic and Health Survey were used to calculate abortion rates and ratios, and rates of pregnancy and unintended pregnancy. Approximately 18,700 women in Malawi were treated in health facilities for complications of induced abortion in 2009. An estimated 67,300 induced abortions were performed, equivalent to a rate of 23 abortions per 1,000 women aged 15-44 and an abortion ratio of 12 per 100 live births. The abortion rate was higher in the North (35 per 1,000) than in the Central region or the South (20-23 per 1,000). The unintended pregnancy rate in 2010 was 139 per 1,000 women aged 15-44, and an estimated 52% of all pregnancies were unintended. Unsafe abortion is common in Malawi. Interventions are needed to help women and couples avoid unwanted pregnancy, reduce the need for unsafe abortion and decrease maternal mortality.
The Incidence of Abortion in Nigeria.
Bankole, Akinrinola; Adewole, Isaac F; Hussain, Rubina; Awolude, Olutosin; Singh, Susheela; Akinyemi, Joshua O
2015-12-01
Because of Nigeria's low contraceptive prevalence, a substantial number of women have unintended pregnancies, many of which are resolved through clandestine abortion, despite the country's restrictive abortion law. Up-to-date estimates of abortion incidence are needed. A widely used indirect methodology was used to estimate the incidence of abortion and unintended pregnancy in Nigeria in 2012. Data on provision of abortion and postabortion care were collected from a nationally representative sample of 772 health facilities, and estimates of the likelihood that women who have unsafe abortions experience complications and obtain treatment were collected from 194 health care professionals with a broad understanding of the abortion context in Nigeria. An estimated 1.25 million induced abortions occurred in Nigeria in 2012, equivalent to a rate of 33 abortions per 1,000 women aged 15-49. The estimated unintended pregnancy rate was 59 per 1,000 women aged 15-49. Fifty-six percent of unintended pregnancies were resolved by abortion. About 212,000 women were treated for complications of unsafe abortion, representing a treatment rate of 5.6 per 1,000 women of reproductive age, and an additional 285,000 experienced serious health consequences but did not receive the treatment they needed. Levels of unintended pregnancy and unsafe abortion continue to be high in Nigeria. Improvements in access to contraceptive services and in the provision of safe abortion and postabortion care services (as permitted by law) may help reduce maternal morbidity and mortality.
Marginal Maximum Likelihood Estimation of Item Parameters: Application of an EM Algorithm.
Bock, R. Darrell; Aitkin, Murray
1981-01-01
The practicality of using the EM algorithm for maximum likelihood estimation of item parameters in the marginal distribution is presented. The EM procedure is shown to apply to general item-response models. (Author/JKS)
A simple route to maximum-likelihood estimates of two-locus
Indian Academy of Sciences (India)
Keywords: recombination fractions; maximum likelihood estimates; inequality restrictions; constrained numerical optimization.
Authors: Iain L. Macdonald and Philasande Nkalashe, Actuarial Science, University of Cape Town, 7701 Rondebosch, South Africa.
Manuscript received: 24 September 2014; manuscript revised ...
Debris Likelihood, based on GhostNet, NASA Aqua MODIS, and GOES Imager, EXPERIMENTAL
National Oceanic and Atmospheric Administration, Department of Commerce — Debris Likelihood Index (Estimated) is calculated from GhostNet, NASA Aqua MODIS Chl a and NOAA GOES Imager SST data. THIS IS AN EXPERIMENTAL PRODUCT: intended...
National Aeronautics and Space Administration — NIMBUS7_NFOV_MLCE data are Nimbus 7 Narrow Field of View (NFOV) Maximum Likelihood Cloud Estimation (MLCE) Data in Native Format. The NIMBUS7_NFOV_MLCE data set uses...
A composite likelihood method for bivariate meta-analysis in diagnostic systematic reviews.
Chen, Yong; Liu, Yulun; Ning, Jing; Nie, Lei; Zhu, Hongjian; Chu, Haitao
2017-04-01
Diagnostic systematic review is a vital step in the evaluation of diagnostic technologies. In many applications, it involves pooling pairs of sensitivity and specificity of a dichotomized diagnostic test from multiple studies. We propose a composite likelihood (CL) method for bivariate meta-analysis in diagnostic systematic reviews. This method provides an alternative way to make inference on diagnostic measures such as sensitivity, specificity, likelihood ratios, and diagnostic odds ratio. Its main advantages over the standard likelihood method are the avoidance of the nonconvergence problem, which is nontrivial when the number of studies is relatively small, the computational simplicity, and some robustness to model misspecifications. Simulation studies show that the CL method maintains high relative efficiency compared to that of the standard likelihood method. We illustrate our method in a diagnostic review of the performance of contemporary diagnostic imaging technologies for detecting metastases in patients with melanoma.
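The flavour of a composite likelihood can be shown with the simplest member of the family: the independence composite likelihood, which multiplies marginal likelihoods and ignores the within-pair correlation. This is a toy sketch of the general idea under normal marginals, not the paper's bivariate random-effects model for sensitivity/specificity pairs.

```python
import numpy as np

def independence_cl_estimates(x):
    """Maximise the independence composite likelihood for bivariate
    data with normal marginals: the product of the two marginal
    likelihoods, ignoring correlation. The maximisers are simply the
    per-margin sample means and variances."""
    return x.mean(axis=0), x.var(axis=0)

rng = np.random.default_rng(5)
cov = [[1.0, 0.8], [0.8, 1.0]]          # strong within-pair correlation
x = rng.multivariate_normal([2.0, -1.0], cov, size=4000)
mu_hat, var_hat = independence_cl_estimates(x)
```

Even with correlation 0.8 left unmodelled, the marginal parameters are recovered consistently, which illustrates the robustness-for-simplicity trade-off that motivates composite likelihood (standard errors, however, must be corrected for the ignored dependence).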
Improved Likelihood Ratio Tests for Cointegration Rank in the VAR Model
DEFF Research Database (Denmark)
Boswijk, H. Peter; Jansson, Michael; Nielsen, Morten Ørregaard
We suggest improved tests for cointegration rank in the vector autoregressive (VAR) model and develop asymptotic distribution theory and local power results. The tests are (quasi-)likelihood ratio tests based on a Gaussian likelihood, but of course the asymptotic results apply more generally. The power gains relative to existing tests are due to two factors. First, instead of basing our tests on the conditional (with respect to the initial observations) likelihood, we follow the recent unit root literature and base our tests on the full likelihood as in, e.g., Elliott, Rothenberg, and Stock (1996). Secondly, our tests incorporate a “sign” restriction which generalizes the one-sided unit root test. We show that the asymptotic local power of the proposed tests dominates that of existing cointegration rank tests.
Directory of Open Access Journals (Sweden)
Eugenio Alladio
2017-06-01
Full Text Available The concentration values of direct and indirect biomarkers of ethanol consumption were detected in blood (indirect) or hair (direct) samples from a pool of 125 individuals classified as either chronic (i.e. positive) or non-chronic (i.e. negative) alcohol drinkers. These experimental values formed the dataset under examination (Table 1). Indirect biomarkers included: aspartate transferase (AST), alanine transferase (ALT), gamma-glutamyl transferase (GGT), mean corpuscular volume of the erythrocytes (MCV), and carbohydrate-deficient transferrin (CDT). The following direct biomarkers were also detected in hair: ethyl myristate (E14:0), ethyl palmitate (E16:0), ethyl stearate (E18:0), ethyl oleate (E18:1), the sum of their four concentrations (FAEEs, i.e. fatty acid ethyl esters), and ethyl glucuronide (EtG; pg/mg). Body mass index (BMI) was also collected as a potential influencing factor. Likelihood ratio (LR) approaches have been used to provide predictive models for the diagnosis of alcohol abuse, based on different combinations of direct and indirect alcohol biomarkers, as described in “Evaluation of direct and indirect ethanol biomarkers using a likelihood ratio approach to identify chronic alcohol abusers for forensic purposes” (E. Alladio, A. Martyna, A. Salomone, V. Pirro, M. Vincenti, G. Zadora, 2017) [1].
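A likelihood ratio for a single biomarker can be sketched by fitting a density to each group and evaluating their ratio at a new measurement. The sketch below uses univariate normal fits and entirely hypothetical EtG-like values; the published models combine several biomarkers and more careful density estimation.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood_ratio(x, pos, neg):
    """LR for one biomarker value x: density under the chronic-drinker
    model over density under the non-chronic model, each fitted as a
    univariate normal. Minimal sketch of the LR idea only."""
    return (normal_pdf(x, pos.mean(), pos.std(ddof=1)) /
            normal_pdf(x, neg.mean(), neg.std(ddof=1)))

# Hypothetical EtG-like values (pg/mg) for the two groups
pos = np.array([55.0, 80.0, 120.0, 95.0, 60.0, 150.0])
neg = np.array([5.0, 12.0, 8.0, 20.0, 3.0, 15.0])
lr_high = likelihood_ratio(100.0, pos, neg)   # value typical of chronic use
lr_low = likelihood_ratio(10.0, pos, neg)     # value typical of abstinence
```

LR > 1 supports the chronic-drinker proposition and LR < 1 the alternative, with the magnitude quantifying the strength of the evidence in the usual forensic interpretation.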
DEFF Research Database (Denmark)
Christensen, Ole Fredslund; Frydenberg, Morten; Jensen, Jens Ledet
2005-01-01
The large deviation modified likelihood ratio statistic is studied for testing a variance component equal to a specified value. Formulas are presented in the general balanced case, whereas in the unbalanced case only the one-way random effects model is studied. Simulation studies are presented, showing that the normal approximation to the large deviation modified likelihood ratio statistic gives confidence intervals for variance components with coverage probabilities very close to the nominal confidence coefficient.
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-10-01
Full Text Available This paper is concerned with modifications of the maximum likelihood, moments and percentile estimators of the two-parameter power function distribution. The sampling behavior of the estimators is assessed by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
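The traditional maximum-likelihood estimators that the modified versions are benchmarked against have a closed form for the two-parameter power function distribution, f(x) = a x^(a-1) / b^a on 0 < x < b. A sketch with simulated data (function name and parameter values are our own):

```python
import numpy as np

def power_mle(x):
    """Closed-form MLEs for the two-parameter power function
    distribution f(x) = a * x**(a-1) / b**a, 0 < x < b:
    b_hat is the sample maximum, a_hat follows from the log-spacings."""
    x = np.asarray(x, dtype=float)
    b_hat = x.max()
    a_hat = len(x) / np.sum(np.log(b_hat / x))
    return a_hat, b_hat

rng = np.random.default_rng(3)
a, b = 2.5, 4.0
# inverse-CDF sampling: F(x) = (x/b)**a  =>  x = b * U**(1/a)
x = b * rng.uniform(size=5000) ** (1.0 / a)
a_hat, b_hat = power_mle(x)
```

A Monte Carlo study of the kind described in the paper repeats this simulation many times and tabulates bias and mean square error of competing estimators.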
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a maximum likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection errors.
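An idealised version of ML tag-set cardinality estimation, one error-free reader session with framed-slotted ALOHA and inference from the number of empty slots, can be sketched as follows. The grid search and all names are our own; the paper's method additionally models detection errors across multiple sessions.

```python
import numpy as np

def ml_tag_estimate(n_empty, L, n_max=2000):
    """ML estimate of the number of tags from the count of empty slots
    in one framed-slotted-ALOHA frame of L slots. Idealised sketch:
    no detection errors, slots treated as independent binomials."""
    n_grid = np.arange(1, n_max + 1)
    p0 = (1.0 - 1.0 / L) ** n_grid        # P(a given slot stays empty)
    # binomial log-likelihood of observing n_empty empty slots
    ll = n_empty * np.log(p0) + (L - n_empty) * np.log1p(-p0)
    return int(n_grid[np.argmax(ll)])

rng = np.random.default_rng(4)
L, n_true = 256, 300
slots = rng.integers(0, L, size=n_true)   # each tag picks a slot uniformly
n_empty = L - np.unique(slots).size
n_hat = ml_tag_estimate(n_empty, L)
```

With detection errors, the per-slot outcome probabilities would be mixed with the miss probability before forming the same kind of likelihood, which is the extension the paper analyses.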
Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood (1/3)
CERN. Geneva
2015-01-01
These lectures cover those principles and practices of statistics that are most relevant for work at the LHC. The first lecture discusses the basic ideas of descriptive statistics, probability and likelihood. The second lecture covers the key ideas in the frequentist approach, including confidence limits, profile likelihoods, p-values, and hypothesis testing. The third lecture covers inference in the Bayesian approach. Throughout, real-world examples will be used to illustrate the practical application of the ideas. No previous knowledge is assumed.
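The basic likelihood idea covered in the first lecture can be demonstrated with a coin-toss example: write down the log-likelihood of the data as a function of the parameter and locate its maximum. A minimal sketch (the grid search is just for illustration; the MLE is k/n analytically):

```python
import numpy as np

# Log-likelihood of a coin's head-probability p after observing
# k heads in n tosses (constant binomial coefficient dropped)
def binom_loglik(p, k, n):
    return k * np.log(p) + (n - k) * np.log(1 - p)

k, n = 7, 10
p_grid = np.linspace(0.01, 0.99, 981)
p_mle = p_grid[np.argmax(binom_loglik(p_grid, k, n))]   # ≈ k/n = 0.7
```

The same log-likelihood curve is also the starting point for the frequentist constructions of the second lecture (profile likelihoods, likelihood-ratio intervals) and, combined with a prior, for the Bayesian treatment of the third.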