WorldWideScience

Sample records for previous predictions based

  1. Five criteria for using a surrogate endpoint to predict treatment effect based on data from multiple previous trials.

    Science.gov (United States)

    Baker, Stuart G

    2018-02-20

    A surrogate endpoint in a randomized clinical trial is an endpoint that occurs after randomization and before the true, clinically meaningful, endpoint that yields conclusions about the effect of treatment on the true endpoint. A surrogate endpoint can accelerate the evaluation of new treatments but at the risk of misleading conclusions. Therefore, criteria are needed for deciding whether to use a surrogate endpoint in a new trial. For the meta-analytic setting of multiple previous trials, each with the same pair of surrogate and true endpoints, this article formulates 5 criteria for using a surrogate endpoint in a new trial to predict the effect of treatment on the true endpoint in the new trial. The first 2 criteria, which are easily computed from a zero-intercept linear random effects model, involve statistical considerations: an acceptable sample size multiplier and an acceptable prediction separation score. The remaining 3 criteria involve clinical and biological considerations: similarity of biological mechanisms of treatments between the new trial and previous trials, similarity of secondary treatments following the surrogate endpoint between the new trial and previous trials, and a negligible risk of harmful side effects arising after the observation of the surrogate endpoint in the new trial. These 5 criteria constitute an appropriately high bar for using a surrogate endpoint to make a definitive treatment recommendation. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
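
    As a minimal illustration of the statistical side of these criteria, the sketch below fits a zero-intercept linear relation between estimated treatment effects on the surrogate and on the true endpoint across previous trials and uses it to predict the true-endpoint effect in a new trial. It is a simplification of the random-effects model named above, and all numbers and names are hypothetical, not data from the article.

      # Hypothetical sketch (not the article's model or data): zero-intercept
      # linear relation between per-trial treatment effects on the surrogate
      # endpoint and on the true endpoint, used to predict a new trial.
      import numpy as np

      # Estimated treatment effects from previous trials (illustrative values).
      surrogate_effects = np.array([0.10, 0.25, 0.18, 0.32, 0.05])
      true_effects = np.array([0.08, 0.21, 0.15, 0.27, 0.04])

      # Zero-intercept least squares: slope = sum(x*y) / sum(x^2).
      slope = np.sum(surrogate_effects * true_effects) / np.sum(surrogate_effects ** 2)

      # Predict the true-endpoint effect in a new trial from its surrogate effect.
      new_surrogate_effect = 0.20
      print(f"slope = {slope:.3f}, "
            f"predicted true-endpoint effect = {slope * new_surrogate_effect:.3f}")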

  2. Two new prediction rules for spontaneous pregnancy leading to live birth among subfertile couples, based on the synthesis of three previous models.

    NARCIS (Netherlands)

    C.C. Hunault; J.D.F. Habbema (Dik); M.J.C. Eijkemans (René); J.A. Collins (John); J.L.H. Evers (Johannes); E.R. te Velde (Egbert)

    2004-01-01

    BACKGROUND: Several models have been published for the prediction of spontaneous pregnancy among subfertile patients. The aim of this study was to broaden the empirical basis for these predictions by making a synthesis of three previously published models. METHODS:

  3. Can GSTM1 and GSTT1 polymorphisms predict clinical outcomes of chemotherapy in gastric and colorectal cancers? A result based on the previous reports

    Directory of Open Access Journals (Sweden)

    Liu H

    2016-06-01

    Haixia Liu,1,* Wei Shi,2,* Lianli Zhao,3 Dianlu Dai,4 Jinghua Gao,5 Xiangjun Kong6 1Department of Ultrasound, 2Office of Medical Statistics, 3Human Resource Department, 4Department of Surgical Oncology, 5Department of Medical Oncology, 6Central Laboratory, Cangzhou Central Hospital, Yunhe District, Cangzhou, People’s Republic of China. *These authors contributed equally to this study and should be considered co-first authors. Background: Gastric and colorectal cancers remain the major causes of cancer-related death. Although chemotherapy improves the prognosis of patients with gastrointestinal cancers, some patients do not benefit from therapy and are exposed to its adverse effects. Polymorphisms in genes including GSTM1 and GSTT1 have been explored to predict therapeutic efficacy; however, the results were inconsistent and inconclusive. Materials and methods: A systematic review and meta-analysis was performed by searching relevant studies about the association between the GSTM1 and GSTT1 polymorphisms and chemotherapy efficacy in gastrointestinal cancers in databases such as PubMed, EMBASE, Web of Science, Chinese National Knowledge Infrastructure, and Wanfang database up to January 10, 2016. Subgroup analyses were also performed according to ethnicity, cancer type, evaluation criteria, study type, chemotherapy type, and age. Results: A total of 19 articles containing 3,217 cases were finally included. Overall analysis suggested that no significant association was found between the GSTM1 and GSTT1 polymorphisms and overall toxicity, neurotoxicity, neutropenia, gastrointestinal toxicity, tumor response, or progression-free survival, while GSTM1 polymorphism was associated with overall survival (OS; hazard ratio = 1.213, 95% confidence interval = 1.060–1.388, P = 0.005). Subgroup analyses suggested that neurotoxicity was associated with GSTM1 polymorphism in the Asian population, neutropenia was associated with GSTM1 polymorphism in palliative
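
    The pooled overall-survival estimate quoted above is the kind of number produced by inverse-variance pooling of per-study hazard ratios; whether this particular meta-analysis used a fixed- or random-effects model is not stated in the abstract. A minimal fixed-effect sketch with hypothetical study-level values follows.

      # Fixed-effect inverse-variance pooling of hazard ratios (illustrative data,
      # not the studies included in this meta-analysis).
      import numpy as np

      hr = np.array([1.15, 1.30, 1.10, 1.25])         # per-study hazard ratios
      se_log_hr = np.array([0.10, 0.12, 0.15, 0.09])  # standard errors of log(HR)

      w = 1.0 / se_log_hr ** 2                         # inverse-variance weights
      pooled_log_hr = np.sum(w * np.log(hr)) / np.sum(w)
      se_pooled = 1.0 / np.sqrt(np.sum(w))
      ci = np.exp(pooled_log_hr + np.array([-1.96, 1.96]) * se_pooled)
      print(f"pooled HR = {np.exp(pooled_log_hr):.3f}, "
            f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")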

  4. Predictability by recognizable road design. [previously called: Recognizable road design.]

    NARCIS (Netherlands)

    2007-01-01

    One of the Sustainable Safety principles is that a road should have a recognizable design and a predictable alignment. If this is the case, road users know how they are expected to behave and what they can expect from other road users, so that crashes may be prevented. For roads to be recognizable,

  5. Predicting fruit consumption: the role of habits, previous behavior and mediation effects

    NARCIS (Netherlands)

    de Vries, H.; Eggers, S.M.; Lechner, L.; van Osch, L.; van Stralen, M.M.

    2014-01-01

    Background: This study assessed the role of habits and previous behavior in predicting fruit consumption as well as their additional predictive contribution besides socio-demographic and motivational factors. In the literature, habits are proposed as a stable construct that needs to be controlled

  6. Predictive factors for the development of diabetes in women with previous gestational diabetes mellitus

    DEFF Research Database (Denmark)

    Damm, P.; Kühl, C.; Bertelsen, Aksel

    1992-01-01

    OBJECTIVES: The purpose of this study was to determine the incidence of diabetes in women with previous dietary-treated gestational diabetes mellitus and to identify predictive factors for development of diabetes. STUDY DESIGN: Two to 11 years post partum, glucose tolerance was investigated in 241...... women with previous dietary-treated gestational diabetes mellitus and 57 women without previous gestational diabetes mellitus (control group). RESULTS: Diabetes developed in 42 (17.4%) women with previous gestational diabetes mellitus (3.7% insulin-dependent diabetes mellitus and 13.7% non...... of previous patients with gestational diabetes mellitus in whom plasma insulin was measured during an oral glucose tolerance test in late pregnancy a low insulin response at diagnosis was found to be an independent predictive factor for diabetes development. CONCLUSIONS: Women with previous dietary...

  7. Biased ART: a neural architecture that shifts attention toward previously disregarded features following an incorrect prediction.

    Science.gov (United States)

    Carpenter, Gail A; Gaddam, Sai Chaitanya

    2010-04-01

    Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  8. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history memorized in the Binary Space Partitioning fitness tree can effectively restrain the individuals’ revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into two categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation problems). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
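
    To make the nonrevisit idea concrete, the sketch below runs a plain PSO loop in which previously visited grid cells are memorized and perturbed away from, a crude stand-in for the Binary Space Partitioning fitness tree; the multispecies K-means machinery of MCPSO-PSH is omitted and all settings are illustrative, not the authors' implementation.

      # Plain PSO with a simple "previous search history" set that discourages
      # revisiting already-explored cells (illustrative stand-in for the BSP tree).
      import numpy as np

      def sphere(x):
          return float(np.sum(x ** 2))

      rng = np.random.default_rng(0)
      dim, n_particles, iters, cell = 10, 20, 200, 0.05
      pos = rng.uniform(-5, 5, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()
      visited = {tuple(np.floor(p / cell).astype(int)) for p in pos}  # search history

      for _ in range(iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = pos + vel
          for i in range(n_particles):
              key = tuple(np.floor(pos[i] / cell).astype(int))
              if key in visited:  # already explored: perturb instead of revisiting
                  pos[i] = pos[i] + rng.normal(scale=cell, size=dim)
                  key = tuple(np.floor(pos[i] / cell).astype(int))
              visited.add(key)
              val = sphere(pos[i])
              if val < pbest_val[i]:
                  pbest_val[i], pbest[i] = val, pos[i].copy()
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best value found:", pbest_val.min())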

  9. SONOGRAPHIC PREDICTION OF SCAR DEHISCENCE IN WOMEN WITH PREVIOUS CAESAREAN SECTION

    Directory of Open Access Journals (Sweden)

    Shubhada Suhas Jajoo

    2018-01-01

    BACKGROUND Caesarean section (Sectio Caesarea) is a surgical method for the completion of delivery. After various historical modifications of operative techniques, the modern approach consists in transverse dissection of the anterior wall of the uterus. The rate of vaginal birth after caesarean section has been significantly reduced from year to year, and the rate of repeat caesarean section has increased during the past 10 years. Evaluation of scar thickness is done by ultrasound, but the scar thickness that should serve as the guiding “cut-off value” for deciding the mode of delivery is still debatable. To better assess the risk of uterine rupture, some authors have proposed sonographic measurement of lower uterine segment thickness near term, assuming that there is an inverse correlation between LUS thickness and the risk of uterine scar defect. Therefore, this assessment for the management of women with prior CS may increase safety during labour by selecting women with the lowest risk of uterine rupture. The aim of the study is to assess the diagnostic accuracy of sonographic measurements of the Lower Uterine Segment (LUS) thickness near term in predicting uterine scar defects in women with prior Caesarean Section (CS). We aim to ascertain the best cut-off values for predicting uterine rupture. MATERIALS AND METHODS 100 antenatal women with a history of one previous LSCS attending the antenatal clinic were assessed for scar thickness by transabdominal ultrasonography and its correlation with intraoperative findings. This prospective longitudinal study was conducted for 1 year after IEC approval, with one previous LSCS as the inclusion criterion. Exclusion criteria: (1) previous myomectomy scar; (2) previous two LSCS; (3) previous hysterotomy scar. RESULTS Our findings indicate that there is a strong association between the degree of LUS thinning measured near term and the risk of uterine scar defect at birth. In our study, the optimal cut-off value for predicting

  10. Prediction of successful trial of labour in patients with a previous caesarean section

    International Nuclear Information System (INIS)

    Shaheen, N.; Khalil, S.; Iftikhar, P.

    2014-01-01

    Objective: To determine the prediction rate of success in trial of labour after one previous caesarean section. Methods: The cross-sectional study was conducted at the Department of Obstetrics and Gynaecology, Cantonment General Hospital, Rawalpindi, from January 1, 2012 to January 31, 2013, and comprised women with one previous Caesarean section and with single alive foetus at 37-41 weeks of gestation. Women with more than one Caesarean section, unknown site of uterine scar, bony pelvic deformity, placenta previa, intra-uterine growth restriction, deep transverse arrest in previous labour and non-reassuring foetal status at the time of admission were excluded. Intrapartum risk assessment included Bishop score at admission, rate of cervical dilatation and scar tenderness. SPSS 21 was used for statistical analysis. Results: Out of a total of 95 women, the trial was successful in 68 (71.6%). Estimated foetal weight and number of prior vaginal deliveries had a high predictive value for successful trial of labour after Caesarean section. Estimated foetal weight had an odds ratio of 0.46 (p<0.001), while number of prior vaginal deliveries had an odds ratio of 0.85 (p=0.010). Other factors found to be predictive of a successful trial included Bishop score at the time of admission (p<0.037) and rate of cervical dilatation in the first stage of labour (p<0.021). Conclusion: History of prior vaginal deliveries, higher Bishop score at the time of admission, rapid rate of cervical dilatation and lower estimated foetal weight were predictive of a successful trial of labour after Caesarean section. (author)

  11. Dietary self-efficacy predicts AHEI diet quality in women with previous gestational diabetes.

    Science.gov (United States)

    Ferranti, Erin Poe; Narayan, K M Venkat; Reilly, Carolyn M; Foster, Jennifer; McCullough, Marjorie; Ziegler, Thomas R; Guo, Ying; Dunbar, Sandra B

    2014-01-01

    The purpose of this study was to examine the association of intrapersonal influences of diet quality as defined by the Health Belief Model constructs in women with recent histories of gestational diabetes. A descriptive, correlational, cross-sectional design was used to analyze relationships between diet quality and intrapersonal variables, including perceptions of threat of type 2 diabetes mellitus development, benefits and barriers of healthy eating, and dietary self-efficacy, in a convenience sample of 75 community-dwelling women (55% minority; mean age, 35.5 years; SD, 5.5 years) with previous gestational diabetes mellitus. Diet quality was defined by the Alternative Healthy Eating Index (AHEI). Multiple regression was used to identify predictors of AHEI diet quality. Women had moderate AHEI diet quality (mean score, 47.6; SD, 14.3). Only higher levels of education and self-efficacy significantly predicted better AHEI diet quality, controlling for other contributing variables. There is a significant opportunity to improve diet quality in women with previous gestational diabetes mellitus. Improving self-efficacy may be an important component to include in nutrition interventions. In addition to identifying other important individual components, future studies of diet quality in women with previous gestational diabetes mellitus are needed to investigate the scope of influence beyond the individual to potential family, social, and environmental factors. © 2014 The Author(s).

  12. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields.

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin; Strawn, Laura K

    2016-02-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
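
    The rule-then-validate workflow described above can be sketched as follows: label sampling plots high or low predicted risk from proximity rules, then regress observed isolation on those labels. The rule thresholds and data below are hypothetical; only the reported odds ratios are echoed for orientation.

      # Illustrative rule-based risk labels validated with logistic regression
      # (synthetic data, not the study's field measurements).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(42)
      n = 1056  # number of drag swabs in the study
      plots = pd.DataFrame({
          "dist_to_water_m": rng.uniform(0, 500, n),
          "dist_to_pasture_m": rng.uniform(0, 500, n),
      })
      plots["near_water"] = (plots["dist_to_water_m"] < 50).astype(int)    # hypothetical rule
      plots["near_pasture"] = (plots["dist_to_pasture_m"] < 50).astype(int)

      # Simulated isolation outcome with elevated odds near water and pasture.
      logit = -2.0 + np.log(3.0) * plots["near_water"] + np.log(2.9) * plots["near_pasture"]
      plots["positive"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      X = sm.add_constant(plots[["near_water", "near_pasture"]])
      fit = sm.Logit(plots["positive"], X).fit(disp=0)
      print(np.exp(fit.params))  # odds ratios; roughly 3.0 and 2.9 by construction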

  13. Validation of a Previously Developed Geospatial Model That Predicts the Prevalence of Listeria monocytogenes in New York State Produce Fields

    Science.gov (United States)

    Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin

    2015-01-01

    Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280

  14. Predicting United States Medical Licensure Examination Step 2 clinical knowledge scores from previous academic indicators

    Directory of Open Access Journals (Sweden)

    Monteiro KA

    2017-06-01

    Kristina A Monteiro, Paul George, Richard Dollase, Luba Dumenco; Office of Medical Education, The Warren Alpert Medical School of Brown University, Providence, RI, USA. Abstract: The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensure Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students’ Step 2 CK scores from previous academic indicators to identify students at risk, with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam score, National Board of Medical Examiners subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%–69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK score within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from
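
    The published regression coefficients are not given in the abstract, so the sketch below simulates data of the same shape and fits a comparable model: regress Step 2 CK on preclinical mean exam score and Step 1 score, then check prediction error on a held-out cohort. All numbers are synthetic.

      # Illustrative regression-based prediction of Step 2 CK (synthetic data).
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n = 218
      preclinical = rng.normal(80, 6, n)
      step1 = rng.normal(230, 18, n)
      step2_ck = 30 + 0.4 * preclinical + 0.75 * step1 + rng.normal(0, 8, n)

      X = np.column_stack([preclinical, step1])
      model = LinearRegression().fit(X, step2_ck)
      print("R^2 on training cohorts:", round(model.score(X, step2_ck), 2))

      # "Validation" on a new cohort of 89 students (also simulated here).
      m = 89
      pre_v, s1_v = rng.normal(80, 6, m), rng.normal(230, 18, m)
      truth = 30 + 0.4 * pre_v + 0.75 * s1_v + rng.normal(0, 8, m)
      pred = model.predict(np.column_stack([pre_v, s1_v]))
      print("mean absolute prediction error:", round(np.mean(np.abs(pred - truth)), 1))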

  15. Predicting Radiation Pneumonitis After Stereotactic Ablative Radiation Therapy in Patients Previously Treated With Conventional Thoracic Radiation Therapy

    International Nuclear Information System (INIS)

    Liu Hui; Zhang Xu; Vinogradskiy, Yevgeniy Y.; Swisher, Stephen G.; Komaki, Ritsuko; Chang, Joe Y.

    2012-01-01

    Purpose: To determine the incidence of and risk factors for radiation pneumonitis (RP) after stereotactic ablative radiation therapy (SABR) to the lung in patients who had previously undergone conventional thoracic radiation therapy. Methods and Materials: Seventy-two patients who had previously received conventionally fractionated radiation therapy to the thorax were treated with SABR (50 Gy in 4 fractions) for recurrent disease or secondary parenchymal lung cancer (T…). … The V10 and mean lung dose (MLD) of the previous plan and the V10–V40 and MLD of the composite plan were also related to RP. Multivariate analysis revealed that ECOG PS scores of 2-3 before SABR (P=.009), FEV1 ≤65% before SABR (P=.012), V20 ≥30% of the composite plan (P=.021), and an initial PTV in the bilateral mediastinum (P=.025) were all associated with RP. Conclusions: We found that severe RP was relatively common, occurring in 20.8% of patients, and could be predicted by an ECOG PS score of 2-3, an FEV1 ≤65%, a previous PTV spanning the bilateral mediastinum, and V20 ≥30% on composite (previous RT+SABR) plans. Prospective studies are needed to validate these predictors and the scoring system on which they are based.

  16. Improved ability of biological and previous caries multimarkers to predict caries disease as revealed by multivariate PLS modelling

    Directory of Open Access Journals (Sweden)

    Ericson Thorild

    2009-11-01

    Background: Dental caries is a chronic disease with plaque bacteria, diet and saliva modifying disease activity. Here we have used the PLS method to evaluate a multiplicity of such biological variables (n = 88) for their ability to predict caries in a cross-sectional (baseline caries) and prospective (2-year caries development) setting. Methods: Multivariate PLS modelling was used to associate the many biological variables with caries recorded in thirty 14-year-old children by measuring the numbers of incipient and manifest caries lesions at all surfaces. Results: A wide but shallow gliding scale occurred, with one fifth of the variables caries promoting or protecting and four fifths non-influential. The influential markers behaved in the order of plaque bacteria > diet > saliva, with previously known plaque bacteria/diet markers and a set of new protective diet markers. A differential variable patterning appeared for new versus progressing lesions. The influential biological multimarkers (n = 18) predicted baseline caries better (ROC area 0.96) than five markers (0.92) and a single lactobacilli marker (0.7), with sensitivity/specificity of 1.87, 1.78 and 1.13 at 1/3 of the subjects diagnosed sick, respectively. Moreover, biological multimarkers (n = 18) explained 2-year caries increment slightly better than reported before but predicted it poorly (ROC area 0.76). By contrast, multimarkers based on previous caries predicted increment well, alone (ROC area 0.88) or together with biological multimarkers (0.94), with a sensitivity/specificity of 1.74 at 1/3 of the subjects diagnosed sick. Conclusion: Multimarkers behave better than single-to-five markers, but future multimarker strategies will require systematic searches for improved saliva and plaque bacteria markers.
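
    A rough sketch of the multimarker idea follows: fit a PLS model on many biological variables and score how well the fitted values separate caries-positive from caries-free subjects with ROC area. This is not the authors' PLS workflow; the data are synthetic and only the sample and marker counts mirror the abstract.

      # PLS-based multimarker prediction scored by ROC area (synthetic data).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n_subjects, n_markers = 30, 88
      X = rng.normal(size=(n_subjects, n_markers))
      # Outcome influenced by roughly a fifth of the markers, as in the abstract.
      informative = rng.choice(n_markers, size=n_markers // 5, replace=False)
      risk = X[:, informative].sum(axis=1)
      y = (risk + rng.normal(scale=1.0, size=n_subjects) > 0).astype(int)

      pls = PLSRegression(n_components=2).fit(X, y)
      scores = pls.predict(X).ravel()
      print("ROC area (training data):", round(roc_auc_score(y, scores), 2))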

  17. Predictive effects of previous episodes on the risk of recurrence in depressive and bipolar disorders

    DEFF Research Database (Denmark)

    Kessing, Lars Vedel; Andersen, Per Kragh

    2005-01-01

    Findings from several studies have suggested that the risk of recurrence increases with the number of previous episodes in depressive and bipolar disorders. However, a comprehensive and critical review of the literature published during the past century shows that in several previous studies...

  18. Predicting university performance in psychology: the role of previous performance and discipline-specific knowledge

    OpenAIRE

    Betts, LR; Elder, TJ; Hartley, J; Blurton, A

    2008-01-01

    Recent initiatives to enhance retention and widen participation make it crucial to understand the factors that predict students' performance during their undergraduate degree. The present research used Structural Equation Modeling (SEM) to test three separate models that examined the extent to which British Psychology students' A-level entry qualifications predicted: (1) their performance in years 1-3 of their Psychology degree, and (2) their overall degree performance. Students' overall...

  19. Predicting intraindividual changes in learning strategies: The effects of previous achievement

    OpenAIRE

    Buško, Vesna; Mujagić, Amela

    2013-01-01

    Socio-cognitive models of self-regulated learning (e.g., Pintrich, 2000) emphasize the contextualized nature of the learning process and within-person variation in learning processes, along with between-person variability in self-regulation. Previous studies about the contextual nature of learning strategies have mostly focused on the effects of different contextual factors on interindividual differences in learning strategy utilization. However, less attention was given to the question about contextual ef...

  20. Age, training, and previous experience predict race performance in long-distance inline skaters, not anthropometry.

    Science.gov (United States)

    Knechtle, Beat; Knechtle, Patrizia; Rüst, Christoph Alexander; Rosemann, Thomas; Lepers, Romuald

    2012-02-01

    The association of characteristics of anthropometry, training, and previous experience with race time in 84 recreational, long-distance, inline skaters at the longest inline marathon in Europe (111 km), the Inline One-eleven in Switzerland, was investigated to identify predictor variables for performance. Age, duration per training unit, and personal best time were the only three variables related to race time in a multiple regression, while none of the 16 anthropometric variables were related. Anthropometric characteristics seem to be of no importance for a fast race time in a long-distance inline skating race in contrast to training volume and previous experience, when controlled with covariates. Improving performance in a long-distance inline skating race might be related to a high training volume and previous race experience. Also, doing such a race requires a parallel psychological effort, mental stamina, focus, and persistence. This may be reflected in the preparation and training for the event. Future studies should investigate what motivates these athletes to train and compete.

  1. Prediction of Happy-Sad mood from daily behaviors and previous sleep history.

    Science.gov (United States)

    Sano, Akane; Yu, Amy Z; McHill, Andrew W; Phillips, Andrew J K; Taylor, Sara; Jaques, Natasha; Klerman, Elizabeth B; Picard, Rosalind W

    2015-01-01

    We collected and analyzed subjective and objective data using surveys and wearable sensors worn day and night from 68 participants for ~30 days each, to address questions related to the relationships among sleep duration, sleep irregularity, self-reported Happy-Sad mood and other daily behavioral factors in college students. We analyzed this behavioral and physiological data to (i) identify factors that classified the participants into Happy-Sad mood using support vector machines (SVMs); and (ii) analyze how accurately sleep duration and sleep regularity for the past 1-5 days classified morning Happy-Sad mood. We found statistically significant associations between Sad mood and poor health-related factors. Behavioral factors including the frequency of negative social interactions, negative emails, and total academic activity hours showed the best performance in separating the Happy-Sad mood groups. Sleep regularity and sleep duration predicted daily Happy-Sad mood with 65-80% accuracy. The number of nights giving the best prediction of Happy-Sad mood varied for different individuals.
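
    A simplified sketch of the classification step follows: build features from the previous k days of sleep duration and a regularity proxy and train an SVM on morning Happy-Sad labels. The data, the regularity feature, and the window length are synthetic stand-ins for the study's measurements.

      # SVM classification of morning mood from past-k-days sleep features
      # (synthetic data; not the study's feature set).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(5)
      n_days, k = 600, 3
      sleep_hours = rng.normal(7, 1.2, n_days)
      regularity = -np.abs(sleep_hours - 7)  # crude regularity proxy

      # Feature vector for each morning: past k days of duration and regularity.
      X = np.array([np.r_[sleep_hours[i-k:i], regularity[i-k:i]] for i in range(k, n_days)])
      mood = (X[:, :k].mean(axis=1) + 0.5 * X[:, k:].mean(axis=1)
              + rng.normal(0, 0.6, len(X)) > 6.6).astype(int)  # 1 = Happy

      X_tr, X_te, y_tr, y_te = train_test_split(X, mood, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("held-out accuracy:", round(clf.score(X_te, y_te), 2))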

  2. Prediction of Happy-Sad Mood from Daily Behaviors and Previous Sleep History

    Science.gov (United States)

    Sano, Akane; Yu, Amy; McHill, Andrew W.; Phillips, Andrew J. K.; Taylor, Sara; Jaques, Natasha; Klerman, Elizabeth B.; Picard, Rosalind W.

    2016-01-01

    We collected and analyzed subjective and objective data using surveys and wearable sensors worn day and night from 68 participants, for 30 days each, to address questions related to the relationships among sleep duration, sleep irregularity, self-reported Happy-Sad mood and other factors in college students. We analyzed daily and monthly behavior and physiology and identified factors that affect mood, including how accurately sleep duration and sleep regularity for the past 1-5 days classified the participants into high/low mood using support vector machines. We found statistically significant associations between sad mood and poor health-related factors. Behavioral factors such as the percentage of neutral social interactions and the total academic activity hours showed the best performance in separating the Happy-Sad mood groups. Sleep regularity was a more important discriminator of mood than sleep duration for most participants, although both variables predicted happy/sad mood with 70-82% accuracy. The number of nights giving the best prediction of happy/sad mood varied for different groups of individuals. PMID:26737854

  3. Estimation of Resting Energy Expenditure: Validation of Previous and New Predictive Equations in Obese Children and Adolescents.

    Science.gov (United States)

    Acar-Tek, Nilüfer; Ağagündüz, Duygu; Çelik, Bülent; Bozbulut, Rukiye

    2017-08-01

    Accurate estimation of resting energy expenditure (REE) in children and adolescents is important to establish estimated energy requirements. The aim of the present study was to measure REE in obese children and adolescents by the indirect calorimetry method, compare these values with REE values estimated by equations, and develop the most appropriate equation for this group. One hundred and three obese children and adolescents (57 males, 46 females) between 7 and 17 years (10.6 ± 2.19 years) were recruited for the study. REE measurements of subjects were made with indirect calorimetry (COSMED, FitMatePro, Rome, Italy) and body compositions were analyzed. In females, the percentage of accurate prediction varied from 32.6 (World Health Organization [WHO]) to 43.5 (Molnar and Lazzer). The bias for equations was -0.2% (Kim), 3.7% (Molnar), and 22.6% (Derumeaux-Burel). Kim's, Schmelzle's, and Henry's equations had the lowest root mean square error (RMSE; 266, 267, and 268 kcal/d, respectively). The equation with the highest RMSE among female subjects was the Derumeaux-Burel equation (394 kcal/d). In males, while the Institute of Medicine (IOM) equation had the lowest accurate prediction value (12.3%), the highest values were found using Schmelzle's (42.1%), Henry's (43.9%), and Müller's equations (fat-free mass, FFM; 45.6%). While Kim's and Müller's equations had the smallest bias (-0.6% and 9.9%, respectively), Schmelzle's equation had the smallest RMSE (331 kcal/d). The new specific equation based on FFM was generated as follows: REE = 451.722 + (23.202 * FFM). According to Bland-Altman plots, the new equations are distributed randomly in both males and females. Previously developed predictive equations mostly provided inaccurate and biased estimates of REE. However, the new predictive equations allow clinicians to estimate REE in obese children and adolescents with sufficient and acceptable accuracy.
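
    The new FFM-based equation reported above can be applied directly; a small worked example follows. The equation is taken from the abstract, while the FFM input values are illustrative.

      # Worked example of the reported equation: REE = 451.722 + 23.202 * FFM.
      def predicted_ree_kcal_per_day(ffm_kg: float) -> float:
          """Resting energy expenditure predicted from fat-free mass (kg)."""
          return 451.722 + 23.202 * ffm_kg

      for ffm in (30.0, 40.0, 50.0):  # illustrative fat-free mass values
          print(f"FFM = {ffm:.0f} kg -> predicted REE = "
                f"{predicted_ree_kcal_per_day(ffm):.0f} kcal/d")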

  4. Attribute and topology based change detection in a constellation of previously detected objects

    Science.gov (United States)

    Paglieroni, David W.; Beer, Reginald N.

    2016-01-19

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
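
    A conceptual sketch of the attribute-matching step follows: each newly detected object is matched to the nearest stored object in the constellation database and flagged as new or changed when its attributes differ beyond thresholds. The attribute names, thresholds, and matching rule are illustrative, not the patented system's actual schema.

      # Illustrative attribute-based change detection against a constellation
      # of previously detected objects (hypothetical attributes and thresholds).
      import numpy as np

      previous = [  # constellation database: location and size of prior detections
          {"x": 10.0, "y": 5.0, "size": 2.0},
          {"x": 40.0, "y": 22.0, "size": 1.5},
      ]
      new_scan = [
          {"x": 10.2, "y": 5.1, "size": 2.1},   # same object, essentially unchanged
          {"x": 60.0, "y": 30.0, "size": 1.0},  # no counterpart -> new object
      ]

      MATCH_RADIUS, SIZE_TOL = 1.0, 0.5
      for obj in new_scan:
          dists = [np.hypot(obj["x"] - p["x"], obj["y"] - p["y"]) for p in previous]
          i = int(np.argmin(dists))
          if dists[i] > MATCH_RADIUS:
              print(f"new object at ({obj['x']}, {obj['y']})")
          elif abs(obj["size"] - previous[i]["size"]) > SIZE_TOL:
              print(f"object {i} changed size")
          else:
              print(f"object {i} unchanged")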

  5. Efficacy of peg-interferon based treatment in patients with hepatitis C refractory to previous conventional interferon-based treatment

    International Nuclear Information System (INIS)

    Shaikh, S.; Devrajani, B.R.; Kalhoro, M.

    2012-01-01

    Objective: To determine the efficacy of peg-interferon-based therapy in patients refractory to previous conventional interferon-based treatment and factors predicting sustained viral response (SVR). Study Design: Analytical study. Place and Duration of Study: Medical Unit IV, Liaquat University Hospital, Jamshoro, from July 2009 to June 2011. Methodology: This study included consecutive patients with hepatitis C who were previously treated with conventional interferon-based treatment for 6 months but were either non-responders, relapsers, or had virologic breakthrough, and who had stage ≥2 fibrosis on liver biopsy. All eligible patients were given peg-interferon at a dosage of 180 μg weekly with ribavirin thrice a day for 6 months. Sustained Viral Response (SVR) was defined as absence of HCV RNA at twenty-four weeks after treatment. All data was processed on SPSS version 16. Results: Out of 450 patients enrolled in the study, 192 were excluded from the study on the basis of minimal fibrosis (stage 0 and 1). Two hundred and fifty eight patients fulfilled the inclusion criteria and 247 completed the course of peg-interferon treatment. One hundred and sixty one (62.4%) were males and 97 (37.6%) were females. The mean age was 39.9 ± 6.1 years, haemoglobin was 11.49 ± 2.45 g/dl, platelet count was 127.2 ± 50.6 × 10³/mm³, and ALT was 99 ± 65 IU/L. SVR was achieved in 84 (32.6%). A strong association was found between SVR and the pattern of response (p = 0.001), degree of fibrosis, and early viral response (p = 0.001). Conclusion: Peg-interferon based treatment is an effective and safe treatment option for patients refractory to conventional interferon-based treatment. (author)

  6. The chronic obstructive pulmonary disease assessment test improves the predictive value of previous exacerbations for poor outcomes in COPD.

    Science.gov (United States)

    Miravitlles, Marc; García-Sidro, Patricia; Fernández-Nistal, Alonso; Buendía, María Jesús; Espinosa de Los Monteros, María José; Esquinas, Cristina; Molina, Jesús

    2015-01-01

    Chronic obstructive pulmonary disease (COPD) exacerbations have a negative impact on the quality of life of patients and the evolution of the disease. We have investigated the prognostic value of several health-related quality of life questionnaires to predict the appearance of a composite event (new ambulatory or emergency exacerbation, hospitalization, or death) over a 1-year follow-up. This was a multicenter, prospective, observational study. Patients completed four questionnaires after recovering from an exacerbation (COPD Assessment Test [CAT], a Clinical COPD Questionnaire [CCQ], COPD Severity Score [COPDSS], and Airways Questionnaire [AQ20]). Patients were followed-up until the appearance of the composite event or for 1 year, whichever came first. A total of 497 patients were included in the study. The majority of them were men (89.7%), with a mean age of 68.7 (SD 9.2) years, and a forced expiratory volume in 1 second of 47.1% (SD 17.5%). A total of 303 (61%) patients experienced a composite event. Patients with an event had worse mean scores of all questionnaires at baseline compared to patients without event: CAT=12.5 vs 11.3 (P=0.028); CCQ=2.2 vs 1.9 (P=0.013); COPDSS=12.3 vs 10.9 (P=0.001); AQ20=8.3 vs 7.5 (P=0.048). In the multivariate analysis, only previous history of exacerbations and CAT score ≥13.5 were significant risk factors for the composite event. A CAT score ≥13.5 increased the predictive value of previous exacerbations with an area under the receiver operating characteristic curve of 0.864 (95% CI: 0.829-0.899; P=0.001). The predictive value of previous exacerbations significantly increased only in one of the four trialled questionnaires, namely in the CAT questionnaire. However, previous history of exacerbations was the strongest predictor of the composite event.

  7. Prediction based on mean subset

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Brown, P. J.; Madsen, Henrik

    2002-01-01

    Shrinkage methods have traditionally been applied in prediction problems. In this article we develop a shrinkage method (mean subset) that forms an average of regression coefficients from individual subsets of the explanatory variables. A Bayesian approach is taken to derive an expression of how the coefficient vectors from each subset should be weighted. It is not computationally feasible to calculate the mean subset coefficient vector for larger problems, and thus we suggest an algorithm to find an approximation to the mean subset coefficient vector. In a comprehensive Monte Carlo simulation study, it is found that the proposed mean subset method has superior prediction performance compared to prediction based on the best subset method, and in some settings is also better than the ridge regression and lasso methods. The conclusions drawn from the Monte Carlo study are corroborated in an example in which prediction...
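
    A rough sketch of the mean subset idea follows: fit ordinary least squares on every subset of the predictors, embed each coefficient vector in full length, and average them with model weights. The BIC-based weights below are only a stand-in for the Bayesian weighting derived in the article, and the data are synthetic.

      # "Mean subset" coefficient averaging over all predictor subsets (sketch).
      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      n, p = 100, 6
      X = rng.normal(size=(n, p))
      beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5, 0.0])
      y = X @ beta_true + rng.normal(scale=1.0, size=n)

      coefs, weights = [], []
      for k in range(1, p + 1):
          for subset in combinations(range(p), k):
              Xs = X[:, subset]
              b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
              rss = np.sum((y - Xs @ b) ** 2)
              bic = n * np.log(rss / n) + k * np.log(n)   # model-fit criterion
              full = np.zeros(p)
              full[list(subset)] = b                       # embed in full length
              coefs.append(full)
              weights.append(np.exp(-0.5 * bic))           # stand-in model weight

      weights = np.array(weights) / np.sum(weights)
      mean_subset_coef = np.average(np.array(coefs), axis=0, weights=weights)
      print(np.round(mean_subset_coef, 2))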

  8. Survival Prediction in Patients Undergoing Open-Heart Mitral Valve Operation After Previous Failed MitraClip Procedures.

    Science.gov (United States)

    Geidel, Stephan; Wohlmuth, Peter; Schmoeckel, Michael

    2016-03-01

    The objective of this study was to analyze the results of open heart mitral valve operations for survival prediction in patients with previously unsuccessful MitraClip procedures. Thirty-three consecutive patients who underwent mitral valve surgery in our institution were studied. At a median of 41 days, they had previously undergone one to five futile MitraClip implantations. At the time of their operations, patients were 72.6 ± 10.3 years old, and the calculated risk, using the European System for Cardiac Operative Risk Evaluation (EuroSCORE) II, was a median of 26.5%. Individual outcomes were recorded, and all patients were monitored postoperatively. Thirty-day mortality was 9.1%, and the overall survival at 2.2 years was 60.6%. Seven cardiac-related and six noncardiac deaths occurred. Univariate survival regression models demonstrated a significant influence of the following variables on survival: EuroSCORE II (p = 0.0022), preoperative left ventricular end-diastolic dimension (p = 0.0052), left ventricular ejection fraction (p = 0.0249), coronary artery disease (p = 0.0385), and severe pulmonary hypertension (p = 0.0431). Survivors showed considerable improvements in their New York Heart Association class (p < 0.0001), left ventricular ejection fraction (p = 0.0080), grade of mitral regurgitation (p = 0.0350), and mitral valve area (p = 0.0486). Survival after mitral repair was not superior to survival after replacement. Indications for surgery after failed MitraClip procedures must be considered with the greatest of care. Variables predicting postoperative survival should be taken into account regarding the difficult decision as to whether to operate or not. Our data suggest that replacement of the pretreated mitral valve is probably the more reasonable concept rather than complex repairs. When the EuroSCORE II at the time of surgery exceeds 30%, conservative therapy is advisable. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc

  9. Previous injuries and some training characteristics predict running-related injuries in recreational runners: a prospective cohort study.

    Science.gov (United States)

    Hespanhol Junior, Luiz Carlos; Pena Costa, Leonardo Oliveira; Lopes, Alexandre Dias

    2013-12-01

    What is the incidence of running-related injuries (RRIs) in recreational runners? Which personal and training characteristics predict RRIs in recreational runners? Prospective cohort study. A total of 200 recreational runners answered a fortnightly online survey containing questions about their running routine, races, and presence of RRI. These runners were followed-up for a period of 12 weeks. The primary outcome of this study was running-related injury. The incidence of injuries was calculated taking into account the exposure to running and was expressed by RRI/1000 hours. The association between potential predictive factors and RRIs was estimated using generalised estimating equation models. A total of 84 RRIs were registered in 60 (31%) of the 191 recreational runners who completed all follow-up surveys. Of the injured runners 30% (n=18/60) developed two or more RRIs, with 5/18 (28%) being recurrences. The incidence of RRI was 10 RRI/1000 hours of running exposure. The main type of RRI observed was muscle injuries (30%, n=25/84). The knee was the most commonly affected anatomical region (19%, n=16/84). The variables associated with RRI were: previous RRI (OR 1.88, 95% CI 1.01 to 3.51), duration of training although the effect was very small (OR 1.01, 95% CI 1.00 to 1.02), speed training (OR 1.46, 95% CI 1.02 to 2.10), and interval training (OR 0.61, 95% CI 0.43 to 0.88). Physiotherapists should be aware and advise runners that past RRI and speed training are associated with increased risk of further RRI, while interval training is associated with lower risk, although these associations may not be causative. Copyright © 2013 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
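
    The exposure-adjusted incidence quoted above is a simple rate: injuries divided by total running hours, scaled to 1000 hours. The total exposure value below is inferred from the reported figures for illustration only.

      # Exposure-adjusted injury incidence (illustrative total-hours value).
      n_injuries = 84
      total_running_hours = 8400        # inferred; yields the reported 10 RRI/1000 h
      incidence = n_injuries / total_running_hours * 1000
      print(f"{incidence:.1f} RRI per 1000 hours of running")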

  10. Cultivation-based multiplex phenotyping of human gut microbiota allows targeted recovery of previously uncultured bacteria

    DEFF Research Database (Denmark)

    Rettedal, Elizabeth; Gumpert, Heidi; Sommer, Morten

    2014-01-01

    The human gut microbiota is linked to a variety of human health issues and implicated in antibiotic resistance gene dissemination. Most of these associations rely on culture-independent methods, since it is commonly believed that gut microbiota cannot be easily or sufficiently cultured. Here, we...... microbiota. Based on the phenotypic mapping, we tailor antibiotic combinations to specifically select for previously uncultivated bacteria. Utilizing this method we cultivate and sequence the genomes of four isolates, one of which apparently belongs to the genus Oscillibacter; uncultivated Oscillibacter...

  11. In vivo dentate nucleus MRI relaxometry correlates with previous administration of Gadolinium-based contrast agents

    Energy Technology Data Exchange (ETDEWEB)

    Tedeschi, Enrico; Canna, Antonietta; Cocozza, Sirio; Russo, Carmela; Angelini, Valentina; Brunetti, Arturo [University ' ' Federico II' ' , Neuroradiology, Department of Advanced Biomedical Sciences, Naples (Italy); Palma, Giuseppe; Quarantelli, Mario [National Research Council, Institute of Biostructure and Bioimaging, Naples (Italy); Borrelli, Pasquale; Salvatore, Marco [IRCCS SDN, Naples (Italy); Lanzillo, Roberta; Postiglione, Emanuela; Morra, Vincenzo Brescia [University ' ' Federico II' ' , Department of Neurosciences, Reproductive and Odontostomatological Sciences, Naples (Italy)

    2016-12-15

    To evaluate changes in T1 and T2* relaxometry of dentate nuclei (DN) with respect to the number of previous administrations of Gadolinium-based contrast agents (GBCA). In 74 relapsing-remitting multiple sclerosis (RR-MS) patients with variable disease duration (9.8±6.8 years) and severity (Expanded Disability Status Scale scores:3.1±0.9), the DN R1 (1/T1) and R2* (1/T2*) relaxation rates were measured using two unenhanced 3D Dual-Echo spoiled Gradient-Echo sequences with different flip angles. Correlations of the number of previous GBCA administrations with DN R1 and R2* relaxation rates were tested, including gender and age effect, in a multivariate regression analysis. The DN R1 (normalized by brainstem) significantly correlated with the number of GBCA administrations (p<0.001), maintaining the same significance even when including MS-related factors. Instead, the DN R2* values correlated only with age (p=0.003), and not with GBCA administrations (p=0.67). In a subgroup of 35 patients for whom the administered GBCA subtype was known, the effect of GBCA on DN R1 appeared mainly related to linear GBCA. In RR-MS patients, the number of previous GBCA administrations correlates with R1 relaxation rates of DN, while R2* values remain unaffected, suggesting that T1-shortening in these patients is related to the amount of Gadolinium given. (orig.)

  12. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    Science.gov (United States)

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

    Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports necessity in strengthening and revising theory with empirical data.
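
    A simplified sketch of checking a theory-based quantitative prediction follows: a prediction counts as confirmed here if the predicted effect size falls inside the observed effect's 95% confidence interval. This decision rule and the values are illustrative, not necessarily the exact criterion or data used in the paper.

      # Checking predicted effect sizes against observed confidence intervals.
      from dataclasses import dataclass

      @dataclass
      class Prediction:
          construct: str
          predicted_effect: float   # theory-based predicted effect size
          observed_effect: float
          ci_low: float
          ci_high: float

      predictions = [  # hypothetical values
          Prediction("pros", 0.30, 0.28, 0.22, 0.34),
          Prediction("cons", 0.10, 0.18, 0.14, 0.22),
      ]

      for p in predictions:
          confirmed = p.ci_low <= p.predicted_effect <= p.ci_high
          print(f"{p.construct}: predicted {p.predicted_effect:.2f}, "
                f"observed {p.observed_effect:.2f} [{p.ci_low:.2f}, {p.ci_high:.2f}] -> "
                f"{'confirmed' if confirmed else 'not confirmed'}")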

  13. A New Zealand based cohort study of anaesthetic trainees' career outcomes compared with previously expressed intentions.

    Science.gov (United States)

    Moran, E M L; French, R A; Kennedy, R R

    2011-09-01

    Predicting workforce requirements is a difficult but necessary part of health resource planning. A 'snapshot' workforce survey undertaken in 2002 examined issues that New Zealand anaesthesia trainees expected would influence their choice of future workplace. We have restudied the same cohort to see if that workforce survey was a good predictor of outcome. Seventy (51%) of 138 surveys were completed in 2009 compared with 100 (80%) of 138 in the 2002 survey. Eighty percent of the 2002 respondents planned consultant positions in New Zealand. We found 64% of respondents were working in New Zealand (P …) … New Zealand based respondents but only 40% of those living outside New Zealand agreed or strongly agreed with this statement (P …) … New Zealand but was important for only 2% of those resident in New Zealand (P …) … New Zealand were predominantly between NZ$150,000 and $200,000 while those overseas received between NZ$300,000 and $400,000. Of those that are resident in New Zealand, 84% had studied in a New Zealand medical school compared with 52% of those currently working overseas (P < 0.01). Our study shows that stated career intentions in a group do not predict the actual group outcomes. We suggest that 'snapshot' studies examining workforce intentions are of little value for workforce planning. However, we believe an ongoing program matching career aspirations against career outcomes would be a useful tool in workforce planning.

  14. Vaccinia-based influenza vaccine overcomes previously induced immunodominance hierarchy for heterosubtypic protection.

    Science.gov (United States)

    Kwon, Ji-Sun; Yoon, Jungsoon; Kim, Yeon-Jung; Kang, Kyuho; Woo, Sunje; Jung, Dea-Im; Song, Man Ki; Kim, Eun-Ha; Kwon, Hyeok-Il; Choi, Young Ki; Kim, Jihye; Lee, Jeewon; Yoon, Yeup; Shin, Eui-Cheol; Youn, Jin-Won

    2014-08-01

    Growing concerns about unpredictable influenza pandemics require a broadly protective vaccine against diverse influenza strains. One of the promising approaches was a T cell-based vaccine, but the narrow breadth of T-cell immunity due to the immunodominance hierarchy established by previous influenza infection and efficacy against only mild challenge condition are important hurdles to overcome. To model T-cell immunodominance hierarchy in humans in an experimental setting, influenza-primed C57BL/6 mice were chosen and boosted with a mixture of vaccinia recombinants, individually expressing consensus sequences from avian, swine, and human isolates of influenza internal proteins. As determined by IFN-γ ELISPOT and polyfunctional cytokine secretion, the vaccinia recombinants of influenza expanded the breadth of T-cell responses to include subdominant and even minor epitopes. Vaccine groups were successfully protected against 100 LD50 challenges with PR/8/34 and highly pathogenic avian influenza H5N1, which contained the identical dominant NP366 epitope. Interestingly, in challenge with pandemic A/Cal/04/2009 containing mutations in the dominant epitope, only the group vaccinated with rVV-NP + PA showed improved protection. Taken together, a vaccinia-based influenza vaccine expressing conserved internal proteins improved the breadth of influenza-specific T-cell immunity and provided heterosubtypic protection against immunologically close as well as distant influenza strains. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. The impact of previous knee injury on force plate and field-based measures of balance.

    Science.gov (United States)

    Baltich, Jennifer; Whittaker, Jackie; Von Tscharner, Vinzenz; Nettel-Aguirre, Alberto; Nigg, Benno M; Emery, Carolyn

    2015-10-01

    Individuals with post-traumatic osteoarthritis demonstrate increased sway during quiet stance. The prospective association between balance and disease onset is unknown. Improved understanding of balance in the period between joint injury and disease onset could inform secondary prevention strategies to prevent or delay the disease. This study examines the association between youth sport-related knee injury and balance, 3-10 years post-injury. Participants included 50 individuals (ages 15-26 years) with a sport-related intra-articular knee injury sustained 3-10 years previously and 50 uninjured age-, sex- and sport-matched controls. Force-plate measures during single-limb stance (center-of-pressure 95% ellipse-area, path length, excursion, entropic half-life) and field-based balance scores (triple single-leg hop, star-excursion, unipedal dynamic balance) were collected. Descriptive statistics (mean within-pair difference; 95% confidence intervals) were used to compare groups. Linear regression (adjusted for injury history) was used to assess the relationship between ellipse-area and field-based scores. Injured participants on average demonstrated greater medio-lateral excursion [mean within-pair difference (95% confidence interval); 2.8mm (1.0, 4.5)], more regular medio-lateral position [10ms (2, 18)], and shorter triple single-leg hop distances [-30.9% (-8.1, -53.7)] than controls, while no between group differences existed for the remaining outcomes. After taking into consideration injury history, triple single leg hop scores demonstrated a linear association with ellipse area (β=0.52, 95% confidence interval 0.01, 1.01). On average the injured participants adjusted their position less frequently and demonstrated a larger magnitude of movement during single-limb stance compared to controls. These findings support the evaluation of balance outcomes in the period between knee injury and post-traumatic osteoarthritis onset. Copyright © 2015 Elsevier Ltd. All rights

  16. Metabolic activity by ¹⁸F-FDG-PET/CT is predictive of early response after nivolumab in previously treated NSCLC

    Energy Technology Data Exchange (ETDEWEB)

    Kaira, Kyoichi; Altan, Bolag [Gunma University Graduate School of Medicine, Department of Oncology Clinical Development, Maebashi, Gunma (Japan); Higuchi, Tetsuya; Arisaka, Yukiko; Tokue, Azusa [Gunma University Graduate School of Medicine, Department of Diagnostic Radiology and Nuclear Medicine, Maebashi, Gunma (Japan); Naruse, Ichiro [Hidaka Hospital, Department of Respiratory Medicine, Hidaka (Japan); Suda, Satoshi [Hidaka Hospital, Department of Radiology, Hidaka (Japan); Mogi, Akira; Shimizu, Kimihiro [Gunma University Graduate School of Medicine, Department of General Surgical Science, Maebashi, Gunma (Japan); Sunaga, Noriaki [Gunma University Hospital, Oncology Center, Maebashi, Gunma (Japan); Hisada, Takeshi [Gunma University Hospital, Department of Respiratory Medicine, Maebashi, Gunma (Japan); Kitano, Shigehisa [National Cancer Center Hospital, Department of Experimental Therapeutics, Tokyo (Japan); Obinata, Hideru; Asao, Takayuki [Gunma University Initiative for Advanced Research, Big Data Center for Integrative Analysis, Maebashi, Gunma (Japan); Yokobori, Takehiko [Gunma University Initiative for Advanced Research, Division of Integrated Oncology Research, Research Program for Omics-based Medical Science, Maebashi, Gunma (Japan); Mori, Keita [Clinical Research Support Center, Shizuoka Cancer Center, Suntou-gun (Japan); Nishiyama, Masahiko [Gunma University Graduate School of Medicine, Department of Molecular Pharmacology and Oncology, Maebashi, Gunma (Japan); Tsushima, Yoshihito [Gunma University Graduate School of Medicine, Department of Diagnostic Radiology and Nuclear Medicine, Maebashi, Gunma (Japan); Gunma University Initiative for Advanced Research (GIAR), Research Program for Diagnostic and Molecular Imaging, Division of Integrated Oncology Research, Maebashi, Gunma (Japan)

    2018-01-15

    Nivolumab, an anti-programmed death-1 (PD-1) antibody, is administered to patients with previously treated non-small cell lung cancer. However, no established biomarker predicting the efficacy of nivolumab is known. Here, we conducted a preliminary study to investigate whether ¹⁸F-FDG-PET/CT could predict the therapeutic response to nivolumab at an early phase. Twenty-four patients were enrolled in this study. ¹⁸F-FDG-PET/CT was carried out before and 1 month after nivolumab therapy. SUVmax, metabolic tumour volume (MTV), and total lesion glycolysis (TLG) were calculated. Immunohistochemical analysis of PD-L1 expression and tumour-infiltrating lymphocytes was conducted. Among all patients, a partial metabolic response to nivolumab was observed in 29% based on SUVmax, 25% based on MTV, and 33% based on TLG, whereas seven (29%) patients achieved a partial response (PR) based on RECIST v1.1. The predictive probability of PR (100% vs. 29%, p = 0.021) and of progressive disease (100% vs. 22.2%, p = 0.002) at 1 month after nivolumab initiation was significantly higher with ¹⁸F-FDG-PET/CT than with CT scans. Multivariate analysis confirmed that ¹⁸F-FDG uptake after administration of nivolumab was an independent prognostic factor. PD-L1 expression and nivolumab plasma concentration could not precisely predict the early therapeutic efficacy of nivolumab. Metabolic response assessed by ¹⁸F-FDG was effective in predicting efficacy and survival 1 month after nivolumab treatment. (orig.)

  17. Clinical Inquiry: What's the best way to predict the success of a trial of labor after a previous C-section?

    Science.gov (United States)

    Warren, Johanna B; Hamilton, Andrew

    2015-12-01

    Seven validated prospective scoring systems, and one unvalidated system, predict a successful TOLAC based on a variety of clinical factors. The systems use different outcome statistics, so their predictive accuracy can't be directly compared.

  18. Late preterm birth and previous cesarean section: a population-based cohort study.

    Science.gov (United States)

    Yasseen Iii, Abdool S; Bassil, Kate; Sprague, Ann; Urquia, Marcelo; Maguire, Jonathon L

    2018-02-21

    Late preterm birth (LPB) is increasingly common and associated with higher morbidity and mortality than term birth. Yet, little is known about the influence of previous cesarean section (PCS) on the occurrence of LPB in subsequent pregnancies. We aim to evaluate this association along with the potential mediation by cesarean sections in the current pregnancy. We use population-based birth registry data (2005-2012) to establish a cohort of live born singleton infants born between 34 and 41 gestational weeks to multiparous mothers. PCS was the primary exposure, LPB (34-36 weeks) was the primary outcome, and an unplanned or emergency cesarean section in the current pregnancy was the potential mediator. Associations were quantified using propensity-weighted multivariable Poisson regression, and mediating associations were explored using the Baron-Kenny approach. The cohort included 481,531 births, 21,893 (4.5%) were LPB, and 119,983 (24.9%) were preceded by at least one PCS. Among mothers with at least one PCS, 6307 (5.26%) were LPB. There was an increased risk of LPB among women with at least one PCS (adjusted relative risk (aRR) 1.20; 95% CI 1.16-1.23). Unplanned or emergency cesarean section in the current pregnancy was identified as a strong mediator to this relationship (mediation ratio = 97%). PCS was associated with higher risk of LPB in subsequent pregnancies. This may be due to an increased risk of subsequent unplanned or emergency preterm cesarean sections. Efforts to minimize index cesarean sections may reduce the risk of LPB in subsequent pregnancies.
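
    The core estimate above is a risk ratio from a propensity-weighted Poisson regression, followed by a Baron-Kenny style mediation check. The sketch below illustrates that workflow on synthetic data with the Python statsmodels library; the variable names, effect sizes, and logistic propensity model are illustrative assumptions, not the registry analysis itself.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 20000
    # Synthetic cohort: exposure = previous cesarean (pcs), mediator = unplanned CS now (med), outcome = LPB.
    age = rng.normal(30, 5, n)
    pcs = rng.binomial(1, 1 / (1 + np.exp(-(-1.2 + 0.02 * (age - 30)))), n)
    med = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.5 * pcs))), n)
    lpb = rng.binomial(1, 1 / (1 + np.exp(-(-3.2 + 0.1 * pcs + 0.8 * med))), n)
    df = pd.DataFrame({"age": age, "pcs": pcs, "med": med, "lpb": lpb})

    # Inverse-probability-of-treatment weights from a logistic propensity model for PCS.
    ps = sm.Logit(df.pcs, sm.add_constant(df[["age"]])).fit(disp=0).predict()
    w = np.where(df.pcs == 1, 1 / ps, 1 / (1 - ps))

    def poisson_rr(covars):
        """Weighted Poisson regression with robust SEs; exp(coef) approximates the relative risk."""
        X = sm.add_constant(df[covars])
        fit = sm.GLM(df.lpb, X, family=sm.families.Poisson(), freq_weights=w).fit(cov_type="HC1")
        return np.exp(fit.params["pcs"])

    rr_total = poisson_rr(["pcs"])            # total effect of PCS on LPB
    rr_direct = poisson_rr(["pcs", "med"])    # direct effect, adjusting for the mediator
    print(f"total RR={rr_total:.2f}, direct RR={rr_direct:.2f}, "
          f"mediated share≈{(np.log(rr_total) - np.log(rr_direct)) / np.log(rr_total):.0%}")
    ```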

  19. Prediction-based dynamic load-sharing heuristics

    Science.gov (United States)

    Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.

    1993-01-01

    The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
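
    As a rough illustration of the idea (not the authors' exact heuristics), the sketch below places each arriving process on the node whose predicted total resource demand is lowest, using per-program predicted demands instead of the instantaneous system state; the demand values, program names, and node count are made up.

    ```python
    import heapq
    from collections import defaultdict

    def predict_demand(program, history, default=1.0):
        """Predict a program's resource demand from the mean of its past runs (a stand-in for the
        statistical pattern-recognition predictor described in the abstract)."""
        runs = history.get(program)
        return sum(runs) / len(runs) if runs else default

    def place(jobs, history, n_nodes=4):
        """Greedy prediction-based load sharing: send each job to the node with the least
        predicted outstanding work."""
        heap = [(0.0, node) for node in range(n_nodes)]   # (predicted load, node id)
        heapq.heapify(heap)
        assignment = defaultdict(list)
        for program in jobs:
            load, node = heapq.heappop(heap)
            assignment[node].append(program)
            heapq.heappush(heap, (load + predict_demand(program, history), node))
        return dict(assignment)

    # Toy example: past CPU-seconds per program, then a stream of arriving jobs.
    history = {"compile": [12.0, 15.0, 11.0], "render": [40.0, 38.0], "backup": [5.0]}
    jobs = ["render", "compile", "compile", "backup", "render", "edit"]
    print(place(jobs, history))
    ```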

  20. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
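
    The sketch below gives the flavour of data-based predictive control in a heavily simplified form: a one-step input-output predictor is identified from data by least squares and then used in a receding-horizon search over candidate control moves. It is a toy single-rate example under assumed first-order dynamics, not the multirate output-feedback formulation developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Unknown "plant", used only to generate identification data and to simulate the closed loop.
    def plant(y, u):
        return 0.9 * y + 0.2 * u + 0.01 * rng.standard_normal()

    # 1) Identify a one-step predictor y[k+1] ≈ a*y[k] + b*u[k] from input-output data.
    u_id = rng.uniform(-1, 1, 200)
    y_id = np.zeros(201)
    for k in range(200):
        y_id[k + 1] = plant(y_id[k], u_id[k])
    Phi = np.column_stack([y_id[:-1], u_id])
    a, b = np.linalg.lstsq(Phi, y_id[1:], rcond=None)[0]

    # 2) Receding horizon: pick the constant move minimizing a quadratic cost over the horizon.
    def control(y, r, horizon=10, rho=0.05):
        best_u, best_cost = 0.0, np.inf
        for u in np.linspace(-2, 2, 81):
            yp, cost = y, 0.0
            for _ in range(horizon):
                yp = a * yp + b * u                     # predictor identified from data
                cost += (yp - r) ** 2 + rho * u ** 2
            if cost < best_cost:
                best_u, best_cost = u, cost
        return best_u

    y, r = 0.0, 1.0
    for k in range(30):
        y = plant(y, control(y, r))
    print(f"output after 30 steps: {y:.3f} (reference {r})")
    ```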

  1. Allometric Models to Predict Aboveground Woody Biomass of Black Locust (Robinia pseudoacacia L. in Short Rotation Coppice in Previous Mining and Agricultural Areas in Germany

    Directory of Open Access Journals (Sweden)

    Christin Carl

    2017-09-01

    Full Text Available Black locust is a drought-resistant tree species with high biomass productivity during juvenility; it is able to thrive on wastelands, such as former brown coal fields and dry agricultural areas. However, research conducted on this species in such areas is limited. This paper aims to provide a basis for predicting tree woody biomass for black locust based on tree, competition, and site variables at 14 sites in northeast Germany that were previously utilized for mining or agriculture. The study areas, which are located in an area covering 320 km × 280 km, are characterized by a variety of climatic and soil conditions. Influential variables, including tree parameters, competition, and climatic parameters were considered. Allometric biomass models were employed. The findings show that the most important parameters are tree and competition variables. Different former land utilizations, such as mining or agriculture, as well as growth by cores or stumps, significantly influenced aboveground woody biomass production. The new biomass models developed as part of this study can be applied to calculate woody biomass production and carbon sequestration of Robinia pseudoacacia L. in short rotation coppices in previous mining and agricultural areas.
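
    Allometric biomass models of this kind typically take the power-law form B = β0·DBH^β1 (sometimes with height or competition indices as extra predictors), which becomes linear after a log-log transform. The sketch below fits such a model to made-up diameter/biomass pairs; the data, coefficients, and the simple bias correction are illustrative, not the values reported for black locust.

    ```python
    import numpy as np

    # Hypothetical destructive-sampling data: diameter at breast height (cm) and dry woody biomass (kg).
    dbh = np.array([2.1, 3.4, 4.0, 5.2, 6.8, 7.5, 9.1, 10.4, 12.0])
    biomass = np.array([0.9, 2.4, 3.5, 6.1, 11.0, 14.2, 22.5, 30.8, 43.0])

    # Log-log ordinary least squares: ln(B) = ln(b0) + b1 * ln(DBH).
    X = np.column_stack([np.ones_like(dbh), np.log(dbh)])
    coef, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
    ln_b0, b1 = coef

    # Baskerville-type correction for the back-transformation bias of the log model.
    resid = np.log(biomass) - X @ coef
    b0 = np.exp(ln_b0) * np.exp(resid.var(ddof=2) / 2)

    predict = lambda d: b0 * d ** b1
    print(f"B = {b0:.3f} * DBH^{b1:.3f};  predicted biomass at DBH = 8 cm: {predict(8.0):.1f} kg")
    ```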

  2. Analysis of Product Buying Decision on Lazada E-commerce based on Previous Buyers’ Comments

    Directory of Open Access Journals (Sweden)

    Neil Aldrin

    2017-06-01

    Full Text Available The aims of the present research are: (1) to establish whether a product buying decision occurs, (2) to understand how the buying decision occurs among Lazada e-commerce customers, and (3) to examine how previous buyers' comments can increase product buying decisions on Lazada e-commerce. The research uses a qualitative method, reviewing prior studies and synthesizing their assumptions and findings so that further analyses can broaden the ideas and opinions involved. The results show that a product with many ratings and reviews triggers other buyers to purchase it. The conclusion is that the buying decision emerges from several stages preceding the decision: problem recognition, identification of needs, information gathering, evaluation of alternatives, and post-purchase evaluation. Across these stages, buying decisions on Lazada e-commerce are supported by price, promotion, service, and brand.

  3. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but could learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using the complex numerical forecasting model that would occupy large computation resources, be time-consuming and which has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  4. Predicting Liaison: an Example-Based Approach

    NARCIS (Netherlands)

    Greefhorst, A.P.M.; Bosch, A.P.J. van den

    2016-01-01

    Predicting liaison in French is a non-trivial problem to model. We compare a memory-based machine-learning algorithm with a rule-based baseline. The memory-based learner is trained to predict whether liaison occurs between two words on the basis of lexical, orthographic, morphosyntactic, and

  5. Trust-based collective view prediction

    CERN Document Server

    Luo, Tiejian; Xu, Guandong; Zhou, Jia

    2013-01-01

    Collective view prediction is to judge the opinions of an active web user based on unknown elements by referring to the collective mind of the whole community. Content-based recommendation and collaborative filtering are two mainstream collective view prediction techniques. They generate predictions by analyzing the text features of the target object or the similarity of users' past behaviors. Still, these techniques are vulnerable to the artificially-injected noise data, because they are not able to judge the reliability and credibility of the information sources. Trust-based Collective View

  6. Time since start of first-line therapy as a predictive clinical marker for nintedanib in patients with previously treated non-small cell lung cancer

    DEFF Research Database (Denmark)

    Gaschler-Markefski, Birgit; Sikken, Patricia; Heymach, John V

    2017-01-01

    INTRODUCTION: No predictive clinical or genetic markers have been identified or validated for antiangiogenic agents in lung cancer. We aimed to identify a predictive clinical marker of benefit for nintedanib, an angiokinase inhibitor, using data from two large second-line non-small cell lung cancer...... Phase III trials (LUME-Lung 1 [LL1] and LUME-Lung 2). METHODS: Predictive marker identification was conducted in a multi-step process using data from both trials; a hypothesis was generated, confirmed and validated. Statistical analyses included a stepwise selection approach, a recursive partitioning...... method and the evaluation of HRs, including treatment-by-covariate interactions. The marker was finally validated using a prospectively defined hierarchical testing procedure and treatment-by-covariate interaction for overall survival (OS) based on LL1. RESULTS: Time since start of first-line therapy...

  7. Estimating the effect of current, previous and never use of drugs in studies based on prescription registries

    DEFF Research Database (Denmark)

    Nielsen, Lars Hougaard; Løkkegaard, Ellen; Andreasen, Anne Helms

    2009-01-01

    PURPOSE: Many studies which investigate the effect of drugs categorize the exposure variable into never, current, and previous use of the study drug. When prescription registries are used to make this categorization, the exposure variable possibly gets misclassified since the registries do not carry any information on the time of discontinuation of treatment. In this study, we investigated the amount of misclassification of exposure (never, current, previous use) to hormone therapy (HT) when the exposure variable was based on prescription data. Furthermore, we evaluated the significance of this misclassification for analysing the risk of breast cancer. MATERIALS AND METHODS: Prescription data were obtained from the Danish Registry of Medicinal Products Statistics and we applied various methods to approximate treatment episodes. We analysed the duration of HT episodes to study the ability to identify...

  8. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role in the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between
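
    A minimal way to see the ensemble idea is to combine dot-bracket predictions from several programs by voting on individual base pairs. The sketch below does exactly that on hypothetical predictions; it is not the AveRNA weighting scheme, and for brevity it only enforces that each base pairs at most once (pseudoknot-free inputs are assumed).

    ```python
    from collections import Counter

    def pairs(dotbracket):
        """Extract base pairs (i, j) from a pseudoknot-free dot-bracket string."""
        stack, out = [], set()
        for i, c in enumerate(dotbracket):
            if c == "(":
                stack.append(i)
            elif c == ")":
                out.add((stack.pop(), i))
        return out

    def ensemble(predictions, min_votes=None):
        """Keep base pairs supported by at least half of the predictors (greedy by vote count)."""
        if min_votes is None:
            min_votes = len(predictions) / 2
        votes = Counter(p for pred in predictions for p in pairs(pred))
        used, kept = set(), []
        for (i, j), v in votes.most_common():
            if v >= min_votes and i not in used and j not in used:
                kept.append((i, j))
                used.update((i, j))
        structure = ["."] * len(predictions[0])
        for i, j in kept:
            structure[i], structure[j] = "(", ")"
        return "".join(structure)

    # Hypothetical predictions of the same 12-nt sequence by three different methods.
    preds = ["((((....))))", "(((......)))", "((((....))))"]
    print(ensemble(preds))   # -> ((((....))))
    ```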

  9. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

    This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related...... to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using...... a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well ( ρ > 0.95) in predicting...

  10. Calorimeter prediction based on multiple exponentials

    International Nuclear Information System (INIS)

    Smith, M.K.; Bracken, D.S.

    2002-01-01

    Calorimetry allows very precise measurements of nuclear material to be carried out, but it also requires relatively long measurement times to do so. The ability to accurately predict the equilibrium response of a calorimeter would significantly reduce the amount of time required for calorimetric assays. An algorithm has been developed that is effective at predicting the equilibrium response. This multi-exponential prediction algorithm is based on an iterative technique using commercial fitting routines that fit a constant plus a variable number of exponential terms to calorimeter data. Details of the implementation and the results of trials on a large number of calorimeter data sets will be presented
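
    The sketch below mimics the described approach in its simplest form: a constant plus a small number of exponential terms is fitted to the early part of a synthetic calorimeter trace with a standard curve fitter, and the constant term is taken as the predicted equilibrium response. The parameter values and the two-component truth model are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic calorimeter trace: equilibrium power 2.5 W approached with fast and slow components.
    rng = np.random.default_rng(2)
    t = np.arange(0.0, 4.0, 0.05)                         # hours
    truth = 2.5 - 1.8 * np.exp(-t / 0.5) - 0.7 * np.exp(-t / 3.0)
    signal = truth + 0.005 * rng.standard_normal(t.size)

    def model(t, c, a1, k1, a2, k2):
        """Constant plus two exponential terms; the constant is the equilibrium prediction."""
        return c + a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    # Fit using only the first 2 hours of data, as if cutting the assay short.
    mask = t <= 2.0
    p0 = [signal[mask][-1], -1.5, 2.0, -0.5, 0.3]
    popt, _ = curve_fit(model, t[mask], signal[mask], p0=p0, maxfev=20000)

    print(f"predicted equilibrium: {popt[0]:.3f} W "
          f"(true value 2.500 W, last observed value {signal[mask][-1]:.3f} W)")
    ```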

  11. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
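
    The comparison above boils down to fitting nested logistic models and comparing their AUCs on a held-out set. The sketch below reproduces that workflow on synthetic data with scikit-learn; the predictors and effect sizes are invented and only stand in for the NHATS variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000
    age = rng.normal(75, 7, n)
    balance_problem = rng.binomial(1, 0.3, n)      # self-reported balance/coordination problem
    prior_fall = rng.binomial(1, 0.25, n)          # previous fall history
    gait_speed = rng.normal(0.9, 0.2, n)           # physical performance measure (m/s)
    logit = -3 + 0.03 * (age - 75) + 0.9 * balance_problem + 1.1 * prior_fall - 0.5 * (gait_speed - 0.9)
    fall = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_simple = np.column_stack([age, balance_problem, prior_fall])
    X_full = np.column_stack([age, balance_problem, prior_fall, gait_speed])

    idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)
    for name, X in [("self-report only", X_simple), ("plus performance", X_full)]:
        model = LogisticRegression(max_iter=1000).fit(X[idx_train], fall[idx_train])
        auc = roc_auc_score(fall[idx_test], model.predict_proba(X[idx_test])[:, 1])
        print(f"{name:18s} AUC = {auc:.3f}")
    ```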

  12. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate...... principles such as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  13. Knowledge-based Fragment Binding Prediction

    Science.gov (United States)

    Tang, Grace W.; Altman, Russ B.

    2014-01-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971

  14. Predicting Learned Helplessness Based on Personality

    Science.gov (United States)

    Maadikhah, Elham; Erfani, Nasrollah

    2014-01-01

    Learned helplessness as a negative motivational state can latently underlie repeated failures and create negative feelings toward the education as well as depression in students and other members of a society. The purpose of this paper is to predict learned helplessness based on students' personality traits. The research is a predictive…

  15. Evolutionary Analysis Predicts Sensitive Positions of MMP20 and Validates Newly- and Previously-Identified MMP20 Mutations Causing Amelogenesis Imperfecta.

    Science.gov (United States)

    Gasse, Barbara; Prasad, Megana; Delgado, Sidney; Huckert, Mathilde; Kawczynski, Marzena; Garret-Bernardin, Annelyse; Lopez-Cazaux, Serena; Bailleul-Forestier, Isabelle; Manière, Marie-Cécile; Stoetzel, Corinne; Bloch-Zupan, Agnès; Sire, Jean-Yves

    2017-01-01

    Amelogenesis imperfecta (AI) designates a group of genetic diseases characterized by a large range of enamel disorders causing important social and health problems. These defects can result from mutations in enamel matrix proteins or protease encoding genes. A range of mutations in the enamel cleavage enzyme matrix metalloproteinase-20 gene ( MMP20 ) produce enamel defects of varying severity. To address how various alterations produce a range of AI phenotypes, we performed a targeted analysis to find MMP20 mutations in French patients diagnosed with non-syndromic AI. Genomic DNA was isolated from saliva and MMP20 exons and exon-intron boundaries sequenced. We identified several homozygous or heterozygous mutations, putatively involved in the AI phenotypes. To validate missense mutations and predict sensitive positions in the MMP20 sequence, we evolutionarily compared 75 sequences extracted from the public databases using the Datamonkey webserver. These sequences were representative of mammalian lineages, covering more than 150 million years of evolution. This analysis allowed us to find 324 sensitive positions (out of the 483 MMP20 residues), pinpoint functionally important domains, and build an evolutionary chart of important conserved MMP20 regions. This is an efficient tool to identify new- and previously-identified mutations. We thus identified six functional MMP20 mutations in unrelated families, finding two novel mutated sites. The genotypes and phenotypes of these six mutations are described and compared. To date, 13 MMP20 mutations causing AI have been reported, making these genotypes and associated hypomature enamel phenotypes the most frequent in AI.

  16. Highway traffic noise prediction based on GIS

    Science.gov (United States)

    Zhao, Jianghua; Qin, Qiming

    2014-05-01

    Before building a new road, the traffic noise generated by vehicles needs to be predicted. Traditional traffic noise prediction methods are tied to specific receiver locations; they are time-consuming and costly, and their results cannot easily be visualized. A Geographical Information System (GIS) can not only remove the burden of manual data processing but also provide noise values at any point. The paper selects a road segment from Wenxi to Heyang. Based on the geographical overview of the study area and a comparison of several models, we combine the JTG B03-2006 model and the HJ 2.4-2009 model to predict the traffic noise, depending on the circumstances. Finally, we interpolate the noise values at each prediction point and generate noise contours. By overlaying village data on the noise contour layer, we obtain thematic maps. The use of GIS for road traffic noise prediction greatly assists decision-makers thanks to GIS spatial analysis functions and visualization capabilities. Districts where noise is excessive can be seen clearly, making it convenient to optimize the road alignment and take noise reduction measures such as installing sound barriers and relocating villages.
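
    The interpolation step described above can be illustrated with a simple inverse-distance-weighting (IDW) routine that turns scattered predicted noise levels into a grid from which contours can be drawn. The coordinates and levels below are made up, and real GIS work would normally use the platform's own interpolation tools.

    ```python
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0, eps=1e-9):
        """Inverse-distance-weighted interpolation of scattered values onto query points."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w @ values) / w.sum(axis=1)

    # Hypothetical prediction points along a road (x, y in metres) and modelled noise levels (dB(A)).
    pts = np.array([[0, 0], [200, 40], [400, 60], [600, 30], [800, 0]], dtype=float)
    leq = np.array([72.0, 70.5, 69.0, 70.0, 71.5])

    # Interpolate onto a regular grid covering the corridor; contours would be drawn from `grid`.
    gx, gy = np.meshgrid(np.linspace(0, 800, 81), np.linspace(-100, 150, 26))
    grid = idw(pts, leq, np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    print(f"interpolated noise at grid centre: {grid[13, 40]:.1f} dB(A)")
    ```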

  17. NMR-based phytochemical analysis of Vitis vinifera cv Falanghina leaves. Characterization of a previously undescribed biflavonoid with antiproliferative activity.

    Science.gov (United States)

    Tartaglione, Luciana; Gambuti, Angelita; De Cicco, Paola; Ercolano, Giuseppe; Ianaro, Angela; Taglialatela-Scafati, Orazio; Moio, Luigi; Forino, Martino

    2018-03-01

    Vitis vinifera cv Falanghina is an ancient grape variety of Southern Italy. A thorough phytochemical analysis of the Falanghina leaves was conducted to investigate its specialised metabolite content. Along with already known molecules, such as caftaric acid, quercetin-3-O-β-d-glucopyranoside, quercetin-3-O-β-d-glucuronide, kaempferol-3-O-β-d-glucopyranoside and kaempferol-3-O-β-d-glucuronide, a previously undescribed biflavonoid was identified. For this last compound, a moderate bioactivity against metastatic melanoma cells proliferation was discovered. This datum can be of some interest to researchers studying human melanoma. The high content in antioxidant glycosylated flavonoids supports the exploitation of grape vine leaves as an inexpensive source of natural products for the food industry and for both pharmaceutical and nutraceutical companies. Additionally, this study offers important insights into the plant physiology, thus prompting possible technological researches of genetic selection based on the vine adaptation to specific pedo-climatic environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Evolutionary Analysis Predicts Sensitive Positions of MMP20 and Validates Newly- and Previously-Identified MMP20 Mutations Causing Amelogenesis Imperfecta

    Directory of Open Access Journals (Sweden)

    Barbara Gasse

    2017-06-01

    Full Text Available Amelogenesis imperfecta (AI) designates a group of genetic diseases characterized by a large range of enamel disorders causing important social and health problems. These defects can result from mutations in enamel matrix proteins or protease encoding genes. A range of mutations in the enamel cleavage enzyme matrix metalloproteinase-20 gene (MMP20) produce enamel defects of varying severity. To address how various alterations produce a range of AI phenotypes, we performed a targeted analysis to find MMP20 mutations in French patients diagnosed with non-syndromic AI. Genomic DNA was isolated from saliva and MMP20 exons and exon-intron boundaries sequenced. We identified several homozygous or heterozygous mutations, putatively involved in the AI phenotypes. To validate missense mutations and predict sensitive positions in the MMP20 sequence, we evolutionarily compared 75 sequences extracted from the public databases using the Datamonkey webserver. These sequences were representative of mammalian lineages, covering more than 150 million years of evolution. This analysis allowed us to find 324 sensitive positions (out of the 483 MMP20 residues), pinpoint functionally important domains, and build an evolutionary chart of important conserved MMP20 regions. This is an efficient tool to identify new- and previously-identified mutations. We thus identified six functional MMP20 mutations in unrelated families, finding two novel mutated sites. The genotypes and phenotypes of these six mutations are described and compared. To date, 13 MMP20 mutations causing AI have been reported, making these genotypes and associated hypomature enamel phenotypes the most frequent in AI.

  19. Dst Prediction Based on Solar Wind Parameters

    Directory of Open Access Journals (Sweden)

    Yoon-Kyung Park

    2009-12-01

    Full Text Available We reevaluate the Burton equation (Burton et al. 1975) for predicting the Dst index using high-quality hourly solar wind data supplied by the ACE satellite for the period from 1998 to 2006. Sixty magnetic storms with monotonically decreasing main phase are selected. In order to determine the injection term (Q) and the decay time (τ) of the equation, we examine the relationships between Dst* and VBs, ΔDst* and VBs, and ΔDst* and Dst* during the magnetic storms. For this analysis, we take into account one hour of propagation time from the ACE satellite to the magnetopause, and a half hour of response time of the magnetosphere/ring current to the solar wind forcing. The injection term is found to be Q (nT/h) = -3.56 VBs for VBs > 0.5 mV/m and Q (nT/h) = 0 for VBs ≤ 0.5 mV/m. The decay time τ (in hours) is estimated as 0.060 Dst* + 16.65 for Dst* > -175 nT and 6.15 hours for Dst* ≤ -175 nT. Based on these empirical relationships, we predict the 60 magnetic storms and find that the correlation coefficient between the observed and predicted Dst* is 0.88. To evaluate the performance of our prediction scheme, the 60 magnetic storms are predicted again using the models by Burton et al. (1975) and O'Brien & McPherron (2000a). The correlation coefficients thus obtained are 0.85, the same value for both models. In this respect, our model is slightly improved over the other two models as far as the correlation coefficient is concerned. In particular, our model does a better job than the other two models in predicting intense magnetic storms (Dst* ≲ -200 nT).
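
    The empirical injection and decay terms quoted above plug directly into the Burton-type balance equation dDst*/dt = Q(t) − Dst*/τ, which can be integrated hour by hour. The sketch below does this for a synthetic VBs time series; the storm profile is invented, and the one-hour propagation plus half-hour response delays are reduced to a simple index shift.

    ```python
    import numpy as np

    def injection(vbs):
        """Injection term Q in nT/h (VBs in mV/m), per the empirical fit quoted above."""
        return np.where(vbs > 0.5, -3.56 * vbs, 0.0)

    def decay_time(dst_star):
        """Decay time tau in hours, per the empirical fit quoted above."""
        return 0.060 * dst_star + 16.65 if dst_star > -175.0 else 6.15

    def predict_dst(vbs_series, dst0=0.0, delay_hours=1):
        """Hourly integration of dDst*/dt = Q - Dst*/tau, with a crude one-hour delay shift."""
        dst, out = dst0, []
        q = injection(np.roll(vbs_series, delay_hours))   # solar wind acts one hour later
        for q_k in q:
            dst = dst + q_k - dst / decay_time(dst)       # dt = 1 hour
            out.append(dst)
        return np.array(out)

    # Synthetic storm: a few hours of strong southward IMF (VBs peaking at 6 mV/m), then recovery.
    hours = np.arange(72)
    vbs = np.clip(6.0 * np.exp(-((hours - 10) / 4.0) ** 2), 0.0, None)
    dst_pred = predict_dst(vbs)
    print(f"minimum predicted Dst*: {dst_pred.min():.0f} nT at hour {dst_pred.argmin()}")
    ```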

  20. Ontology-based prediction of surgical events in laparoscopic surgery

    Science.gov (United States)

    Katić, Darko; Wekerle, Anna-Laura; Gärtner, Fabian; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2013-03-01

    Context-aware technologies have great potential to help surgeons during laparoscopic interventions. Their underlying idea is to create systems which can adapt their assistance functions automatically to the situation in the OR, thus relieving surgeons from the burden of managing computer assisted surgery devices manually. To this purpose, a certain kind of understanding of the current situation in the OR is essential. Beyond that, anticipatory knowledge of incoming events is beneficial, e.g. for early warnings of imminent risk situations. To achieve the goal of predicting surgical events based on previously observed ones, we developed a language to describe surgeries and surgical events using Description Logics and integrated it with methods from computational linguistics. Using n-Grams to compute probabilities of followup events, we are able to make sensible predictions of upcoming events in real-time. The system was evaluated on professionally recorded and labeled surgeries and showed an average prediction rate of 80%.
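
    The n-gram part of the approach can be sketched in a few lines: bigram counts over recorded event sequences give conditional probabilities of the next surgical event given the current one. The phase names and training sequences below are invented placeholders, and the ontology layer is omitted.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical labelled event sequences from recorded laparoscopic procedures.
    training = [
        ["incision", "trocar_insertion", "dissection", "clipping", "cutting", "retrieval", "closure"],
        ["incision", "trocar_insertion", "dissection", "coagulation", "dissection", "clipping",
         "cutting", "retrieval", "closure"],
        ["incision", "trocar_insertion", "dissection", "clipping", "cutting", "irrigation",
         "retrieval", "closure"],
    ]

    # Bigram model: estimate P(next | current) from counts.
    bigrams = defaultdict(Counter)
    for seq in training:
        for cur, nxt in zip(seq, seq[1:]):
            bigrams[cur][nxt] += 1

    def predict_next(current_event, top_k=2):
        """Return the top-k most probable follow-up events with their estimated probabilities."""
        counts = bigrams[current_event]
        total = sum(counts.values())
        return [(evt, c / total) for evt, c in counts.most_common(top_k)]

    print(predict_next("dissection"))   # -> [('clipping', 0.75), ('coagulation', 0.25)]
    ```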

  1. Wavelet-based prediction of oil prices

    International Nuclear Information System (INIS)

    Yousefi, Shahriar; Weinreich, Ilona; Reinarz, Dominik

    2005-01-01

    This paper illustrates an application of wavelets as a possible vehicle for investigating the issue of market efficiency in futures markets for oil. The paper provides a short introduction to the wavelets and a few interesting wavelet-based contributions in economics and finance are briefly reviewed. A wavelet-based prediction procedure is introduced and market data on crude oil is used to provide forecasts over different forecasting horizons. The results are compared with data from futures markets for oil and the relative performance of this procedure is used to investigate whether futures markets are efficiently priced

  2. Link prediction based on nonequilibrium cooperation effect

    Science.gov (United States)

    Li, Lanxi; Zhu, Xuzhen; Tian, Hui

    2018-04-01

    Link prediction in complex networks has become a common focus of many researchers. But most existing methods concentrate on neighbors, and rarely consider degree heterogeneity of two endpoints. Node degree represents the importance or status of endpoints. We describe the large-degree heterogeneity as the nonequilibrium between nodes. This nonequilibrium facilitates a stable cooperation between endpoints, so that two endpoints with large-degree heterogeneity tend to connect stably. We name such a phenomenon as the nonequilibrium cooperation effect. Therefore, this paper proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. Theoretical analysis will be processed in advance, and at the end, experiments will be performed in 12 real-world networks to compare the mainstream methods with our indices in the network through numerical analysis.
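
    To make the idea concrete, the sketch below scores non-adjacent node pairs with a simple illustrative index that multiplies the common-neighbour count by a degree-heterogeneity factor |k_x − k_y| / (k_x + k_y); this is a stand-in for, not a reproduction of, the index proposed in the paper.

    ```python
    import networkx as nx

    def heterogeneity_scores(G):
        """Score unconnected pairs: common neighbours weighted by degree heterogeneity."""
        scores = {}
        for x, y in nx.non_edges(G):
            cn = len(list(nx.common_neighbors(G, x, y)))
            kx, ky = G.degree(x), G.degree(y)
            het = abs(kx - ky) / (kx + ky) if kx + ky > 0 else 0.0
            scores[(x, y)] = cn * het
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Toy network: a hub connected to several low-degree nodes plus a few peripheral links.
    G = nx.Graph([(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (3, 5), (4, 5), (2, 6)])
    for pair, score in heterogeneity_scores(G)[:3]:
        print(pair, round(score, 3))
    ```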

  3. Sunburn and sun-protective behaviors among adults with and without previous nonmelanoma skin cancer: a population-based study

    Science.gov (United States)

    Fischer, Alexander H.; Wang, Timothy S.; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L.

    2016-01-01

    Background Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit UV exposure. Objective To determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. Methods We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (95% CI), taking into account the complex survey design. Results Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% versus 27.0%; aPOR=1.41; 1.16–1.71), long sleeves (20.5% versus 7.7%; aPOR=1.55; 1.21–1.98), a wide-brimmed hat (26.1% versus 10.5%; aPOR=1.52; 1.24–1.87), and sunscreen (53.7% versus 33.1%; aPOR=2.11; 95% CI=1.73–2.59), but did not have significantly lower odds of recent sunburn (29.7% versus 40.7%; aPOR=0.95; 0.77–1.17). Among subjects with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Limitations Self-reported cross-sectional data and unavailable information quantifying regular sun exposure. Conclusion Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. PMID:27198078

  4. TWT transmitter fault prediction based on ANFIS

    Science.gov (United States)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the travelling-wave tube (TWT) cathode is a common transmitter fault. In this paper, a model based on a set of key TWT parameters is proposed. By choosing proper parameters and training an adaptive network-based fuzzy inference model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for the overall health assessment of TWT transmitters.

  5. New population-based exome data question the pathogenicity of some genetic variants previously associated with Marfan syndrome

    DEFF Research Database (Denmark)

    Yang, Ren-Qiang; Jabbari, Javad; Cheng, Xiao-Shu

    2014-01-01

    BACKGROUND: Marfan syndrome (MFS) is a rare autosomal dominantly inherited connective tissue disorder with an estimated prevalence of 1:5,000. More than 1000 variants have been previously reported to be associated with MFS. However, the disease-causing effect of these variants may be questionable...

  6. Bankruptcy Prediction Based on the Autonomy Ratio

    Directory of Open Access Journals (Sweden)

    Daniel Brîndescu Olariu

    2016-11-01

    Full Text Available The theory and practice of financial ratio analysis suggest a negative correlation between the autonomy ratio and bankruptcy risk. Previous studies conducted on a sample of companies from Timis County (the largest county in Romania) confirm this hypothesis and recommend the autonomy ratio as a useful tool for measuring bankruptcy risk two years in advance. The objective of the current research was to develop a methodology for measuring bankruptcy risk applicable to the companies of Timis County (specific methodologies are considered necessary for each region). The target population consisted of all companies in Timis County with annual sales of over 10,000 lei (approx. 2,200 Euros), and the research covered the entire target population. The study thus included 53,252 yearly financial statements from the period 2007-2010. The results allow benchmarks to be set and a methodology of analysis to be configured. The proposed methodology cannot predict the state of a company with perfect accuracy, but it allows the level of risk to which the company is exposed to be assessed.

  7. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    Science.gov (United States)

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.

  8. New population-based exome data are questioning the pathogenicity of previously cardiomyopathy-associated genetic variants

    DEFF Research Database (Denmark)

    Andreasen, Charlotte Hartig; Nielsen, Jonas B; Refsgaard, Lena

    2013-01-01

    Cardiomyopathies are a heterogeneous group of diseases with various etiologies. We focused on three genetically determined cardiomyopathies: hypertrophic (HCM), dilated (DCM), and arrhythmogenic right ventricular cardiomyopathy (ARVC). Eighty-four genes have so far been associated with these cardiomyopathies, but the disease-causing effect of reported variants is often dubious. In order to identify possible false-positive variants, we investigated the prevalence of previously reported cardiomyopathy-associated variants in recently published exome data. We searched for reported missense and nonsense variants in the NHLBI-Go Exome Sequencing Project (ESP) containing exome data from 6500 individuals. In ESP, we identified 94 out of 687 (14%) variants previously associated with HCM, 58 out of 337 (17%) variants associated with DCM, and 38 out of 209 (18%) variants associated with ARVC...

  9. Reasons for placement of restorations on previously unrestored tooth surfaces by dentists in The Dental Practice-Based Research Network

    DEFF Research Database (Denmark)

    Nascimento, Marcelle M; Gordan, Valeria V; Qvist, Vibeke

    2010-01-01

    The authors conducted a study to identify and quantify the reasons used by dentists in The Dental Practice-Based Research Network (DPBRN) for placing restorations on unrestored permanent tooth surfaces and the dental materials they used in doing so....

  10. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market is characterized by high profit and high risk, so research on stock market analysis and prediction has attracted considerable attention. The stock price trend is a complex nonlinear function, so the price has a certain degree of predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…
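
    A back-propagation network for this task is usually trained to map a window of recent prices to the next value. The sketch below does this with scikit-learn's multilayer perceptron on a synthetic price series; the window length, network size, and data are illustrative assumptions rather than the article's configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    price = 100 + np.cumsum(0.1 + 0.8 * rng.standard_normal(600))    # synthetic closing prices

    window = 10                                                      # use the last 10 days as inputs
    X = np.array([price[i:i + window] for i in range(len(price) - window)])
    y = price[window:]
    split = 500                                                      # simple chronological split

    # Feed-forward network trained with backpropagation (sklearn's MLP), inputs standardized.
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
    model.fit(X[:split], y[:split])

    mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
    print(f"mean absolute error on the last {len(y) - split} days: {mae:.2f}")
    ```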

  11. A probabilistic fragment-based protein structure prediction algorithm.

    Directory of Open Access Journals (Sweden)

    David Simoncini

    Full Text Available Conformational sampling is one of the bottlenecks in fragment-based protein structure prediction approaches. They generally start with a coarse-grained optimization where mainchain atoms and centroids of side chains are considered, followed by a fine-grained optimization with an all-atom representation of proteins. It is during this coarse-grained phase that fragment-based methods sample intensely the conformational space. If the native-like region is sampled more, the accuracy of the final all-atom predictions may be improved accordingly. In this work we present EdaFold, a new method for fragment-based protein structure prediction based on an Estimation of Distribution Algorithm. Fragment-based approaches build protein models by assembling short fragments from known protein structures. Whereas the probability mass functions over the fragment libraries are uniform in the usual case, we propose an algorithm that learns from previously generated decoys and steers the search toward native-like regions. A comparison with Rosetta AbInitio protocol shows that EdaFold is able to generate models with lower energies and to enhance the percentage of near-native coarse-grained decoys on a benchmark of [Formula: see text] proteins. The best coarse-grained models produced by both methods were refined into all-atom models and used in molecular replacement. All atom decoys produced out of EdaFold's decoy set reach high enough accuracy to solve the crystallographic phase problem by molecular replacement for some test proteins. EdaFold showed a higher success rate in molecular replacement when compared to Rosetta. Our study suggests that improving low resolution coarse-grained decoys allows computational methods to avoid subsequent sampling issues during all-atom refinement and to produce better all-atom models. EdaFold can be downloaded from http://www.riken.jp/zhangiru/software.html [corrected].

  12. Biotin IgM Antibodies in Human Blood: A Previously Unknown Factor Eliciting False Results in Biotinylation-Based Immunoassays

    Science.gov (United States)

    Chen, Tingting; Hedman, Lea; Mattila, Petri S.; Jartti, Laura; Jartti, Tuomas; Ruuskanen, Olli; Söderlund-Venermo, Maria; Hedman, Klaus

    2012-01-01

    Biotin is an essential vitamin that binds streptavidin or avidin with high affinity and specificity. As biotin is a small molecule that can be linked to proteins without affecting their biological activity, biotinylation is applied widely in biochemical assays. In our laboratory, IgM enzyme immuno assays (EIAs) of µ-capture format have been set up against many viruses, using as antigen biotinylated virus like particles (VLPs) detected by horseradish peroxidase-conjugated streptavidin. We recently encountered one serum sample reacting with the biotinylated VLP but not with the unbiotinylated one, suggesting in human sera the occurrence of biotin-reactive antibodies. In the present study, we search the general population (612 serum samples from adults and 678 from children) for IgM antibodies reactive with biotin and develop an indirect EIA for quantification of their levels and assessment of their seroprevalence. These IgM antibodies were present in 3% adults regardless of age, but were rarely found in children. The adverse effects of the biotin IgM on biotinylation-based immunoassays were assessed, including four inhouse and one commercial virus IgM EIAs, showing that biotin IgM do cause false positivities. The biotin can not bind IgM and streptavidin or avidin simultaneously, suggesting that these biotin-interactive compounds compete for the common binding site. In competitive inhibition assays, the affinities of biotin IgM antibodies ranged from 2.1×10−3 to 1.7×10−4 mol/L. This is the first report on biotin antibodies found in humans, providing new information on biotinylation-based immunoassays as well as new insights into the biomedical effects of vitamins. PMID:22879954

  13. Prediction of mortality based on facial characteristics

    Directory of Open Access Journals (Sweden)

    Arnaud Delorme

    2016-05-01

    Full Text Available Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person’s photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief examination of facial photographs. All photos used in the experiment were transformed into a uniform gray scale and then counterbalanced across eight categories: gender, age, gaze direction, glasses, head position, smile, hair color, and image resolution. Participants examined 404 photographs displayed on a computer monitor, one photo at a time, each shown for a maximum of 8 seconds. Half of the individuals in the photos were deceased, and half were alive at the time the experiment was conducted. Participants were asked to press a button to indicate whether they thought the person in a photo was living or deceased. Overall mean accuracy on this task was 53.8%, where 50% was expected by chance (p < 0.004, two-tailed). Statistically significant accuracy was independently obtained in 5 of the 12 participants. We also collected 32-channel electrophysiological recordings and observed a robust difference between images of deceased individuals correctly vs. incorrectly classified in the early event-related potential at 100 ms post-stimulus onset. Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims about clairvoyance and warrant further investigation.

  14. Hemoglobin-Based Oxygen Carrier (HBOC) Development in Trauma: Previous Regulatory Challenges, Lessons Learned, and a Path Forward.

    Science.gov (United States)

    Keipert, Peter E

    2017-01-01

    Historically, hemoglobin-based oxygen carriers (HBOCs) were being developed as "blood substitutes," despite their transient circulatory half-life (~ 24 h) vs. transfused red blood cells (RBCs). More recently, HBOC commercial development focused on "oxygen therapeutic" indications to provide a temporary oxygenation bridge until medical or surgical interventions (including RBC transfusion, if required) can be initiated. This included the early trauma trials with HemAssist ® (BAXTER), Hemopure ® (BIOPURE) and PolyHeme ® (NORTHFIELD) for resuscitating hypotensive shock. These trials all failed due to safety concerns (e.g., cardiac events, mortality) and certain protocol design limitations. In 2008 the Food and Drug Administration (FDA) put all HBOC trials in the US on clinical hold due to the unfavorable benefit:risk profile demonstrated by various HBOCs in different clinical studies in a meta-analysis published by Natanson et al. (2008). During standard resuscitation in trauma, organ dysfunction and failure can occur due to ischemia in critical tissues, which can be detected by the degree of lactic acidosis. SANGART'S Phase 2 trauma program with MP4OX therefore added lactate >5 mmol/L as an inclusion criterion to enroll patients who had lost sufficient blood to cause a tissue oxygen debt. This was key to the successful conduct of their Phase 2 program (ex-US, from 2009 to 2012) to evaluate MP4OX as an adjunct to standard fluid resuscitation and transfusion of RBCs. In 2013, SANGART shared their Phase 2b results with the FDA, and succeeded in getting the FDA to agree that a planned Phase 2c higher dose comparison study of MP4OX in trauma could include clinical sites in the US. Unfortunately, SANGART failed to secure new funding and was forced to terminate development and operations in Dec 2013, even though a regulatory path forward with FDA approval to proceed in trauma had been achieved.

  15. Protein structure based prediction of catalytic residues.

    Science.gov (United States)

    Fajardo, J Eduardo; Fiser, Andras

    2013-02-22

    Worldwide structural genomics projects continue to release new protein structures at an unprecedented pace, so far nearly 6000, but only about 60% of these proteins have any sort of functional annotation. We explored a range of features that can be used for the prediction of functional residues given a known three-dimensional structure. These features include various centrality measures of nodes in graphs of interacting residues: closeness, betweenness and page-rank centrality. We also analyzed the distance of functional amino acids to the general center of mass (GCM) of the structure, relative solvent accessibility (RSA), and the use of relative entropy as a measure of sequence conservation. From the selected features, neural networks were trained to identify catalytic residues. We found that using distance to the GCM together with amino acid type provides a good discriminant function, when combined independently with sequence conservation. Using an independent test set of 29 annotated protein structures, the method returned 411 of the initial 9262 residues as the most likely to be involved in function. The output 411 residues contain 70 of the annotated 111 catalytic residues. This represents an approximately 14-fold enrichment of catalytic residues on the entire input set (corresponding to a sensitivity of 63% and a precision of 17%), a performance competitive with that of other state-of-the-art methods. We found that several of the graph-based measures utilize the same underlying feature of protein structures, which can be simply and more effectively captured with the distance to GCM definition. This also has the added advantage of simplicity and easy implementation. Meanwhile, sequence conservation remains by far the most influential feature in identifying functional residues. We also found that due to the rapid changes in size and composition of sequence databases, conservation calculations must be recalibrated for specific reference databases.
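
    Most of the structural features listed above (residue-graph centralities and distance to the general centre of mass) are straightforward to compute once a contact graph is built from Cα coordinates. The sketch below does so for a toy set of coordinates with networkx; the 8 Å contact cutoff and the random coordinates are assumptions for illustration, and conservation and RSA are not computed here.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    coords = rng.uniform(0, 30, size=(60, 3))     # stand-in for C-alpha coordinates of 60 residues

    # Residue interaction graph: connect residues whose C-alpha atoms lie within 8 angstroms.
    G = nx.Graph()
    G.add_nodes_from(range(len(coords)))
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if np.linalg.norm(coords[i] - coords[j]) < 8.0:
                G.add_edge(i, j)

    # Centrality features and distance to the general centre of mass (GCM).
    closeness = nx.closeness_centrality(G)
    betweenness = nx.betweenness_centrality(G)
    pagerank = nx.pagerank(G)
    gcm = coords.mean(axis=0)
    dist_gcm = np.linalg.norm(coords - gcm, axis=1)

    # Assemble a per-residue feature matrix (this is what a neural network classifier would consume).
    features = np.column_stack([
        [closeness[i] for i in G.nodes],
        [betweenness[i] for i in G.nodes],
        [pagerank[i] for i in G.nodes],
        dist_gcm,
    ])
    print("feature matrix shape:", features.shape)
    ```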

  16. Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.

    Science.gov (United States)

    Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan

    2017-08-08

    Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to do daily. This is necessary as it is important to maintain glucose levels at appropriate values to avoid not only short-term, but also long-term complications of the illness. Artificial intelligence in general and machine learning techniques in particular have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction, namely (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which uses a three-compartment model for carbohydrate and insulin dynamics. The predictors were trained and tested using data of ten patients from a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and see that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but still even the best methods produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E). We also propose an enhanced genetic programming algorithm that incorporates a three-compartment model into symbolic regression models to create smoothed time series of the original carbohydrate and insulin time series.

  17. Theoretical bases analysis of scientific prediction on marketing principles

    OpenAIRE

    A.S. Rosohata

    2012-01-01

    The article presents an overview of the categorical apparatus of scientific prediction and the theoretical foundations of scientific forecasting, which are an integral part of the effective management of economic activities. Approaches to prediction taken by scholars in different fields of social science are reviewed, and a modification of the categories of scientific prediction based on marketing principles is proposed.

  18. Speaker Prediction based on Head Orientations

    NARCIS (Netherlands)

    Rienks, R.J.; Poppe, Ronald Walter; van Otterlo, M.; Poel, Mannes; Nijholt, Antinus

    2005-01-01

    To gain insight into gaze behavior in meetings, this paper compares the results from a Naive Bayes classifier, Neural Networks and humans on speaker prediction in four-person meetings given solely the azimuth head angles. The Naive Bayes classifier scored 69.4% correctly, Neural Networks 62.3% and

  19. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scaled images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are firstly predicted with their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...

  20. Effectiveness of Ritonavir-Boosted Protease Inhibitor Monotherapy in Clinical Practice Even with Previous Virological Failures to Protease Inhibitor-Based Regimens.

    Directory of Open Access Journals (Sweden)

    Luis F López-Cortés

    Full Text Available Significant controversy still exists about ritonavir-boosted protease inhibitor monotherapy (mtPI/rtv) as a simplification strategy that has, up to now, been used to treat patients who have not experienced previous virological failure (VF) while on protease inhibitor (PI)-based regimens. We have evaluated the effectiveness of two mtPI/rtv regimens in an actual clinical practice setting, including patients who had experienced previous VF with PI-based regimens. This retrospective study analyzed 1060 HIV-infected patients with undetectable viremia that were switched to lopinavir/ritonavir or darunavir/ritonavir monotherapy. In cases in which the patient had previously experienced VF while on a PI-based regimen, the lack of major HIV protease resistance mutations to lopinavir or darunavir, respectively, was mandatory. The primary endpoint of this study was the percentage of participants with virological suppression after 96 weeks according to intention-to-treat analysis (non-complete/missing = failure). A total of 1060 patients were analyzed, including 205 with previous VF while on PI-based regimens, 90 of whom were on complex therapies due to extensive resistance. The rates of treatment effectiveness (intention-to-treat analysis) and virological efficacy (on-treatment analysis) at week 96 were 79.3% (CI95, 76.8-81.8) and 91.5% (CI95, 89.6-93.4), respectively. No relationships were found between VF and earlier VF while on PI-based regimens, the presence of major or minor protease resistance mutations, the previous time on viral suppression, CD4+ T-cell nadir, and HCV-coinfection. Genotypic resistance tests were available in 49 out of the 74 patients with VFs, and only four patients presented new major protease resistance mutations. Switching to mtPI/rtv achieves sustained virological control in most patients, even in those with previous VF on PI-based regimens, as long as no major resistance mutations are present for the administered drug.

  1. An auxiliary optimization method for complex public transit route network based on link prediction

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by missing (new) link prediction and spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on a review of previous studies, a set of summary indices and their algorithms is collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan's PTRN established by the Space R method, we found that it is a typical small-world network with a relatively large average clustering coefficient. This indicates that structural similarity-based link prediction will perform well in this network. Then, based on the link prediction experiment with the summary indices set, the three indices with maximum accuracy are selected for auxiliary optimization of Jinan's PTRN. Furthermore, these link prediction results show that the overall layout of Jinan's PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize the current PTRN but also to evaluate PTRN planning.
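    A minimal sketch of structural-similarity-based link prediction on a stand-in graph is shown below; it uses three common similarity indices from networkx (resource allocation, Jaccard, Adamic-Adar). The karate-club graph is only a placeholder for a Space R transit network, and the choice of indices is illustrative rather than the paper's selected set.

```python
import networkx as nx

# Toy stand-in for a transit network built with the Space R method
# (stops are nodes; an edge links stops served by a common route).
G = nx.karate_club_graph()
print("average clustering:", round(nx.average_clustering(G), 3))

# Score all currently missing links with three structural-similarity indices;
# high-scoring non-edges are candidate "missing" links for network optimization.
indices = {
    "resource_allocation": nx.resource_allocation_index(G),
    "jaccard": nx.jaccard_coefficient(G),
    "adamic_adar": nx.adamic_adar_index(G),
}
for name, scores in indices.items():
    top = sorted(scores, key=lambda t: t[2], reverse=True)[:3]
    print(name, [(u, v, round(p, 3)) for u, v, p in top])
```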

  2. The wind power prediction research based on mind evolutionary algorithm

    Science.gov (United States)

    Zhuang, Ling; Zhao, Xinjian; Ji, Tianming; Miao, Jingwen; Cui, Haina

    2018-04-01

    When wind power is connected to the power grid, its fluctuating, intermittent and random characteristics affect the stability of the power system. Wind power prediction can guarantee power quality and reduce the operating cost of the power system. Several traditional wind power prediction methods have limitations. On this basis, a wind power prediction method based on the Mind Evolutionary Algorithm (MEA) is put forward and a prediction model is provided. The experimental results demonstrate that MEA performs efficiently in terms of wind power prediction. The MEA method has broad prospects for engineering application.

  3. Size-based predictions of food web patterns

    DEFF Research Database (Denmark)

    Zhang, Lai; Hartvig, Martin; Knudsen, Kim

    2014-01-01

    We employ size-based theoretical arguments to derive simple analytic predictions of ecological patterns and properties of natural communities: size-spectrum exponent, maximum trophic level, and susceptibility to invasive species. The predictions are brought about by assuming that an infinite number...... of species are continuously distributed on a size-trait axis. It is, however, an open question whether such predictions are valid for a food web with a finite number of species embedded in a network structure. We address this question by comparing the size-based predictions to results from dynamic food web...... simulations with varying species richness. To this end, we develop a new size- and trait-based food web model that can be simplified into an analytically solvable size-based model. We confirm existing solutions for the size distribution and derive novel predictions for maximum trophic level and invasion...

  4. Base Oils Biodegradability Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Malika Trabelsi

    2010-02-01

    Full Text Available In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oil biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification accuracy. However, the Decision Tree technique helped uncover the most significant predictor. A simple classification rule derived from this predictor resulted in good classification accuracy. The application of this rule enables efficient classification of base oils into either low or high biodegradability classes with high accuracy. For the latter, a higher precision biodegradability prediction can be obtained using continuous modeling techniques.

  5. A Realistic Seizure Prediction Study Based on Multiclass SVM.

    Science.gov (United States)

    Direito, Bruno; Teixeira, César A; Sales, Francisco; Castelo-Branco, Miguel; Dourado, António

    2017-05-01

    A patient-specific algorithm, for epileptic seizure prediction, based on multiclass support-vector machines (SVM) and using multi-channel high-dimensional feature sets, is presented. The feature sets, combined with multiclass classification and post-processing schemes aim at the generation of alarms and reduced influence of false positives. This study considers 216 patients from the European Epilepsy Database, and includes 185 patients with scalp EEG recordings and 31 with intracranial data. The strategy was tested over a total of 16,729.80 h of inter-ictal data, including 1206 seizures. We found an overall sensitivity of 38.47% and a false positive rate per hour of 0.20. The performance of the method achieved statistical significance in 24 patients (11% of the patients). Despite the encouraging results previously reported in specific datasets, the prospective demonstration on long-term EEG recording has been limited. Our study presents a prospective analysis of a large heterogeneous, multicentric dataset. The statistical framework based on conservative assumptions, reflects a realistic approach compared to constrained datasets, and/or in-sample evaluations. The improvement of these results, with the definition of an appropriate set of features able to improve the distinction between the pre-ictal and nonpre-ictal states, hence minimizing the effect of confounding variables, remains a key aspect.
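    The sketch below illustrates the general idea of multiclass SVM classification followed by an alarm-generation post-processing step that suppresses isolated false positives. The synthetic features, class labels, smoothing window and threshold are assumptions for illustration and do not reproduce the study's feature sets or post-processing scheme.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic multi-channel EEG features for three classes:
# 0 = inter-ictal, 1 = pre-ictal, 2 = ictal.
X = np.vstack([rng.normal(m, 1.0, size=(200, 12)) for m in (0.0, 0.8, 1.6)])
y = np.repeat([0, 1, 2], 200)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

def raise_alarms(pre_ictal_flags, window=10, threshold=0.7):
    """'Firing power'-style post-processing: an alarm fires only when at least
    `threshold` of the last `window` classified epochs are pre-ictal, which
    suppresses isolated false positives."""
    flags = np.asarray(pre_ictal_flags, dtype=float)
    power = np.convolve(flags, np.ones(window) / window, mode="same")
    return power >= threshold

pred = clf.predict(X)            # in practice, predictions on unseen epochs
alarms = raise_alarms(pred == 1)
print("epochs flagged as alarms:", int(alarms.sum()))
```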

  6. Copula-based prediction of economic movements

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Hirsh, I. D.

    2016-06-01

    In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits, and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and the dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achieved using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.
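    A simplified sketch of the pre-processing step described above (without the copula-based linking of partitions) is given below: two paired return series are discretized into three categories using empirical quantiles, and a first-order transition matrix over the joint states is estimated. The synthetic returns and quantile thresholds are assumptions.

```python
import numpy as np

def discretize(returns, q_low=0.1, q_high=0.9):
    """Map returns to 3 categories: 0 = high loss, 1 = ordinary, 2 = high profit,
    using empirical quantiles as thresholds."""
    lo, hi = np.quantile(returns, [q_low, q_high])
    return np.digitize(returns, [lo, hi])

rng = np.random.default_rng(2)
div_idx = rng.normal(0, 1, 1000)                        # stand-in for Dividend Index returns
util_idx = 0.6 * div_idx + rng.normal(0, 0.8, 1000)     # correlated Utilities Index returns

joint = 3 * discretize(div_idx) + discretize(util_idx)  # 9 joint states
T = np.zeros((9, 9))
for a, b in zip(joint[:-1], joint[1:]):
    T[a, b] += 1
row_sums = T.sum(axis=1, keepdims=True)
T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)  # transition probabilities
print(T.round(2))
```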

  7. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model that combines the grey model and the Markov model, the prediction of the corrosion rate of nuclear power plant pipelines was studied. Work was done to improve the grey model, and an optimized unbiased grey model was obtained. This new model was used to predict the tendency of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model and the Markov model is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)
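    For orientation, the sketch below implements the classic GM(1,1) grey model (not the optimized unbiased variant, and without the Markov residual correction or rolling operation) on a hypothetical corrosion-rate series.

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Classic GM(1,1) grey model: fit on a short positive series x0 and
    forecast `steps` values ahead (no residual Markov correction here)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development/grey coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.r_[x1_hat[0], np.diff(x1_hat)]           # restore by differencing
    return x0_hat[len(x0):]

corrosion_rate = [0.38, 0.41, 0.45, 0.47, 0.52, 0.55]    # hypothetical mm/year values
print(gm11_forecast(corrosion_rate, steps=2).round(3))
```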

  8. Churn prediction based on text mining and CRM data analysis

    OpenAIRE

    Schatzmann, Anders; Heitz, Christoph; Münch, Thomas

    2014-01-01

    Within quantitative marketing, churn prediction on a single customer level has become a major issue. An extensive body of literature shows that, today, churn prediction is mainly based on structured CRM data. However, in the past years, more and more digitized customer text data has become available, originating from emails, surveys or scripts of phone calls. To date, this data source remains vastly untapped for churn prediction, and corresponding methods are rarely described in literature. ...

  9. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    The reference test dataset is used to test the model. The sensitivity of gender prediction has been increased relative to the current "genotype composition in ChrX"-based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  10. Power Load Prediction Based on Fractal Theory

    OpenAIRE

    Jian-Kai, Liang; Cattani, Carlo; Wan-Qing, Song

    2015-01-01

    The basic theories of load forecasting on the power system are summarized. Fractal theory, which is a new algorithm applied to load forecasting, is introduced. Based on the fractal dimension and fractal interpolation function theories, the correlation algorithms are applied to the model of short-term load forecasting. According to the process of load forecasting, the steps of every process are designed, including load data preprocessing, similar day selecting, short-term load forecasting, and...

  11. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role in human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  13. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty in building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.

  14. Moment based model predictive control for systems with additive uncertainty

    NARCIS (Netherlands)

    Saltik, M.B.; Ozkan, L.; Weiland, S.; Ludlage, J.H.A.

    2017-01-01

    In this paper, we present a model predictive control (MPC) strategy based on the moments of the state variables and the cost functional. The statistical properties of the state predictions are calculated through the open loop iteration of dynamics and used in the formulation of MPC cost function. We

  15. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
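    The model-free update described above can be illustrated with a minimal Rescorla-Wagner/Q-learning style sketch on a made-up two-armed bandit, where action values are adjusted solely by the reward prediction error; the learning rate and reward probabilities are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
true_reward_prob = [0.2, 0.8]      # two-armed bandit, arm 1 is better
Q = np.zeros(2)                    # model-free action values
alpha = 0.1                        # learning rate

for trial in range(200):
    p = np.exp(Q) / np.exp(Q).sum()          # softmax choice over current values
    action = rng.choice(2, p=p)
    reward = float(rng.random() < true_reward_prob[action])
    rpe = reward - Q[action]                 # reward prediction error (expectancy violation)
    Q[action] += alpha * rpe                 # value update driven solely by the RPE

print("learned values:", Q.round(2))         # approaches the true reward probabilities
```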

  16. Slope Deformation Prediction Based on Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Lei JIA

    2013-07-01

    Full Text Available This paper principally studies the prediction of slope deformation based on the Support Vector Machine (SVM). In the prediction process, we explore how to reconstruct the phase space. The geological body's displacement data, obtained from a chaotic time series, are used as the SVM's training samples. Slope displacement caused by multivariable coupling is predicted by means of a single variable. Results show that this model has high fitting accuracy and generalization, and it provides a reference for deformation prediction in slope engineering.
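    A rough sketch of the phase-space reconstruction plus SVM idea is given below: the displacement series is embedded with time delays and an SVR model predicts the next value from the embedded state. The synthetic displacement series, embedding dimension, delay and SVR parameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.svm import SVR

def embed(series, dim=4, tau=2):
    """Time-delay (phase-space) reconstruction: each sample becomes the vector
    [x(t), x(t-tau), ..., x(t-(dim-1)*tau)], with the next value as the target."""
    X, y = [], []
    for t in range((dim - 1) * tau, len(series) - 1):
        X.append([series[t - i * tau] for i in range(dim)])
        y.append(series[t + 1])
    return np.array(X), np.array(y)

t = np.arange(300)
displacement = 0.02 * t + 0.5 * np.sin(t / 15) + np.random.default_rng(4).normal(0, 0.05, 300)

X, y = embed(displacement)
split = int(0.8 * len(X))
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE (mm):", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 4))
```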

  17. Protein-Based Urine Test Predicts Kidney Transplant Outcomes

    Science.gov (United States)

    News Release, Thursday, August 22, 2013: Protein-based urine test predicts kidney transplant outcomes. NIH- ... supporting development of noninvasive tests. Levels of a protein in the urine of kidney transplant recipients can ...

  18. Climate Based Predictability of Oil Palm Tree Yield in Malaysia.

    Science.gov (United States)

    Oettli, Pascal; Behera, Swadhin K; Yamagata, Toshio

    2018-02-02

    The influence of local conditions and remote climate modes on the interannual variability of oil palm fresh fruit bunch (FFB) total yields in Malaysia and two major regions (Peninsular Malaysia and Sabah/Sarawak) is explored. On a country scale, the state of sea-surface temperatures (SST) in the tropical Pacific Ocean during the previous boreal winter is found to influence the regional climate. When El Niño occurs in the Pacific Ocean, rainfall in Malaysia decreases while air temperature increases, generating a high level of water stress for palm trees. As a result, the yearly production of FFB becomes lower than that of a normal year, since water stress during the boreal spring has an important impact on the total annual yields of FFB. Conversely, La Niña sets favorable conditions for palm trees to produce more FFB by reducing the risk of water stress. The region of the Leeuwin Current also seems to play a secondary role in the interannual variability of FFB yields through the Ningaloo Niño/Niña. Based on these findings, a linear model is constructed and its ability to reproduce the interannual signal is assessed. This model has shown some skill in predicting the total FFB yield.

  19. Predicting response to incretin-based therapy

    Directory of Open Access Journals (Sweden)

    Agrawal N

    2011-04-01

    Full Text Available Sanjay Kalra1, Bharti Kalra2, Rakesh Sahay3, Navneet Agrawal41Department of Endocrinology, 2Department of Diabetology, Bharti Hospital, Karnal, India; 3Department of Endocrinology, Osmania Medical College, Hyderabad, India; 4Department of Medicine, GR Medical College, Gwalior, IndiaAbstract: There are two important incretin hormones, glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). The biological activities of GLP-1 include stimulation of glucose-dependent insulin secretion and insulin biosynthesis, inhibition of glucagon secretion and gastric emptying, and inhibition of food intake. GLP-1 appears to have a number of additional effects in the gastrointestinal tract and central nervous system. Incretin-based therapy includes GLP-1 receptor agonists such as human GLP-1 analogs (liraglutide) and exendin-4-based molecules (exenatide), as well as DPP-4 inhibitors like sitagliptin, vildagliptin and saxagliptin. Most of the published studies showed a significant reduction in HbA1c using these drugs. A critical analysis of the reported data shows that the response rate of these drugs, in terms of patients achieving targets, is average. One of the first actions identified for GLP-1 was the glucose-dependent stimulation of insulin secretion from islet cell lines. Following the detection of GLP-1 receptors on islet beta cells, a large body of evidence has accumulated illustrating that GLP-1 exerts multiple actions on various signaling pathways and gene products in the ß cell. GLP-1 controls glucose homeostasis through well-defined actions on the islet ß cell via stimulation of insulin secretion and preservation and expansion of ß cell mass. In summary, there are several factors determining the response rate to incretin therapy. Currently minimal clinical data are available to make a conclusion. Key factors appear to be duration of diabetes, obesity, presence of autonomic neuropathy, resting energy expenditure, plasma glucagon levels and

  20. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  1. Deep-Learning-Based Approach for Prediction of Algal Blooms

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    2016-10-01

    Full Text Available Algal blooms have recently become a critical global environmental concern which might put economic development and sustainability at risk. However, the accurate prediction of algal blooms remains a challenging scientific problem. In this study, a novel prediction approach for algal blooms based on deep learning is presented—a powerful tool to represent and predict highly dynamic and complex phenomena. The proposed approach constructs a five-layered model to extract detailed relationships between the density of phytoplankton cells and various environmental parameters. The algal blooms can be predicted by the phytoplankton density obtained from the output layer. A case study is conducted in coastal waters of East China using both our model and a traditional back-propagation neural network for comparison. The results show that the deep-learning-based model yields better generalization and greater accuracy in predicting algal blooms than a traditional shallow neural network does.
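    The sketch below gives a rough feel for the approach: a small multilayer network (three hidden layers, i.e., five layers counting input and output) maps environmental parameters to phytoplankton density, and a bloom is flagged when the predicted density exceeds a threshold. The synthetic drivers, network sizes and bloom threshold are assumptions and do not correspond to the paper's architecture or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 800
# Synthetic environmental drivers: temperature, nitrate, phosphate, irradiance.
X = rng.uniform([10, 0, 0, 50], [30, 40, 4, 300], size=(n, 4))
density = (100 + 8 * X[:, 0] + 3 * X[:, 1] + 60 * X[:, 2] + 0.4 * X[:, 3]
           + 0.15 * X[:, 1] * X[:, 2] + rng.normal(0, 20, n))   # cells/mL, made up

Xs = StandardScaler().fit_transform(X)
split = int(0.8 * n)
# Three hidden layers -> a five-layered network counting input and output layers.
net = MLPRegressor(hidden_layer_sizes=(64, 32, 16), max_iter=3000, random_state=0)
net.fit(Xs[:split], density[:split])
pred = net.predict(Xs[split:])
bloom = pred > 300        # flag a bloom whenever the predicted density exceeds a threshold
print("R^2:", round(net.score(Xs[split:], density[split:]), 3),
      "| blooms flagged:", int(bloom.sum()))
```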

  2. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Full Text Available Effective fault diagnosis and reasonable life expectancy prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and the working environment. At present, equipment life prediction methods include life prediction based on condition monitoring, combined forecasting models, and data-driven approaches, most of which require a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying a complex multifactor correlated life model, we have established a life prediction model of moderate complexity. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to the complex life prediction of equipment.

  3. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
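    A heavily simplified sketch of the finite-state idea is shown below: a first-order transition matrix over the three breathing states is estimated from a labelled sequence and used to predict the most likely next state and an associated typical velocity. The full method's hidden Markov model, k-means clustering of observables and gating analysis are not reproduced; the state sequence and velocities are made up.

```python
import numpy as np

STATES = ["exhale", "end_of_exhale", "inhale"]

def fit_transitions(state_seq, n_states=3):
    """Estimate a first-order transition matrix from an observed breathing-state sequence."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(state_seq[:-1], state_seq[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

# Hypothetical labelled trace: exhale -> end_of_exhale -> inhale -> exhale -> ...
seq = [0, 0, 1, 2, 2, 0, 0, 1, 2, 2, 0, 1, 2, 0, 0, 1, 2, 2, 0]
mean_velocity = {0: -4.0, 1: 0.2, 2: 5.0}     # mm/s per state, made-up cluster centers

T = fit_transitions(seq)
current = seq[-1]
next_state = int(np.argmax(T[current]))       # most likely next breathing state
print("next state:", STATES[next_state],
      "| expected velocity:", mean_velocity[next_state], "mm/s")
```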

  4. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    Science.gov (United States)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and the user should predict their RUL. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of the Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process is based on fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process is based on real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM performs well in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.

  5. An automated patient recognition method based on an image-matching technique using previous chest radiographs in the picture archiving and communication system environment

    International Nuclear Information System (INIS)

    Morishita, Junji; Katsuragawa, Shigehiko; Kondo, Keisuke; Doi, Kunio

    2001-01-01

    An automated patient recognition method for correcting 'wrong' chest radiographs being stored in a picture archiving and communication system (PACS) environment has been developed. The method is based on an image-matching technique that uses previous chest radiographs. For identification of a 'wrong' patient, the correlation value was determined for a previous image of a patient and a new, current image of the presumed corresponding patient. The current image was shifted horizontally and vertically and rotated, so that we could determine the best match between the two images. The results indicated that the correlation values between the current and previous images for the same, 'correct' patients were generally greater than those for different, 'wrong' patients. Although the two histograms for the same patient and for different patients overlapped at correlation values greater than 0.80, most parts of the histograms were separated. The correlation value was compared with a threshold value that was determined based on an analysis of the histograms of correlation values obtained for the same patient and for different patients. If the current image is considered potentially to belong to a 'wrong' patient, then a warning sign with the probability for a 'wrong' patient is provided to alert radiology personnel. Our results indicate that at least half of the 'wrong' images in our database can be identified correctly with the method described in this study. The overall performance in terms of a receiver operating characteristic curve showed a high performance of the system. The results also indicate that some readings of 'wrong' images for a given patient in the PACS environment can be prevented by use of the method we developed. Therefore an automated warning system for patient recognition would be useful in correcting 'wrong' images being stored in the PACS environment
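    The correlation-based matching can be sketched as below: the current image is shifted and rotated over a small grid, the Pearson correlation with the previous radiograph is computed for each transform, and the maximum is compared with a threshold (0.80, as in the abstract). The random test images, search grid and use of scipy.ndimage are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def best_match_correlation(previous, current, shifts=(-4, 0, 4), angles=(-3, 0, 3)):
    """Maximum Pearson correlation between the previous radiograph and the current
    one over a small grid of translations (pixels) and rotations (degrees)."""
    best = -1.0
    for angle in angles:
        rotated = ndimage.rotate(current, angle, reshape=False, order=1)
        for dy in shifts:
            for dx in shifts:
                moved = ndimage.shift(rotated, (dy, dx), order=1)
                r = np.corrcoef(previous.ravel(), moved.ravel())[0, 1]
                best = max(best, r)
    return best

rng = np.random.default_rng(6)
previous = rng.random((64, 64))
same_patient = previous + rng.normal(0, 0.05, (64, 64))   # re-acquired image, small noise
other_patient = rng.random((64, 64))                       # unrelated "wrong" patient

for label, img in [("same patient", same_patient), ("different patient", other_patient)]:
    r = best_match_correlation(previous, img)
    print(label, round(r, 2), "-> warning" if r < 0.80 else "-> accepted")
```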

  6. Is previous disaster experience a good predictor for disaster preparedness in extreme poverty households in remote Muslim minority based community in China?

    Science.gov (United States)

    Chan, Emily Y Y; Kim, Jean H; Lin, Cherry; Cheung, Eliza Y L; Lee, Polly P Y

    2014-06-01

    Disaster preparedness is an important preventive strategy for protecting health and mitigating adverse health effects of unforeseen disasters. A multi-site ethnic minority project (2009-2015) has been set up to examine health and disaster preparedness related issues in remote, rural, disaster-prone communities in China. The primary objective of this reported study is to examine whether previous disaster experience significantly increases household disaster preparedness levels in remote villages in China. A cross-sectional household survey was conducted in January 2011 in Gansu Province, in a predominantly Hui minority village. Factors related to disaster preparedness were explored using quantitative methods. Two focus groups were also conducted to provide additional contextual explanations for the quantitative findings of this study. The village household response rate was 62.4% (n = 133). Although previous disaster exposure was significantly associated with the perception of living in a high disaster risk area (OR = 6.16), only 10.7% of households possessed a disaster emergency kit. Of note, among households with members who had non-communicable diseases, 9.6% had prepared extra medications to sustain clinical management of their chronic conditions. This is the first study to examine disaster preparedness in an ethnic minority population in remote communities in rural China. Our results indicate the need for disaster mitigation education to promote preparedness in remote, resource-poor communities.

  7. Laparoscopy After Previous Laparotomy

    Directory of Open Access Journals (Sweden)

    Zulfo Godinjak

    2006-11-01

    Full Text Available Following abdominal surgery, extensive adhesions often occur and can cause difficulties during laparoscopic operations. However, previous laparotomy is not considered a contraindication for laparoscopy. The aim of this study is to show that insertion of a Veress needle in the umbilical region is a safe method for creating a pneumoperitoneum for laparoscopic operations after previous laparotomy. In the last three years, we have performed 144 laparoscopic operations in patients who had previously undergone one or two laparotomies. Pathology of the digestive system, genital organs, Cesarean section or abdominal war injuries were the most common reasons for previous laparotomy. During these operations, and while entering the abdominal cavity, we did not experience any complications, while in 7 patients we performed conversion to laparotomy following diagnostic laparoscopy. In all patients the Veress needle and trocar were inserted in the umbilical region, namely the closed laparoscopy technique. Adhesions in the umbilical region were not found in any patient, and no abdominal organs were injured.

  8. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
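    The bootstrap idea can be sketched as follows: patient records are resampled with replacement, a simple dose-response model is refit on each resample, and the spread of plan-specific predictions (with residual noise added back) forms the uncertainty histogram. The exponential dose-response form, synthetic data and parameter values are placeholders, not the paper's salivary-function model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic patient data: mean parotid dose (Gy) vs relative post-treatment salivary function.
dose = rng.uniform(5, 60, 80)
observed = np.clip(np.exp(-0.05 * dose) + rng.normal(0, 0.08, dose.size), 0, 1.2)

def fit_exponential(d, y):
    """Least-squares fit of y = exp(-k*d) via log-linear regression (y clipped > 0)."""
    y = np.clip(y, 1e-3, None)
    return -np.polyfit(d, np.log(y), 1)[0]     # returns k

residual_sd = np.std(observed - np.exp(-fit_exponential(dose, observed) * dose))

new_plan_dose = 35.0
predictions = []
for _ in range(2000):                          # bootstrap over patients
    idx = rng.integers(0, dose.size, dose.size)
    k = fit_exponential(dose[idx], observed[idx])
    predictions.append(np.exp(-k * new_plan_dose) + rng.normal(0, residual_sd))  # add residual noise

lo, hi = np.percentile(predictions, [2.5, 97.5])
print(f"predicted salivary function: {np.mean(predictions):.2f} (95% interval {lo:.2f}-{hi:.2f})")
```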

  9. Meta-path based heterogeneous combat network link prediction

    Science.gov (United States)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all the six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or reach a certain maximum iteration number. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.

  10. Real-time prediction of respiratory motion based on local regression methods

    International Nuclear Information System (INIS)

    Ruan, D; Fessler, J A; Balter, J M

    2007-01-01

    Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and predicting target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data using the semi-periodicity of respiratory motion, and (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between the predicted and observed tumor positions for ten patients. For comparison, we also applied commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering, to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths
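    A toy sketch of the three ingredients is given below: a delay-embedded (augmented) state, locally weighted averaging over similar past states, and a fading factor that down-weights older neighbours. The embedding dimension, neighbourhood size, fading constant and synthetic breathing trace are assumptions.

```python
import numpy as np

def predict_local(series, dim=4, k=10, horizon=5, fade=0.995):
    """Predict series[t + horizon] from the current delay-embedded state by
    locally weighted averaging over similar past states; older neighbours are
    down-weighted by `fade` to model fading temporal correlations."""
    x = np.asarray(series, dtype=float)
    states = np.array([x[t - dim + 1:t + 1] for t in range(dim - 1, len(x) - horizon)])
    targets = x[dim - 1 + horizon:]
    query = x[-dim:]
    d = np.linalg.norm(states - query, axis=1)
    age = len(states) - 1 - np.arange(len(states))
    w = np.exp(-d / (d.std() + 1e-9)) * fade ** age      # similarity x recency weights
    nn = np.argsort(-w)[:k]
    return float(np.average(targets[nn], weights=w[nn]))

t = np.arange(600)
breathing = 10 * np.sin(2 * np.pi * t / 80) + np.random.default_rng(8).normal(0, 0.4, 600)
print("predicted position (mm):", round(predict_local(breathing), 2))
```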

  11. A CBR-Based and MAHP-Based Customer Value Prediction Model for New Product Development

    Science.gov (United States)

    Zhao, Yu-Jie; Luo, Xin-xing; Deng, Li

    2014-01-01

    In the fierce market environment, an enterprise that wants to meet customer needs and boost its market profit and share must focus on new product development. To overcome the limitations of previous research, Chan et al. proposed a dynamic decision support system to predict the customer lifetime value (CLV) for new product development. However, to better meet customer needs, there are still some deficiencies in their model, so this study proposes a CBR-based and MAHP-based customer value prediction model for a new product (C&M-CVPM). CBR (case based reasoning) can reduce experts' workload and evaluation time, while MAHP (multiplicative analytic hierarchy process) can use actual but averaged influencing factors' effectiveness in simulation; at the same time, C&M-CVPM uses dynamic customers' transition probabilities, which is closer to reality. This study not only introduces the realization of CBR and MAHP, but also elaborates C&M-CVPM's three main modules. The application of the proposed model is illustrated and confirmed to be sensible and convincing through a simulation experiment. PMID:25162050

  12. NAPR: a Cloud-Based Framework for Neuroanatomical Age Prediction.

    Science.gov (United States)

    Pardoe, Heath R; Kuzniecky, Ruben

    2018-01-01

    The availability of cloud computing services has enabled the widespread adoption of the "software as a service" (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named "NAPR" (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6-89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.

  13. Next Place Prediction Based on Spatiotemporal Pattern Mining of Mobile Device Logs

    Directory of Open Access Journals (Sweden)

    Sungjun Lee

    2016-01-01

    Full Text Available Due to the recent explosive growth of location-aware services based on mobile devices, predicting the next places of a user is of increasing importance to enable proactive information services. In this paper, we introduce a data-driven framework that aims to predict the user's next places using his/her past visiting patterns analyzed from mobile device logs. Specifically, the notion of the spatiotemporal-periodic (STP) pattern is proposed to capture visits with spatiotemporal periodicity by focusing on a detailed level of location for each individual. Subsequently, we present algorithms that extract the STP patterns from a user's past visiting behaviors and predict the next places based on the patterns. The experimental results obtained by using a real-world dataset show that the proposed methods are more effective in predicting the user's next places than the previous approaches considered, in most cases.

  14. Next Place Prediction Based on Spatiotemporal Pattern Mining of Mobile Device Logs.

    Science.gov (United States)

    Lee, Sungjun; Lim, Junseok; Park, Jonghun; Kim, Kwanho

    2016-01-23

    Due to the recent explosive growth of location-aware services based on mobile devices, predicting the next places of a user is of increasing importance to enable proactive information services. In this paper, we introduce a data-driven framework that aims to predict the user's next places using his/her past visiting patterns analyzed from mobile device logs. Specifically, the notion of the spatiotemporal-periodic (STP) pattern is proposed to capture the visits with spatiotemporal periodicity by focusing on a detail level of location for each individual. Subsequently, we present algorithms that extract the STP patterns from a user's past visiting behaviors and predict the next places based on the patterns. The experiment results obtained by using a real-world dataset show that the proposed methods are more effective in predicting the user's next places than the previous approaches considered in most cases.

  15. Using the genome aggregation database, computational pathogenicity prediction tools, and patch clamp heterologous expression studies to demote previously published long QT syndrome type 1 mutations from pathogenic to benign.

    Science.gov (United States)

    Clemens, Daniel J; Lentino, Anne R; Kapplinger, Jamie D; Ye, Dan; Zhou, Wei; Tester, David J; Ackerman, Michael J

    2018-04-01

    Mutations in the KCNQ1-encoded Kv7.1 potassium channel cause long QT syndrome (LQTS) type 1 (LQT1). It has been suggested that ∼10%-20% of rare LQTS case-derived variants in the literature may have been published erroneously as LQT1-causative mutations and may be "false positives." The purpose of this study was to determine which previously published KCNQ1 case variants are likely false positives. A list of all published, case-derived KCNQ1 missense variants (MVs) was compiled. The occurrence of each MV within the Genome Aggregation Database (gnomAD) was assessed. Eight in silico tools were used to predict each variant's pathogenicity. Case-derived variants that were either (1) too frequently found in gnomAD or (2) absent in gnomAD but predicted to be pathogenic by ≤2 tools were considered potential false positives. Three of these variants were characterized functionally using whole-cell patch clamp technique. Overall, there were 244 KCNQ1 case-derived MVs. Of these, 29 (12%) were seen in ≥10 individuals in gnomAD and are demotable. However, 157 of 244 MVs (64%) were absent in gnomAD. Of these, 7 (4%) were predicted to be pathogenic by ≤2 tools, 3 of which we characterized functionally. There was no significant difference in current density between heterozygous KCNQ1-F127L, -P477L, or -L619M variant-containing channels compared to KCNQ1-WT. This study offers preliminary evidence for the demotion of 32 (13%) previously published LQT1 MVs. Of these, 29 were demoted because of their frequent sighting in gnomAD. Additionally, in silico analysis and in vitro functional studies have facilitated the demotion of 3 ultra-rare MVs (F127L, P477L, L619M). Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  16. Implementation of neural network based non-linear predictive

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems including open loop unstable and non-minimum phase systems, but has also been proposed extended for the control of non......-linear systems. GPC is model-based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi......-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system....

  17. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    Science.gov (United States)

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.

  18. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dao-Wei Bi

    2007-03-01

    Full Text Available Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
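    The particle-filter-based prediction step can be illustrated with the minimal bootstrap filter below, which tracks a one-dimensional target and uses the one-step-ahead position estimate to decide which (hypothetical) sensor nodes to wake; the motion and measurement models, noise levels and node layout are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n_particles, steps = 500, 30
true_pos, true_vel = 0.0, 1.0

# Particles hold (position, velocity); start from a diffuse prior.
particles = np.column_stack([rng.normal(0, 2, n_particles), rng.normal(1, 0.5, n_particles)])
weights = np.full(n_particles, 1.0 / n_particles)

for t in range(steps):
    true_pos += true_vel
    z = true_pos + rng.normal(0, 0.5)                                        # noisy measurement
    particles[:, 0] += particles[:, 1] + rng.normal(0, 0.2, n_particles)     # predict
    weights *= np.exp(-0.5 * ((z - particles[:, 0]) / 0.5) ** 2)             # weight
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)                    # resample
    particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

predicted_next = float(np.mean(particles[:, 0] + particles[:, 1]))   # one-step-ahead position
wake_nodes = [node for node in range(0, 40, 5) if abs(node - predicted_next) < 6]
print("predicted position:", round(predicted_next, 1), "| nodes to wake:", wake_nodes)
```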

  19. Time-sensitive Customer Churn Prediction based on PU Learning

    OpenAIRE

    Wang, Li; Chen, Chaochao; Zhou, Jun; Li, Xiaolong

    2018-01-01

    With the fast development of Internet companies throughout the world, customer churn has become a serious concern. To better help the companies retain their customers, it is important to build a customer churn prediction model to identify the customers who are most likely to churn ahead of time. In this paper, we propose a Time-sensitive Customer Churn Prediction (TCCP) framework based on Positive and Unlabeled (PU) learning technique. Specifically, we obtain the recent data by shortening the...

  20. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Full Text Available Rapid growth and storage of biomedical data enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selection of the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data.

  1. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can predict traffic state trends over a certain period in the future. From the perspective of dynamic decision-making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models for prediction.
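    A hedged sketch of multi-step prediction with SVR is given below: the model is trained on lagged traffic volumes and then applied recursively, feeding each forecast back as the newest lag. The synthetic volume series, number of lags and SVR parameters are placeholders, and the sketch does not reproduce the paper's SVM-HPT input design.

```python
import numpy as np
from sklearn.svm import SVR

def lagged(series, lags=8):
    """Turn a univariate series into (lagged inputs, next value) pairs."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    return X, series[lags:]

rng = np.random.default_rng(10)
t = np.arange(800)
volume = 300 + 150 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 15, 800)  # veh/15 min, synthetic

X, y = lagged(volume)
model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X, y)

# Recursive multi-step prediction: feed each forecast back in as the newest lag.
history = list(volume[-8:])
multi_step = []
for _ in range(6):                      # predict the next 6 intervals (1.5 h ahead)
    nxt = float(model.predict(np.array(history[-8:]).reshape(1, -1))[0])
    multi_step.append(round(nxt, 1))
    history.append(nxt)
print(multi_step)
```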

  2. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    Science.gov (United States)

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

    Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from a massive set of GO terms as defined by GO is a difficult challenge. To address this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash firstly measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The code of HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. The effect of genealogy-based haplotypes on genomic prediction

    DEFF Research Database (Denmark)

    Edriss, Vahid; Fernando, Rohan L.; Su, Guosheng

    2013-01-01

    on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using...... local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (pi) of the haplotype covariates had zero effect......, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individuals markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some...

  4. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Science.gov (United States)

    Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong

    2015-01-01

    Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  5. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We can note that clustering information plays an important role in solving the link prediction problem. In the previous literature, we find that the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node towards different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node clustering based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node clustering based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
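
    For orientation, the sketch below scores candidate links by summing the clustering coefficients of their common neighbours, the kind of node-clustering-based index the paper starts from; the ALC coefficient itself is not reproduced here, and the example network is simply NetworkX's built-in karate club graph.

```python
import networkx as nx

G = nx.karate_club_graph()
cc = nx.clustering(G)                      # node clustering coefficients, computed once

def clustering_weighted_cn(u, v):
    # common-neighbour score weighted by each neighbour's clustering coefficient
    return sum(cc[w] for w in nx.common_neighbors(G, u, v))

candidates = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
top_l = sorted(candidates, key=lambda e: clustering_weighted_cn(*e), reverse=True)[:5]
print(top_l)                               # top-L candidate links (L = 5 here)
```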

  6. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC....
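
    The sketch below illustrates the unconstrained core of such a controller: future outputs are predicted from finite impulse response coefficients and the input sequence over the horizon is obtained by regularized (l2) least squares, with only the first move applied. The impulse response, horizon, and weights are made-up values, and the input and input-rate constraints treated in the paper are omitted.

```python
import numpy as np

h = np.array([0.0, 0.1, 0.3, 0.25, 0.15, 0.1])   # assumed finite impulse response
N = 10                                           # prediction horizon
lam = 0.1                                        # move-suppression (regularization) weight

# dynamic matrix mapping future inputs to predicted future outputs
Phi = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        if i - j < len(h):
            Phi[i, j] = h[i - j]

r = np.ones(N)          # output reference over the horizon
free = np.zeros(N)      # free response (past inputs assumed zero here)
u = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ (r - free))
print(u[0])             # receding horizon: only the first input move is applied
```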

  7. A novel pH-responsive hydrogel-based on calcium alginate engineered by the previous formation of polyelectrolyte complexes (PECs) intended to vaginal administration.

    Science.gov (United States)

    Ferreira, Natália Noronha; Perez, Taciane Alvarenga; Pedreiro, Liliane Neves; Prezotti, Fabíola Garavello; Boni, Fernanda Isadora; Cardoso, Valéria Maria de Oliveira; Venâncio, Tiago; Gremião, Maria Palmira Daflon

    2017-10-01

    This work aimed to develop a calcium alginate hydrogel as a pH-responsive delivery system for the sustained release of polymyxin B (PMX) through the vaginal route. Two samples of sodium alginate from different suppliers were characterized. The molecular weight and M/G ratio determined were, approximately, 107 kDa and 1.93 for alginate_S and 32 kDa and 1.36 for alginate_V. Polymer rheological investigations were further performed through the preparation of hydrogels. Alginate_V was selected for subsequent incorporation of PMX because it formed a pseudoplastic viscous system able to acquire a differential structure in the simulated vaginal microenvironment (pH 4.5). The PMX-loaded hydrogel (hydrogel_PMX) was engineered based on the formation of polyelectrolyte complexes (PECs) between alginate and PMX followed by crosslinking with calcium chloride. This system exhibited a morphology with variable pore sizes, ranging from 100 to 200 μm, and adequate syringeability. The liquid uptake ability of the hydrogel in an acidic environment was minimized by the previous PEC formation. In vitro tests evidenced the hydrogels' mucoadhesiveness. PMX release was pH-dependent and the system was able to sustain the release for up to 6 days. A burst release was observed at pH 7.4 and drug release was driven by anomalous transport, as determined by the Korsmeyer-Peppas model. At pH 4.5, drug release correlated with the Weibull model and drug transport was driven by Fickian diffusion. The calcium alginate hydrogels engineered by the previous formation of PECs proved to be a promising platform for sustained release of cationic drugs through vaginal administration.

  8. Trial of labour and vaginal birth after previous caesarean section: A population based study of Eastern African immigrants in Victoria, Australia.

    Science.gov (United States)

    Belihu, Fetene B; Small, Rhonda; Davey, Mary-Ann

    2017-03-01

    Variations in caesarean section (CS) between some immigrant groups and receiving country populations have been widely reported. Often, African immigrant women are at higher risk of CS than the receiving population in developed countries. However, evidence about subsequent mode of birth following CS for African women post-migration is lacking. The objective of this study was to examine differences in attempted and successful vaginal birth after previous caesarean (VBAC) for Eastern African immigrants (Eritrea, Ethiopia, Somalia and Sudan) compared with Australian-born women. A population-based observational study was conducted using the Victorian Perinatal Data Collection. Pearson's chi-square test and logistic regression analysis were performed to generate adjusted odds ratios for attempted and successful VBAC. Victoria, Australia. 554 Eastern African immigrants and 24,587 Australian-born eligible women with previous CS having singleton births in public care. 41.5% of Eastern African immigrant women and 26.1% Australian-born women attempted a VBAC with 50.9% of Eastern African immigrants and 60.5% of Australian-born women being successful. After adjusting for maternal demographic characteristics and available clinical confounding factors, Eastern African immigrants were more likely to attempt (OR adj 1.94, 95% CI 1.57-2.47) but less likely to succeed (OR adj 0.54 95% CI 0.41-0.71) in having a VBAC. There are disparities in attempted and successful VBAC between Eastern African origin and Australian-born women. Unsuccessful VBAC attempt is more common among Eastern African immigrants, suggesting the need for improved strategies to select and support potential candidates for vaginal birth among these immigrants to enhance success and reduce potential complications associated with failed VBAC attempt. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  9. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are often faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including the multivariate nonlinear regression (MNLR) model, the artificial neural network (ANN) model, and the Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
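
    As an illustration of how the Markov chain (MC) approach propagates pavement condition, the sketch below applies an assumed transition probability matrix to a condition-state distribution year by year; the matrix entries are placeholders rather than values calibrated to the survey data discussed above.

```python
import numpy as np

# condition states 1 (best) .. 4 (worst); rows sum to 1, no self-repair assumed
P = np.array([[0.85, 0.12, 0.03, 0.00],
              [0.00, 0.80, 0.15, 0.05],
              [0.00, 0.00, 0.75, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0])     # all sections start in the best state
for year in range(1, 11):
    state = state @ P                      # one-year transition
    print(year, np.round(state, 3))
```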

  10. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States)

    2016-01-15

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to planning target volume (PTV) and organ-at-risks (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict dose matrix and the predictive accuracy was evaluated using the dose difference between the clinical plan and prediction, δD = D_clin − D_pred. The mean (〈δD_r〉), standard deviation (σ_δD_r), and their interquartile range (IQR) for the training plans were evaluated at a 2–3 mm interval from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models which were trained using all plans in the cohorts were created for each treatment site. The models predict approximately the average quality of OAR sparing. Emphasizing a subset of plans that exhibited superior to the average OAR sparing during training, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted further sparing of the OARs was achievable. Replans were performed to test if the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels irrespective of organ delineation ranged from −1% to 0% with maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The

  11. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.

    2016-01-01

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to planning target volume (PTV) and organ-at-risks (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict dose matrix and the predictive accuracy was evaluated using the dose difference between the clinical plan and prediction, δD = D_clin − D_pred. The mean (〈δD_r〉), standard deviation (σ_δD_r), and their interquartile range (IQR) for the training plans were evaluated at a 2–3 mm interval from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models which were trained using all plans in the cohorts were created for each treatment site. The models predict approximately the average quality of OAR sparing. Emphasizing a subset of plans that exhibited superior to the average OAR sparing during training, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted further sparing of the OARs was achievable. Replans were performed to test if the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels irrespective of organ delineation ranged from −1% to 0% with maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The average prediction error was less
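
    A minimal sketch of the underlying idea, on synthetic geometry: a small neural network regressor learns voxel dose from simple patient-specific features such as the distance to the PTV boundary. The feature set, dose model, and network size here are stand-ins, not the trained models of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
r_ptv = rng.uniform(-6, 30, 5000)        # signed distance to PTV boundary (mm)
r_oar = rng.uniform(0, 40, 5000)         # distance to the nearest OAR (mm)
# synthetic dose falling off with distance outside the PTV (81 Gy prescription)
dose = 81.0 * np.exp(-np.clip(r_ptv, 0, None) / 12.0) + rng.normal(0, 1.0, 5000)

X = np.column_stack([r_ptv, r_oar])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0).fit(X, dose)
print(ann.predict([[5.0, 10.0]]))        # predicted voxel dose (Gy) for one feature pair
```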

  12. Static Formation Temperature Prediction Based on Bottom Hole Temperature

    Directory of Open Access Journals (Sweden)

    Changwei Liu

    2016-08-01

    Full Text Available Static formation temperature (SFT) is required to determine the thermophysical properties and production parameters in geothermal and oil reservoirs. However, it is not easy to determine SFT by both experimental and physical methods. In this paper, a mathematical approach to predicting SFT, based on a new model describing the relationship between bottom hole temperature (BHT) and shut-in time, has been proposed. The unknown coefficients of the model were derived from the least squares fit by the particle swarm optimization (PSO) algorithm. Additionally, the ability to predict SFT using a few BHT data points (such as the first three, four, or five points of a data set) was evaluated. The accuracy of the proposed method to predict SFT was confirmed by a deviation percentage less than ±4% and a high regression coefficient R2 (>0.98). The proposed method could be used as a practical tool to predict SFT in both geothermal and oil wells.
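
    The following sketch illustrates extrapolating the static formation temperature (SFT) from a few bottom-hole temperature (BHT) measurements. The paper's specific BHT/shut-in-time model and its PSO-based fit are not reproduced; an exponential build-up form and a SciPy least-squares fit are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def bht_model(t, sft, dT, k):
    # BHT approaches the static formation temperature as shut-in time grows
    return sft - dT * np.exp(-k * t)

t_shut = np.array([4.0, 8.0, 12.0, 16.0, 20.0])       # shut-in times (h), made-up data
bht = np.array([92.1, 97.8, 101.2, 103.3, 104.6])     # measured BHT (deg C), made-up data

params, _ = curve_fit(bht_model, t_shut, bht, p0=[110.0, 30.0, 0.1])
print("Predicted SFT: %.1f degC" % params[0])
```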

  13. Prediction on carbon dioxide emissions based on fuzzy rules

    Science.gov (United States)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most of the conventional methods are not able to provide good forecasting performance due to problems with the non-linearity, uncertainty, and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare prediction performance. Data on five variables (energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity) are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.

  14. Drug-target interaction prediction from PSSM based evolutionary information.

    Science.gov (United States)

    Mousavian, Zaynab; Khakabimamaghani, Sahand; Kavousi, Kaveh; Masoudi-Nejad, Ali

    2016-01-01

    The labor-intensive and expensive experimental process of drug-target interaction prediction has motivated many researchers to focus on in silico prediction, which provides helpful information to support the experimental interaction data. Therefore, they have proposed several computational approaches for discovering new drug-target interactions. Several learning-based methods have been increasingly developed, which can be categorized into two main groups: similarity-based and feature-based. In this paper, we first use the bi-gram features extracted from the Position Specific Scoring Matrix (PSSM) of proteins in predicting drug-target interactions. Our results demonstrate the high-confidence prediction ability of the Bigram-PSSM model in terms of several performance indicators, specifically for enzymes and ion channels. Moreover, we investigate the impact of the negative selection strategy on the performance of the prediction, which is not widely taken into account in other relevant studies. This is important, as the number of non-interacting drug-target pairs is usually extremely large in comparison with the number of interacting ones in existing drug-target interaction data. An interesting observation is that different levels of performance reduction have been attained for four datasets when we change the sampling method from random sampling to balanced sampling. Copyright © 2015 Elsevier Inc. All rights reserved.
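
    For concreteness, the sketch below computes the standard bigram transformation of a PSSM: for each ordered pair of amino-acid columns the scores of consecutive residues are multiplied and summed over the sequence, giving the 400-dimensional descriptor that can be fed to a drug-target interaction classifier. The random matrix stands in for a real sequence profile.

```python
import numpy as np

def bigram_pssm(pssm):
    # pssm: (L, 20) position-specific scoring matrix for a protein of length L
    return sum(np.outer(pssm[i], pssm[i + 1]) for i in range(len(pssm) - 1)).ravel()

rng = np.random.default_rng(0)
pssm = rng.normal(size=(150, 20))        # stand-in profile for a 150-residue protein
features = bigram_pssm(pssm)
print(features.shape)                    # (400,) feature vector per protein
```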

  15. Comparison of Ablation Predictions for Carbonaceous Materials Using CEA and JANAF-Based Species Thermodynamics

    Science.gov (United States)

    Milos, Frank S.

    2011-01-01

    In most previous work at NASA Ames Research Center, ablation predictions for carbonaceous materials were obtained using a species thermodynamics database developed by Aerotherm Corporation. This database is derived mostly from the JANAF thermochemical tables. However, the CEA thermodynamics database, also used by NASA, is considered more up to date. In this work, the FIAT code was modified to use CEA-based curve fits for species thermodynamics, then analyses using both the JANAF and CEA thermodynamics were performed for carbon and carbon phenolic materials over a range of test conditions. The ablation predictions are comparable at lower heat fluxes where the dominant mechanism is carbon oxidation. However, the predictions begin to diverge in the sublimation regime, with the CEA model predicting lower recession. The disagreement is more significant for carbon phenolic than for carbon, and this difference is attributed to hydrocarbon species that may contribute to the ablation rate.

  16. A time series based sequence prediction algorithm to detect activities of daily living in smart home.

    Science.gov (United States)

    Marufuzzaman, M; Reaz, M B I; Ali, M A M; Rahman, L F

    2015-01-01

    The goal of smart homes is to create an intelligent environment adapting to the inhabitants' needs and assisting persons who need special care and safety in their daily lives. This can be achieved by collecting ADL (activities of daily living) data and analysing them further within existing computing elements. In this research, a very recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified, and a time component is included in order to improve accuracy. The modified SPEED, or M-SPEED, is a sequence prediction algorithm which modifies the previous SPEED algorithm by using the time duration of appliances' ON-OFF states to decide the next state. M-SPEED discovered periodic episodes of inhabitant behavior, was trained with the learned episodes, and made decisions based on the obtained knowledge. The results showed that M-SPEED achieves 96.8% prediction accuracy, which is better than other time prediction algorithms like PUBS, ALZ with temporal rules, and the previous SPEED. Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people.
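
    The following toy sketch conveys the duration-aware flavour of the approach: the next appliance event is predicted from the current event together with a coarse ON/OFF duration bin, using simple frequency counts. The event log, duration bins, and prediction rule are illustrative assumptions, not the SPEED/M-SPEED episode-discovery algorithm itself.

```python
from collections import Counter, defaultdict

# (event, duration in minutes spent in the preceding state) -- fabricated log
history = [("lamp_on", 5), ("lamp_off", 120), ("tv_on", 90),
           ("tv_off", 30), ("lamp_on", 6), ("lamp_off", 110), ("tv_on", 85)]

def bucket(minutes):                      # coarse duration bins (assumed)
    return "short" if minutes < 15 else "medium" if minutes < 60 else "long"

counts = defaultdict(Counter)
for (ev, dur), (nxt, _) in zip(history, history[1:]):
    counts[(ev, bucket(dur))][nxt] += 1   # count which event follows each (event, bin)

def predict_next(event, duration):
    c = counts[(event, bucket(duration))]
    return c.most_common(1)[0][0] if c else None

print(predict_next("lamp_off", 115))      # -> 'tv_on' for this toy log
```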

  17. Fludarabine-based versus CHOP-like regimens with or without rituximab in patients with previously untreated indolent lymphoma: a retrospective analysis of safety and efficacy

    Directory of Open Access Journals (Sweden)

    Xu XX

    2013-10-01

    Full Text Available Xiao-xiao Xu,1 Bei Yan,2 Zhen-xing Wang,3 Yong Yu,1 Xiao-xiong Wu,2 Yi-zhuo Zhang1 1Department of Hematology, Tianjin Medical University Cancer Institute and Hospital, Tianjin Key Laboratory of Cancer Prevention and Therapy, Tianjin, 2Department of Hematology, First Affiliated Hospital of Chinese People's Liberation Army General Hospital, Beijing, 3Department of Stomach Oncology, Tianjin Medical University Cancer Institute and Hospital, Key Laboratory of Cancer Prevention and Therapy, Tianjin, People's Republic of China. Abstract: Fludarabine-based regimens and CHOP (doxorubicin, cyclophosphamide, vincristine, prednisone)-like regimens with or without rituximab are the most common treatment modalities for indolent lymphoma. However, there is no clear evidence to date about which chemotherapy regimen should be the proper initial treatment of indolent lymphoma. More recently, the use of fludarabine has raised concerns due to its high number of toxicities, especially hematological toxicity and infectious complications. The present study aimed to retrospectively evaluate both the efficacy and the potential toxicities of the two main regimens (fludarabine-based and CHOP-like regimens) in patients with previously untreated indolent lymphoma. Among a total of 107 patients assessed, 54 patients received fludarabine-based regimens (FLU arm) and 53 received CHOP or CHOPE (doxorubicin, cyclophosphamide, vincristine, prednisone, with or without etoposide) regimens (CHOP arm). The results demonstrated that fludarabine-based regimens could induce significantly improved progression-free survival (PFS) compared with CHOP-like regimens. However, the FLU arm showed overall survival, complete response, and overall response rates similar to those of the CHOP arm. Grade 3–4 neutropenia occurred in 42.6% of the FLU arm and 7.5% of the CHOP arm (P 60 years and presentation of grade 3–4 myelosuppression were the independent factors to infection, and the FLU arm had significantly

  18. Predictive based monitoring of nuclear plant component degradation using support vector regression

    International Nuclear Information System (INIS)

    Agarwal, Vivek; Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.

    2015-01-01

    Nuclear power plants (NPPs) are large installations composed of many active and passive assets. Degradation monitoring of all these assets is an expensive (labor cost) and highly demanding task. In this paper, a framework based on Support Vector Regression (SVR) for online surveillance of critical parameter degradation of NPP components is proposed. In this way, on-time replacement or maintenance of components will prevent potential plant malfunctions and reduce the overall operational cost. In the current work, we apply SVR equipped with a Gaussian kernel function to monitor components. Monitoring includes the one-step-ahead prediction of the component's respective operational quantity using the SVR model, while the SVR model is trained using a set of previously recorded degradation histories of similar components. Predictive capability of the model is evaluated upon arrival of a sensor measurement, which is compared to the component failure threshold. A maintenance decision is based on a fuzzy inference system that utilizes three parameters: (i) the prediction evaluation in the previous steps, (ii) the predicted value of the current step, and (iii) the difference between the current predicted value and the component's failure threshold. The proposed framework will be tested on turbine blade degradation data.
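
    A minimal sketch of the surveillance loop, under assumed data: an SVR model trained on a recorded degradation history produces a one-step-ahead prediction that is compared with a failure threshold. The fuzzy-inference maintenance decision described above is reduced here to a plain threshold check.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
degradation = np.cumsum(np.abs(rng.normal(0.05, 0.02, 200)))   # monotone wear indicator
n_lags, threshold = 5, 9.0                                     # assumed failure threshold

X = np.array([degradation[i:i + n_lags] for i in range(len(degradation) - n_lags)])
y = degradation[n_lags:]
model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)

next_value = model.predict([degradation[-n_lags:]])[0]         # one-step-ahead prediction
if next_value >= threshold:
    print("predicted %.2f >= threshold %.1f: schedule maintenance" % (next_value, threshold))
else:
    print("predicted %.2f below threshold %.1f" % (next_value, threshold))
```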

  19. Composition-Based Prediction of Temperature-Dependent Thermophysical Food Properties: Reevaluating Component Groups and Prediction Models.

    Science.gov (United States)

    Phinney, David Martin; Frelka, John C; Heldman, Dennis Ray

    2017-01-01

    Prediction of temperature-dependent thermophysical properties (thermal conductivity, density, specific heat, and thermal diffusivity) is an important component of process design for food manufacturing. Current models for prediction of thermophysical properties of foods are based on the composition, specifically, fat, carbohydrate, protein, fiber, water, and ash contents, all of which change with temperature. The objectives of this investigation were to reevaluate and improve the prediction expressions for thermophysical properties. Previously published data were analyzed over the temperature range from 10 to 150 °C. These data were analyzed to create a series of relationships between the thermophysical properties and temperature for each food component, as well as to identify the dependence of the thermophysical properties on more specific structural properties of the fats, carbohydrates, and proteins. Results from this investigation revealed that the relationships between the thermophysical properties of the major constituents of foods and temperature can be statistically described by linear expressions, in contrast to the current polynomial models. Links between variability in thermophysical properties and structural properties were observed. Relationships for several thermophysical properties based on more specific constituents have been identified. Distinctions between simple sugars (fructose, glucose, and lactose) and complex carbohydrates (starch, pectin, and cellulose) have been proposed. The relationships between the thermophysical properties and proteins revealed a potential correlation with the molecular weight of the protein. The significance of relating variability in constituent thermophysical properties with structural properties--such as molecular mass--could significantly improve composition-based prediction models and, consequently, the effectiveness of process design. © 2016 Institute of Food Technologists®.
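
    As a worked illustration of the composition-based, linear-in-temperature form advocated above, the sketch below predicts a food's thermal conductivity as a mass-fraction-weighted sum of per-constituent linear expressions; the coefficients and the example composition are fabricated placeholders, not the study's regression results.

```python
def thermal_conductivity(composition, temp_c):
    # assumed linear models k_i(T) = a_i + b_i * T for each constituent (W/m.K, T in deg C)
    coeff = {"water": (0.57, 0.0018), "protein": (0.18, 0.0012),
             "fat": (0.18, -0.0003), "carbohydrate": (0.20, 0.0014),
             "ash": (0.33, 0.0014)}
    return sum(x * (coeff[c][0] + coeff[c][1] * temp_c)
               for c, x in composition.items())

milk = {"water": 0.88, "protein": 0.03, "fat": 0.04, "carbohydrate": 0.04, "ash": 0.01}
print(thermal_conductivity(milk, 25.0))   # predicted k at 25 degC (W/m.K)
```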

  20. Multi-Objective Predictive Balancing Control of Battery Packs Based on Predictive Current

    Directory of Open Access Journals (Sweden)

    Wenbiao Li

    2016-04-01

    Full Text Available Various balancing topologies and control methods have been proposed for the inconsistency problem of battery packs. However, these strategies focus on only a single objective, ignore the mutual interaction among various factors, and are based only on the external manifestations of battery pack inconsistency, such as voltage balancing and state of charge (SOC) balancing. To solve these problems, multi-objective predictive balancing control (MOPBC) based on predictive current is proposed in this paper: during the driving process of an electric vehicle, predictive control is used to predict the battery pack output current at the next time step. Based on this information, the impact of the output current on battery pack temperature can be obtained. This influence is then added to the battery pack balancing control, so that the present degradation, temperature, and SOC imbalances are balanced automatically through the change of the output current at the next moment. Based on MOPBC, a simulation model of the balancing circuit is built with four cells in Matlab/Simulink. The simulation results show that MOPBC is not only better than the other traditional balancing control strategies but also reduces the energy loss in the balancing process.

  1. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location for a given protein when only the amino acid sequence of the protein is known. Although many efforts have been made to predict subcellular location from sequence information only, there is a need for further research to improve the accuracy of prediction. Results A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with the previously reported approaches in terms of prediction accuracies, as empirical results indicate. The code for the software is available upon request.
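
    The base learner can be pictured with the following sketch: each subcellular location class is modelled by first-order amino-acid transition probabilities and a query sequence is assigned to the class with the highest log-likelihood. The toy sequences are invented, and the hierarchical-ensemble construction of HensBC is not reproduced here.

```python
import math

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"

def train_markov(sequences):
    # first-order transition counts with add-one smoothing over the amino-acid alphabet
    counts = {a: {b: 1.0 for b in ALPHABET} for a in ALPHABET}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    return {a: {b: counts[a][b] / sum(counts[a].values()) for b in ALPHABET}
            for a in ALPHABET}

def log_likelihood(seq, model):
    return sum(math.log(model.get(a, {}).get(b, 1e-9)) for a, b in zip(seq, seq[1:]))

train = {"cytoplasm": ["MKKLLPTAAAGLLLLAA", "MKTAYIAKQR"],          # toy training sets
         "membrane":  ["MFILLLVVVILLAVVAL", "MLLIVLLFLA"]}
models = {c: train_markov(seqs) for c, seqs in train.items()}
query = "MKKLIPTAAAG"
print(max(models, key=lambda c: log_likelihood(query, models[c])))  # predicted location
```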

  2. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  3. State-based Communication on Time-predictable Multicore Processors

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Schoeberl, Martin; Sparsø, Jens

    2016-01-01

    Some real-time systems use a form of task-to-task communication called state-based or sample-based communication that does not impose any flow control among the communicating tasks. The concept is similar to a shared variable, where a reader may read the same value multiple times or may not read...... a given value at all. This paper explores time-predictable implementations of state-based communication in network-on-chip based multicore platforms through five algorithms. With the presented analysis of the implemented algorithms, the communicating tasks of one core can be scheduled independently...... of tasks on other cores. Assuming a specific time-predictable multicore processor, we evaluate how the read and write primitives of the five algorithms contribute to the worst-case execution time of the communicating tasks. Each of the five algorithms has specific capabilities that make them suitable...

  4. Predicting tissue-specific expressions based on sequence characteristics

    KAUST Repository

    Paik, Hyojung; Ryu, Tae Woo; Heo, Hyoungsam; Seo, Seungwon; Lee, Doheon; Hur, Cheolgoo

    2011-01-01

    In multicellular organisms, including humans, understanding expression specificity at the tissue level is essential for interpreting protein function, such as tissue differentiation. We developed a prediction approach via generated sequence features from overrepresented patterns in housekeeping (HK) and tissue-specific (TS) genes to classify TS expression in humans. Using TS domains and transcriptional factor binding sites (TFBSs), sequence characteristics were used as indices of expressed tissues in a Random Forest algorithm by scoring exclusive patterns considering the biological intuition; TFBSs regulate gene expression, and the domains reflect the functional specificity of a TS gene. Our proposed approach displayed better performance than previous attempts and was validated using computational and experimental methods.

  5. Predicting tissue-specific expressions based on sequence characteristics

    KAUST Repository

    Paik, Hyojung

    2011-04-30

    In multicellular organisms, including humans, understanding expression specificity at the tissue level is essential for interpreting protein function, such as tissue differentiation. We developed a prediction approach via generated sequence features from overrepresented patterns in housekeeping (HK) and tissue-specific (TS) genes to classify TS expression in humans. Using TS domains and transcriptional factor binding sites (TFBSs), sequence characteristics were used as indices of expressed tissues in a Random Forest algorithm by scoring exclusive patterns considering the biological intuition; TFBSs regulate gene expression, and the domains reflect the functional specificity of a TS gene. Our proposed approach displayed better performance than previous attempts and was validated using computational and experimental methods.
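
    A minimal sketch of the classification step, with fabricated features: binary indicators for assumed TFBS patterns and domain annotations are fed to a Random Forest that separates tissue-specific from housekeeping expression. The data and labels are synthetic stand-ins, not the features or genes used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_genes, n_features = 200, 30                # e.g. 20 assumed TFBS flags + 10 domain flags
X = rng.integers(0, 2, size=(n_genes, n_features))
y = (X[:, :5].sum(axis=1) > 2).astype(int)   # synthetic rule standing in for TS labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.score(X, y))
print(clf.feature_importances_[:5])          # the informative flags should dominate
```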

  6. Predicting footbridge vibrations using a probability-based approach

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2017-01-01

    Vibrations in footbridges may be problematic as excessive vibrations may occur as a result of actions of pedestrians. Design-stage predictions of levels of footbridge vibration to the action of a pedestrian are useful and have been employed for many years based on a deterministic approach to mode...

  7. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  8. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques. The first comes from individual analysis and characterization of real char types using an automated program; the second from char types predicted from data collected during automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and burnout data from re-firing the chars in a drop tube furnace operating at 1300°C, 5% oxygen, across several residence times. The improved agreement between the ChB model and the DTF experimental data proved that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  9. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  10. Blind Test of Physics-Based Prediction of Protein Structures

    Science.gov (United States)

    Shell, M. Scott; Ozkan, S. Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A.

    2009-01-01

    We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences. PMID:19186130

  11. EMD-Based Predictive Deep Belief Network for Time Series Prediction: An Application to Drought Forecasting

    Directory of Open Access Journals (Sweden)

    Norbert A. Agana

    2018-02-01

    Full Text Available Drought is a stochastic natural feature that arises due to an intense and persistent shortage of precipitation. Its impact is mostly manifested as agricultural and hydrological droughts following an initial meteorological phenomenon. Drought prediction is essential because it can aid in the preparedness and impact-related management of its effects. This study considers the drought forecasting problem by developing a hybrid predictive model using a denoised empirical mode decomposition (EMD) and a deep belief network (DBN). The proposed method first decomposes the data into several intrinsic mode functions (IMFs) using EMD, and a reconstruction of the original data is obtained by considering only relevant IMFs. Detrended fluctuation analysis (DFA) was applied to each IMF to determine the threshold for robust denoising performance. Based on their scaling exponents, irrelevant intrinsic mode functions are identified and suppressed. The proposed method was applied to predict different time scale drought indices across the Colorado River basin using a standardized streamflow index (SSI) as the drought index. The results obtained using the proposed method were compared with standard methods such as multilayer perceptron (MLP) and support vector regression (SVR). The proposed hybrid model showed improvement in prediction accuracy, especially for multi-step ahead predictions.
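
    The denoising step can be sketched as below, assuming the PyEMD package (installable as EMD-signal) provides the decomposition: the series is split into IMFs and rebuilt from all but the noisiest one. Dropping the first IMF is a stand-in for the DFA-based selection described above, and the deep belief network forecaster is omitted.

```python
import numpy as np
from PyEMD import EMD   # assumes the EMD-signal (PyEMD) package is installed

t = np.linspace(0, 10, 500)
rng = np.random.default_rng(0)
# stand-in drought-index series: slow signal + faster component + noise
ssi = np.sin(t) + 0.3 * np.sin(8 * t) + 0.1 * rng.normal(size=t.size)

imfs = EMD().emd(ssi)                 # intrinsic mode functions, highest frequency first
denoised = imfs[1:].sum(axis=0)       # drop the noisiest IMF (stand-in for DFA selection)
print(imfs.shape, denoised.shape)
```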

  12. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
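
    The rule-ensemble idea can be approximated as follows: conjunctive rules are read off a shallow decision tree and used as binary predictors in a sparse logistic regression. This is an illustrative approximation on synthetic data, not the RuleFit algorithm itself or the Penninx et al. dataset.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                        # six hypothetical cues
y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)  # outcome driven by two cues

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
leaves = tree.apply(X)                               # leaf membership = conjunctive rules
R = np.array([leaves == leaf for leaf in np.unique(leaves)]).T.astype(int)
rules_model = LogisticRegression(penalty="l1", solver="liblinear").fit(R, y)
print(rules_model.coef_)                             # weights of the extracted rules
```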

  13. Prediction of Banking Systemic Risk Based on Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Shouwei Li

    2013-01-01

    Full Text Available Banking systemic risk is a complex nonlinear phenomenon, and the recent financial crisis has shed light on the importance of safeguarding financial stability. In view of the complex nonlinear characteristics of banking systemic risk, in this paper we apply the support vector machine (SVM) to the prediction of banking systemic risk in an attempt to suggest a new model with better explanatory power and stability. We conduct a case study of an SVM-based prediction model for Chinese banking systemic risk and find that the experimental results show the support vector machine to be an efficient method in such a case.

  14. Prediction-Based Control for Nonlinear Systems with Input Delay

    Directory of Open Access Journals (Sweden)

    I. Estrada-Sánchez

    2017-01-01

    Full Text Available This work has two primary objectives. First, it presents a state prediction strategy for a class of nonlinear Lipschitz systems subject to constant time delay in the input signal. As a result of a suitable change of variable, the state predictor asymptotically provides the value of the state τ units of time ahead. Second, it proposes a solution to the stabilization and trajectory tracking problems for the considered class of systems using predicted states. The predictor-controller convergence is proved by considering a complete Lyapunov functional. The proposed predictor-based controller strategy is evaluated using numerical simulations.

  15. Rate-Based Model Predictive Control of Turbofan Engine Clearance

    Science.gov (United States)

    DeCastro, Jonathan A.

    2006-01-01

    An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.

  16. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1999-01-01

    This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been proposed to be extended for the control...... of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...... on an efficient quasi-Newton algorithm. The performance is demonstrated on a pneumatic servo system....

  17. Cost-Effectiveness Model for Chemoimmunotherapy Options in Patients with Previously Untreated Chronic Lymphocytic Leukemia Unsuitable for Full-Dose Fludarabine-Based Therapy.

    Science.gov (United States)

    Becker, Ursula; Briggs, Andrew H; Moreno, Santiago G; Ray, Joshua A; Ngo, Phuong; Samanta, Kunal

    2016-06-01

    To evaluate the cost-effectiveness of treatment with anti-CD20 monoclonal antibody obinutuzumab plus chlorambucil (GClb) in untreated patients with chronic lymphocytic leukemia unsuitable for full-dose fludarabine-based therapy. A Markov model was used to assess the cost-effectiveness of GClb versus other chemoimmunotherapy options. The model comprised three mutually exclusive health states: "progression-free survival (with/without therapy)", "progression (refractory/relapsed lines)", and "death". Each state was assigned a health utility value representing patients' quality of life and a specific cost value. Comparisons between GClb and rituximab plus chlorambucil or only chlorambucil were performed using patient-level clinical trial data; other comparisons were performed via a network meta-analysis using information gathered in a systematic literature review. To support the model, a utility elicitation study was conducted from the perspective of the UK National Health Service. There was good agreement between the model-predicted progression-free and overall survival and that from the CLL11 trial. On incorporating data from the indirect treatment comparisons, it was found that GClb was cost-effective with a range of incremental cost-effectiveness ratios below a threshold of £30,000 per quality-adjusted life-year gained, and remained so during deterministic and probabilistic sensitivity analyses under various scenarios. GClb was estimated to increase both quality-adjusted life expectancy and treatment costs compared with several commonly used therapies, with incremental cost-effectiveness ratios below commonly referenced UK thresholds. This article offers a real example of how to combine direct and indirect evidence in a cost-effectiveness analysis of oncology drugs. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
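
    A minimal sketch of such a three-state Markov model is given below: a cohort is propagated through "progression-free", "progressed", and "dead" states while discounted costs and QALYs accumulate for two arms, yielding an incremental cost-effectiveness ratio. All transition probabilities, costs, utilities, and cycle settings are fabricated placeholders, not the CLL11-based inputs.

```python
import numpy as np

def run_arm(P, cost, utility, cycles=120, disc=0.035 / 12):
    state = np.array([1.0, 0.0, 0.0])               # whole cohort starts progression-free
    total_cost = total_qaly = 0.0
    for k in range(cycles):                         # monthly cycles (assumed)
        d = 1.0 / (1.0 + disc) ** k                 # discount factor for this cycle
        total_cost += d * state @ cost
        total_qaly += d * state @ utility / 12.0
        state = state @ P                           # move cohort to next cycle's states
    return total_cost, total_qaly

# rows/columns: progression-free, progressed, dead
P_new = np.array([[0.96, 0.03, 0.01], [0.0, 0.95, 0.05], [0.0, 0.0, 1.0]])
P_old = np.array([[0.93, 0.05, 0.02], [0.0, 0.95, 0.05], [0.0, 0.0, 1.0]])
cost = np.array([2500.0, 1800.0, 0.0])              # per-state monthly cost
util = np.array([0.80, 0.60, 0.0])                  # per-state health utility

c1, q1 = run_arm(P_new, cost, util)
c0, q0 = run_arm(P_old, cost, util)
print("ICER: %.0f per QALY gained" % ((c1 - c0) / (q1 - q0)))
```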

  18. Prediction of potential drug targets based on simple sequence properties

    Directory of Open Access Journals (Sweden)

    Lai Luhua

    2007-09-01

    Full Text Available Abstract Background During the past decades, research and development in drug discovery have attracted much attention and efforts. However, only 324 drug targets are known for clinical drugs up to now. Identifying potential drug targets is the first step in the process of modern drug discovery for developing novel therapeutic agents. Therefore, the identification and validation of new and effective drug targets are of great value for drug discovery in both academia and pharmaceutical industry. If a protein can be predicted in advance for its potential application as a drug target, the drug discovery process targeting this protein will be greatly speeded up. In the current study, based on the properties of known drug targets, we have developed a sequence-based drug target prediction method for fast identification of novel drug targets. Results Based on simple physicochemical properties extracted from protein sequences of known drug targets, several support vector machine models have been constructed in this study. The best model can distinguish currently known drug targets from non drug targets at an accuracy of 84%. Using this model, potential protein drug targets of human origin from Swiss-Prot were predicted, some of which have already attracted much attention as potential drug targets in pharmaceutical research. Conclusion We have developed a drug target prediction method based solely on protein sequence information without the knowledge of family/domain annotation, or the protein 3D structure. This method can be applied in novel drug target identification and validation, as well as genome scale drug target predictions.

  19. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

    Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, measured by the number of errors. Additionally, the group method of data handling is used to establish the driver's MWL predictive model based on a subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX score and the number of errors are positively correlated, and the predictive model demonstrates its validity with an R2 value of 0.745. The proposed model is expected to provide new drivers with a reference value of their MWL from the physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as to improve the driver's work performance.

  20. Transcription factor binding sites prediction based on modified nucleosomes.

    Directory of Open Access Journals (Sweden)

    Mohammad Talebzadeh

    Full Text Available In computational methods, position weight matrices (PWMs) are commonly applied for transcription factor binding site (TFBS) prediction. Although these matrices are more accurate than simple consensus sequences to predict actual binding sites, they usually produce a large number of false positive (FP) predictions and so are impoverished sources of information. Several studies have employed additional sources of information such as sequence conservation or the vicinity to transcription start sites to distinguish true binding regions from random ones. Recently, the spatial distribution of modified nucleosomes has been shown to be associated with different promoter architectures. These aligned patterns can facilitate DNA accessibility for transcription factors. We hypothesize that using data from these aligned and periodic patterns can improve the performance of binding region prediction. In this study, we propose two effective features, "modified nucleosomes neighboring" and "modified nucleosomes occupancy", to decrease FP in binding site discovery. Based on these features, we designed a logistic regression classifier which estimates the probability of a region as a TFBS. Our model learned each feature based on Sp1 binding sites on Chromosome 1 and was tested on the other chromosomes in human CD4+T cells. In this work, we investigated 21 histone modifications and found that only 8 out of 21 marks are strongly correlated with transcription factor binding regions. To prove that these features are not specific to Sp1, we combined the logistic regression classifier with the PWM, and created a new model to search TFBSs on the genome. We tested the model using transcription factors MAZ, PU.1 and ELF1 and compared the results to those using only the PWM. The results show that our model can predict transcription factor binding regions more successfully. The relative simplicity of the model and capability of integrating other features make it a superior method

  1. The Attribute for Hydrocarbon Prediction Based on Attenuation

    International Nuclear Information System (INIS)

    Hermana, Maman; Harith, Z Z T; Sum, C W; Ghosh, D P

    2014-01-01

    Hydrocarbon prediction is a crucial issue in the oil and gas industry. Currently, the prediction of pore fluid and lithology is based on amplitude interpretation, which has the potential to produce pitfalls under certain reservoir conditions. Motivated by this fact, this work is directed at finding other attributes that can be used to reduce the pitfalls of amplitude interpretation. Some seismic attributes were examined, and studies showed that the attenuation attribute is a better attribute for hydrocarbon prediction. Theoretically, the attenuation mechanism of wave propagation is associated with the movement of fluid in the pore; hence the existence of hydrocarbon in the pore will be represented directly by an attenuation attribute. In this paper we evaluated the feasibility of the quality factor ratio of P-wave and S-wave (Qp/Qs) as a hydrocarbon indicator using well data, and we also developed a new attribute based on attenuation for hydrocarbon prediction -- the Normalized Energy Reduction Stack (NERS). To achieve these goals, this work was divided into three main parts: estimating Qp/Qs from well log data, testing the new attribute on synthetic data, and applying the new attribute to real data from the Malay Basin. The results show that Qp/Qs is better than Poisson's ratio and Lambda over Mu as a hydrocarbon indicator. The curve, trend analysis, and contrast of Qp/Qs are more powerful at distinguishing pore fluid than Poisson's ratio and Lambda over Mu. The NERS attribute was successful in distinguishing hydrocarbon from brine on synthetic data. Applied to real data from the Malay Basin, the NERS attribute is qualitatively conformable with the structure and location where gas is predicted. The quantitative interpretation of this attribute for hydrocarbon prediction needs to be investigated further.

  2. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Chien-Ho Ko

    2013-01-01

    Full Text Available Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required in FL and NNs. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.

  3. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    Science.gov (United States)

    Ko, Chien-Ho

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required in FL and NNs. EFNNs encode FL and NNs using floating-point numbers to shorten the length of a string. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
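    A minimal sketch, under stated assumptions, of the central mechanism described above: a genetic algorithm that evolves a float-encoded parameter vector (here the weights of a tiny one-hidden-layer network) with a multi-cut-point crossover. The fuzzy layer is omitted, and the data, network size, and GA settings are hypothetical, so this is an illustration of the encoding idea rather than the authors' EFNN.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy data: predict a performance score from two normalized inputs (hypothetical).
        X = rng.uniform(0, 1, (22, 2))
        y = 0.6 * X[:, 0] + 0.4 * X[:, 1]

        H = 4                               # hidden units
        n_genes = 2 * H + H + H + 1         # W1 (2xH), b1 (H), w2 (H), b2 (1), float-encoded

        def decode(c):
            W1 = c[:2 * H].reshape(2, H)
            b1 = c[2 * H:3 * H]
            w2 = c[3 * H:4 * H]
            b2 = c[-1]
            return W1, b1, w2, b2

        def fitness(c):
            W1, b1, w2, b2 = decode(c)
            pred = np.tanh(X @ W1 + b1) @ w2 + b2
            return -np.mean((pred - y) ** 2)          # higher is better

        def crossover(a, b, cuts=3):
            # Multi-cut-point crossover: alternate segments of the two parent chromosomes.
            points = sorted(rng.choice(np.arange(1, n_genes), cuts, replace=False))
            child, take_b, prev = a.copy(), False, 0
            for p in list(points) + [n_genes]:
                if take_b:
                    child[prev:p] = b[prev:p]
                take_b, prev = not take_b, p
            return child

        pop = rng.normal(0, 1, (40, n_genes))
        for gen in range(200):
            scores = np.array([fitness(c) for c in pop])
            parents = pop[np.argsort(scores)[::-1][:10]]      # truncation selection
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(10)], parents[rng.integers(10)]
                child = crossover(a, b)
                child += rng.normal(0, 0.05, n_genes) * (rng.uniform(size=n_genes) < 0.1)  # mutation
                children.append(child)
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(c) for c in pop])]
        print("best MSE:", -fitness(best))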

  4. Concomitant prediction of function and fold at the domain level with GO-based profiles.

    Science.gov (United States)

    Lopez, Daniel; Pazos, Florencio

    2013-01-01

    Predicting the function of newly sequenced proteins is crucial due to the pace at which these raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains, and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves but it is due to the fact that, in the databases of functional annotations these methods use for transferring functional terms to new proteins, these annotations are done on a whole-chain basis. Nevertheless, domains are the basic evolutionary and often functional units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent from each other. For that reason, resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources, are required. We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated to the predicted function.

  5. Sequence-based prediction of protein protein interaction using a deep-learning algorithm.

    Science.gov (United States)

    Sun, Tanlin; Zhou, Bo; Lai, Luhua; Pei, Jianfeng

    2017-05-25

    Protein-protein interactions (PPIs) are critical for many biological processes. It is therefore important to develop accurate high-throughput methods for identifying PPI to better understand protein function, disease occurrence, and therapy design. Though various computational methods for predicting PPI have been developed, their robustness for prediction with external datasets is unknown. Deep-learning algorithms have achieved successful results in diverse areas, but their effectiveness for PPI prediction has not been tested. We used a stacked autoencoder, a type of deep-learning algorithm, to study the sequence-based PPI prediction. The best model achieved an average accuracy of 97.19% with 10-fold cross-validation. The prediction accuracies for various external datasets ranged from 87.99% to 99.21%, which are superior to those achieved with previous methods. To our knowledge, this research is the first to apply a deep-learning algorithm to sequence-based PPI prediction, and the results demonstrate its potential in this field.

  6. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.

  7. Empirical comparison of web-based antimicrobial peptide prediction tools.

    Science.gov (United States)

    Gabere, Musa Nur; Noble, William Stafford

    2017-07-01

    Antimicrobial peptides (AMPs) are innate immune molecules that exhibit activities against a range of microbes, including bacteria, fungi, viruses and protozoa. Recent increases in microbial resistance against current drugs have led to a concomitant increase in the need for novel antimicrobial agents. Over the last decade, a number of AMP prediction tools have been designed and made freely available online. These AMP prediction tools show potential to discriminate AMPs from non-AMPs, but the relative quality of the predictions produced by the various tools is difficult to quantify. We compiled two sets of AMP and non-AMP peptides, separated into three categories: antimicrobial, antibacterial and bacteriocins. Using these benchmark data sets, we carried out a systematic evaluation of ten publicly available AMP prediction methods. Among the six general AMP prediction tools (ADAM, CAMPR3(RF), CAMPR3(SVM), MLAMP, DBAASP and MLAMP), we find that CAMPR3(RF) provides a statistically significant improvement in performance, as measured by the area under the receiver operating characteristic (ROC) curve, relative to the other five methods. Surprisingly, for antibacterial prediction, the original AntiBP method significantly outperforms its successor, AntiBP2, based on one benchmark dataset. The two bacteriocin prediction tools, BAGEL3 and BACTIBASE, both provide very good performance, and BAGEL3 outperforms its predecessor, BACTIBASE, on the larger of the two benchmarks. gaberemu@ngha.med.sa or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
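    A small sketch of the kind of evaluation described above: comparing two hypothetical prediction tools on a benchmark by the area under the ROC curve. The labels and scores are synthetic placeholders, not the published benchmark data.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)

        # Hypothetical benchmark: 1 = AMP, 0 = non-AMP, with scores from two tools.
        labels = rng.integers(0, 2, 500)
        scores_tool_a = labels * 0.6 + rng.uniform(0, 1, 500) * 0.7   # stronger tool
        scores_tool_b = labels * 0.3 + rng.uniform(0, 1, 500)          # weaker tool

        for name, scores in [("tool A", scores_tool_a), ("tool B", scores_tool_b)]:
            print(name, "AUC =", round(roc_auc_score(labels, scores), 3))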

  8. Deep-Learning-Based Drug-Target Interaction Prediction.

    Science.gov (United States)

    Wen, Ming; Zhang, Zhimin; Niu, Shaoyu; Sha, Haozhi; Yang, Ruihan; Yun, Yonghuan; Lu, Hongmei

    2017-04-07

    Identifying interactions between known drugs and targets is a major challenge in drug repositioning. In silico prediction of drug-target interaction (DTI) can speed up the expensive and time-consuming experimental work by providing the most potent DTIs. In silico prediction of DTI can also provide insights about the potential drug-drug interaction and promote the exploration of drug side effects. Traditionally, the performance of DTI prediction depends heavily on the descriptors used to represent the drugs and the target proteins. In this paper, to accurately predict new DTIs between approved drugs and targets without separating the targets into different classes, we developed a deep-learning-based algorithmic framework named DeepDTIs. It first abstracts representations from raw input descriptors using unsupervised pretraining and then applies known label pairs of interaction to build a classification model. Compared with other methods, it is found that DeepDTIs reaches or outperforms other state-of-the-art methods. The DeepDTIs can be further used to predict whether a new drug targets to some existing targets or whether a new target interacts with some existing drugs.

  9. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Full Text Available Dissolved oxygen (DO) is oxygen dissolved in water, which is an important factor for aquaculture. A BP neural network method combining purelin, logsig, and tansig activation functions is proposed for predicting dissolved oxygen in aquaculture. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used for the experiments; these ponds were located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the latter 3 days are used for testing. Compared with common prediction models, curve fitting (CF), autoregression (AR), grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, with all predicted values within the 5% error limit, which can meet the needs of practical applications, followed by AR, GM, SVM, and CF. The prediction model can help to improve the water quality monitoring level of aquaculture, which will prevent the deterioration of water quality and the outbreak of disease.
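    A minimal sketch of the same style of model, assuming synthetic pond data: a small feed-forward network with a tanh hidden layer (standing in for tansig) and a linear output (standing in for purelin), implemented with scikit-learn rather than the authors' BP code. Feature choices and coefficients below are hypothetical.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        # Hypothetical hourly records: water temperature, pH, salinity -> dissolved oxygen (mg/L).
        X = np.column_stack([rng.uniform(24, 32, 168),      # 7 days of training data
                             rng.uniform(7.0, 8.5, 168),
                             rng.uniform(15, 25, 168)])
        y = 14.6 - 0.25 * X[:, 0] + 0.5 * (X[:, 1] - 7.5) + rng.normal(0, 0.2, 168)

        # tanh hidden layer plays the role of tansig; the linear output acts like purelin.
        model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                             max_iter=5000, random_state=0).fit(X, y)

        X_test = np.array([[28.0, 7.8, 20.0]])
        print("predicted DO (mg/L):", round(model.predict(X_test)[0], 2))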

  10. Network-based prediction and analysis of HIV dependency factors.

    Directory of Open Access Journals (Sweden)

    T M Murali

    2011-09-01

    Full Text Available HIV Dependency Factors (HDFs) are a class of human proteins that are essential for HIV replication, but are not lethal to the host cell when silenced. Three previous genome-wide RNAi experiments identified HDF sets with little overlap. We combine data from these three studies with a human protein interaction network to predict new HDFs, using an intuitive algorithm called SinkSource and four other algorithms published in the literature. Our algorithm achieves high precision and recall upon cross validation, as do the other methods. A number of HDFs that we predict are known to interact with HIV proteins. They belong to multiple protein complexes and biological processes that are known to be manipulated by HIV. We also demonstrate that many predicted HDF genes show significantly different programs of expression in early response to SIV infection in two non-human primate species that differ in AIDS progression. Our results suggest that many HDFs are yet to be discovered and that they have potential value as prognostic markers to determine pathological outcome and the likelihood of AIDS development. More generally, if multiple genome-wide gene-level studies have been performed at independent labs to study the same biological system or phenomenon, our methodology is applicable to interpret these studies simultaneously in the context of molecular interaction networks and to ask if they reinforce or contradict each other.
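    A minimal label-propagation sketch in the spirit of the SinkSource idea: known positives are fixed at 1, known negatives at 0, and unlabeled nodes iteratively take the average score of their neighbors. The toy network, node labels, and convergence criterion are hypothetical and far simpler than the published algorithm.

        import numpy as np

        # Toy undirected protein interaction network as an edge set (hypothetical IDs).
        edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F"), ("D", "F")}
        nodes = sorted({n for e in edges for n in e})
        neighbors = {n: set() for n in nodes}
        for u, v in edges:
            neighbors[u].add(v)
            neighbors[v].add(u)

        positives = {"A"}      # known HDFs (score fixed at 1)
        negatives = {"F"}      # known non-HDFs (score fixed at 0)

        score = {n: (1.0 if n in positives else 0.0) for n in nodes}
        for _ in range(100):                       # iterate until (approximately) converged
            new = dict(score)
            for n in nodes:
                if n in positives or n in negatives:
                    continue                       # labeled nodes act as sources and sinks
                new[n] = np.mean([score[m] for m in neighbors[n]])
            score = new

        print({n: round(s, 3) for n, s in score.items()})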

  11. Morphology-based prediction of osteogenic differentiation potential of human mesenchymal stem cells.

    Directory of Open Access Journals (Sweden)

    Fumiko Matsuoka

    Full Text Available Human bone marrow mesenchymal stem cells (hBMSCs are widely used cell source for clinical bone regeneration. Achieving the greatest therapeutic effect is dependent on the osteogenic differentiation potential of the stem cells to be implanted. However, there are still no practical methods to characterize such potential non-invasively or previously. Monitoring cellular morphology is a practical and non-invasive approach for evaluating osteogenic potential. Unfortunately, such image-based approaches had been historically qualitative and requiring experienced interpretation. By combining the non-invasive attributes of microscopy with the latest technology allowing higher throughput and quantitative imaging metrics, we studied the applicability of morphometric features to quantitatively predict cellular osteogenic potential. We applied computational machine learning, combining cell morphology features with their corresponding biochemical osteogenic assay results, to develop prediction model of osteogenic differentiation. Using a dataset of 9,990 images automatically acquired by BioStation CT during osteogenic differentiation culture of hBMSCs, 666 morphometric features were extracted as parameters. Two commonly used osteogenic markers, alkaline phosphatase (ALP activity and calcium deposition were measured experimentally, and used as the true biological differentiation status to validate the prediction accuracy. Using time-course morphological features throughout differentiation culture, the prediction results highly correlated with the experimentally defined differentiation marker values (R>0.89 for both marker predictions. The clinical applicability of our morphology-based prediction was further examined with two scenarios: one using only historical cell images and the other using both historical images together with the patient's own cell images to predict a new patient's cellular potential. The prediction accuracy was found to be greatly enhanced

  12. Predicting online ratings based on the opinion spreading process

    Science.gov (United States)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is always a challenging issue and has drawn much attention. In this paper, we present a rating prediction method by combining the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method could produce a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method could further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, the MAE and RMSE are improved by 11.26% and 8.84% on Movielens and by 13.49% and 10.52% on Netflix, respectively, compared to the item average method.
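    A simplified sketch of the idea, under stated assumptions: user similarity is computed from the amount of "opinion" diffused through co-rated items, damped by a degree exponent λ, and predictions are similarity-weighted averages. The tiny rating matrix and the exact diffusion formula are illustrative, not the authors' formulation.

        import numpy as np

        # Hypothetical user-item rating matrix (0 = unrated).
        R = np.array([[5, 3, 0, 1],
                      [4, 0, 0, 1],
                      [1, 1, 0, 5],
                      [0, 1, 5, 4]], dtype=float)
        lam = 0.5                                   # tunable degree exponent

        rated = R > 0
        user_deg = rated.sum(axis=1)                # number of items rated by each user
        item_deg = rated.sum(axis=0)                # number of users who rated each item

        # Opinion-spreading style similarity: resource spreads from a user to co-rated items
        # and back to other raters; degrees are damped by lambda.
        sim = np.zeros((R.shape[0], R.shape[0]))
        for u in range(R.shape[0]):
            for v in range(R.shape[0]):
                common = rated[u] & rated[v]
                sim[u, v] = np.sum(1.0 / (item_deg[common] ** lam)) / (user_deg[u] ** lam + 1e-12)
        np.fill_diagonal(sim, 0.0)

        def predict(u, i):
            """Similarity-weighted average of other users' ratings for item i."""
            raters = np.where(rated[:, i])[0]
            w = sim[u, raters]
            return float(np.dot(w, R[raters, i]) / (w.sum() + 1e-12))

        print("predicted rating for user 0, item 2:", round(predict(0, 2), 2))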

  13. PNN-based Rockburst Prediction Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-07-01

    Full Text Available Rock burst is one of the main engineering geological problems significantly threatening the safety of construction. Prediction of rock burst is always an important issue concerning the safety of workers and equipment in tunnels. In this paper, a novel PNN-based rock burst prediction model is proposed to determine whether rock burst will happen in underground rock projects and how intense it will be. The probabilistic neural network (PNN) is developed based on Bayesian criteria of multivariate pattern classification. Because PNN has the advantages of low training complexity, high stability, quick convergence, and simple construction, it can be well applied to the prediction of rock burst. Some main control factors, such as the rock's maximum tangential stress, uniaxial compressive strength, uniaxial tensile strength, and elastic energy index, are chosen as the characteristic vector of the PNN. The PNN model is obtained by training on data sets of rock burst samples from underground rock projects at home and abroad, and other samples are tested with the model. The testing results agree with the practical records. In addition, two real-world applications are used to verify the proposed method. The prediction results are the same as those of existing methods and match what actually happened on site, which verifies the effectiveness and applicability of the proposed work.
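    A minimal Parzen-window sketch of a probabilistic neural network classifier over a few control-factor features. The training samples, feature definitions, and kernel width are hypothetical placeholders used only to show the mechanics of class-wise Gaussian kernel averaging.

        import numpy as np

        # Hypothetical training samples: [stress ratio, brittleness ratio, elastic energy index]
        # with rockburst intensity classes 0 = none, 1 = moderate, 2 = strong.
        X = np.array([[0.15, 15.0, 2.0], [0.20, 18.0, 2.5], [0.45, 22.0, 5.0],
                      [0.50, 25.0, 5.5], [0.75, 30.0, 8.0], [0.80, 28.0, 9.0]])
        y = np.array([0, 0, 1, 1, 2, 2])
        sigma = 0.5                                  # kernel width (smoothing parameter)

        def pnn_predict(x):
            """PNN rule: class score = average Gaussian kernel to that class's samples."""
            Xn = (X - X.mean(0)) / X.std(0)          # normalize features
            xn = (x - X.mean(0)) / X.std(0)
            scores = []
            for c in np.unique(y):
                d2 = np.sum((Xn[y == c] - xn) ** 2, axis=1)
                scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
            return int(np.argmax(scores)), scores

        cls, scores = pnn_predict(np.array([0.55, 24.0, 6.0]))
        print("predicted intensity class:", cls)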

  14. Link Prediction in Evolving Networks Based on Popularity of Nodes.

    Science.gov (United States)

    Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian

    2017-08-02

    Link prediction aims to uncover the underlying relationship behind networks, which could be utilized to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in networks. Most classical static-structure-based methods ignore the temporal aspects of networks; limited by time-varying features, such approaches perform poorly in evolving networks. In this paper, we propose a hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes have a much higher probability of attracting future links. Then a novel approach named the popularity based structural perturbation method (PBSPM) and its fast algorithm are proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.

  15. GIS-BASED PREDICTION OF HURRICANE FLOOD INUNDATION

    Energy Technology Data Exchange (ETDEWEB)

    JUDI, DAVID [Los Alamos National Laboratory; KALYANAPU, ALFRED [Los Alamos National Laboratory; MCPHERSON, TIMOTHY [Los Alamos National Laboratory; BERSCHEID, ALAN [Los Alamos National Laboratory

    2007-01-17

    A simulation environment is being developed for the prediction and analysis of the inundation consequences for infrastructure systems from extreme flood events. This decision support architecture includes a GIS-based environment for model input development, simulation integration tools for meteorological, hydrologic, and infrastructure system models and damage assessment tools for infrastructure systems. The GIS-based environment processes digital elevation models (30-m from the USGS), land use/cover (30-m NLCD), stream networks from the National Hydrography Dataset (NHD) and soils data from the NRCS (STATSGO) to create stream network, subbasins, and cross-section shapefiles for drainage basins selected for analysis. Rainfall predictions are made by a numerical weather model and ingested in gridded format into the simulation environment. Runoff hydrographs are estimated using Green-Ampt infiltration excess runoff prediction and a 1D diffusive wave overland flow routing approach. The hydrographs are fed into the stream network and integrated in a dynamic wave routing module using the EPA's Storm Water Management Model (SWMM) to predict flood depth. The flood depths are then transformed into inundation maps and exported for damage assessment. Hydrologic/hydraulic results are presented for Tropical Storm Allison.
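    A minimal sketch of one component of the workflow described above, the Green-Ampt infiltration-excess runoff estimate. The soil parameters, rainfall series, and time step are hypothetical; the full environment also involves overland and channel routing, which is omitted here.

        # Minimal Green-Ampt infiltration-excess runoff sketch (hypothetical parameter values).
        K = 0.4          # saturated hydraulic conductivity (cm/h)
        psi = 11.0       # wetting-front suction head (cm)
        d_theta = 0.3    # soil moisture deficit (-)
        dt = 0.1         # time step (h)

        rain = [2.0] * 20 + [0.5] * 20       # hypothetical rainfall intensities (cm/h)

        F = 1e-6                              # cumulative infiltration (cm)
        runoff = 0.0
        for i_rain in rain:
            f_cap = K * (1.0 + psi * d_theta / F)     # Green-Ampt infiltration capacity
            f = min(f_cap, i_rain)                    # actual infiltration limited by rainfall
            F += f * dt
            runoff += (i_rain - f) * dt               # infiltration-excess runoff depth
        print("total runoff depth:", round(runoff, 2), "cm")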

  16. Predicting Social Anxiety Treatment Outcome Based on Therapeutic Email Conversations.

    Science.gov (United States)

    Hoogendoorn, Mark; Berger, Thomas; Schulz, Ava; Stolz, Timo; Szolovits, Peter

    2017-09-01

    Predicting therapeutic outcome in the mental health domain is of utmost importance to enable therapists to provide the most effective treatment to a patient. Using information from the writings of a patient can potentially be a valuable source of information, especially now that more and more treatments involve computer-based exercises or electronic conversations between patient and therapist. In this paper, we study predictive modeling using writings of patients under treatment for a social anxiety disorder. We extract a wealth of information from the text written by patients including their usage of words, the topics they talk about, the sentiment of the messages, and the style of writing. In addition, we study trends over time with respect to those measures. We then apply machine learning algorithms to generate the predictive models. Based on a dataset of 69 patients, we are able to show that we can predict therapy outcome with an area under the curve of 0.83 halfway through the therapy and with a precision of 0.78 when using the full data (i.e., the entire treatment period). Due to the limited number of participants, it is hard to generalize the results, but they do show great potential in this type of information.

  17. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

    Full Text Available This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, this paper compares it with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best based on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
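    A minimal sketch of the prediction pipeline under stated assumptions: delay embedding with fixed (τ, m) followed by a kernel regressor (scikit-learn's SVR standing in for LS-SVM). The membrane-computing optimization of (τ, m, γ, σ) is replaced here by fixed values, and the series is a synthetic logistic map rather than band occupancy data.

        import numpy as np
        from sklearn.svm import SVR

        # Hypothetical chaotic-looking series (logistic map) standing in for occupancy rates.
        x = np.empty(500)
        x[0] = 0.4
        for t in range(499):
            x[t + 1] = 3.9 * x[t] * (1 - x[t])

        tau, m = 1, 3                                   # delay and embedding dimension (fixed here)
        emb = np.array([x[t - (m - 1) * tau:t + 1:tau] for t in range((m - 1) * tau, len(x) - 1)])
        target = x[(m - 1) * tau + 1:]                  # one-step-ahead values

        split = 400
        model = SVR(kernel="rbf", C=10.0, gamma=1.0).fit(emb[:split], target[:split])
        pred = model.predict(emb[split:])
        rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
        print("one-step RMSE:", round(rmse, 4))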

  18. PREDICTIVE POTENTIAL FIELD-BASED COLLISION AVOIDANCE FOR MULTICOPTERS

    Directory of Open Access Journals (Sweden)

    M. Nieuwenhuisen

    2013-08-01

    Full Text Available Reliable obstacle avoidance is key to navigating with UAVs in the close vicinity of static and dynamic obstacles. Wheel-based mobile robots are often equipped with 2D or 3D laser range finders that cover the 2D workspace sufficiently accurately and at a high rate. Micro UAV platforms operate in a 3D environment, but the restricted payload prohibits the use of fast state-of-the-art 3D sensors. Thus, perception of small obstacles is often only possible in the vicinity of the UAV, and a fast collision avoidance system is necessary. We propose a reactive collision avoidance system based on artificial potential fields that takes the special dynamics of UAVs into account by predicting the influence of obstacles on the estimated trajectory in the near future using a learned motion model. Experimental evaluation shows that the prediction leads to smoother trajectories and allows the UAV to navigate collision-free through passageways.
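    A 2D sketch of the underlying idea, with hypothetical gains and obstacle positions: an attractive pull toward the goal, repulsion from nearby obstacles, and the repulsive term also evaluated at a short constant-velocity trajectory prediction (a crude stand-in for the learned motion model described in the abstract).

        import numpy as np

        goal = np.array([10.0, 10.0])
        obstacles = [np.array([4.0, 5.0]), np.array([7.0, 8.0])]
        k_att, k_rep, d0 = 1.0, 5.0, 2.5           # gains and repulsion cutoff distance

        def force(p):
            """Attractive pull toward the goal plus repulsion from nearby obstacles."""
            f = k_att * (goal - p)
            for o in obstacles:
                d = np.linalg.norm(p - o)
                if d < d0:
                    f += k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (p - o) / d
            return f

        pos = np.array([0.0, 0.0])
        vel = np.zeros(2)
        dt, horizon = 0.1, 5                        # predict 5 steps ahead at constant velocity
        for step in range(300):
            predicted = pos + vel * dt * horizon    # crude stand-in for a learned motion model
            f = 0.5 * force(pos) + 0.5 * force(predicted)   # blend current and predicted influence
            vel = 0.8 * vel + dt * f                # damped first-order dynamics
            pos = pos + dt * vel
            if np.linalg.norm(goal - pos) < 0.2:
                break
        print("final position", np.round(pos, 2), "after", step + 1, "steps")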

  19. Offset Free Tracking Predictive Control Based on Dynamic PLS Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2017-10-01

    Full Text Available This paper develops an offset-free tracking model predictive control based on a dynamic partial least squares (PLS) framework. First, a state space model is used as the inner model of PLS to describe the dynamic system, where a subspace identification method is used to identify the inner model. Based on the obtained model, multiple independent model predictive control (MPC) controllers are designed. Due to the decoupling character of PLS, these controllers run separately, which is suitable for a distributed control framework. In addition, the increment of the inner model output is considered in the cost function of the MPC, which introduces integral action into the controller. Hence, offset-free tracking performance is guaranteed. The results of an industrial background simulation demonstrate the effectiveness of the proposed method.

  20. Construction Worker Fatigue Prediction Model Based on System Dynamic

    OpenAIRE

    Wahyu Adi Tri Joko; Ayu Ratnawinanda Lila

    2017-01-01

    Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. Tight construction schedules force workers to work overtime over long periods, and this situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlations among internal and external factors and to simulate the level of worker fatigue. To validate...

  1. Seminal Quality Prediction Using Clustering-Based Decision Forests

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2014-08-01

    Full Text Available Prediction of seminal quality with statistical learning tools is an emerging methodology in decision support systems in biomedical engineering and is very useful in the early diagnosis of seminal patients and the selection of semen donor candidates. However, as is common in medical diagnosis, seminal quality prediction faces the class imbalance problem. In this paper, we propose a novel supervised ensemble learning approach, namely Clustering-Based Decision Forests, to tackle the imbalanced class learning problem in seminal quality prediction. Experimental results on a real fertility diagnosis dataset show that Clustering-Based Decision Forests outperforms decision trees, Support Vector Machines, random forests, multilayer perceptron neural networks and logistic regression by a noticeable margin. Clustering-Based Decision Forests can also be used to evaluate variables' importance; the top five important factors that may affect semen concentration obtained in this study are age, serious trauma, sitting time, the season when the semen sample is produced, and high fevers in the last year. The findings could be helpful in explaining seminal concentration problems in infertile males or in pre-screening semen donor candidates.

  2. Prediction of Force Measurements of a Microbend Sensor Based on an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kemal Fidanboylu

    2009-09-01

    Full Text Available Artificial neural network (ANN)-based prediction of the response of a microbend fiber optic sensor is presented. To the best of our knowledge, no similar work has been previously reported in the literature. Parallel corrugated plates with three deformation cycles, 6 mm thickness of the spacer material and 16 mm mechanical periodicity between deformations were used in the microbend sensor. A Multilayer Perceptron (MLP) with different training algorithms, a Radial Basis Function (RBF) network and a General Regression Neural Network (GRNN) are used as ANN models in this work. All of these models can predict the sensor responses with considerable errors. RBF has the best performance, with the smallest mean square error (MSE) values for the training and test results. Among the MLP algorithms and GRNN, the Levenberg-Marquardt algorithm has good results. These models successfully predict the sensor responses; hence ANNs can be used as a useful tool in the design of more robust fiber optic sensors.

  3. Sunburn and sun-protective behaviors among adults with and without previous nonmelanoma skin cancer (NMSC): A population-based study.

    Science.gov (United States)

    Fischer, Alexander H; Wang, Timothy S; Yenokyan, Gayane; Kang, Sewon; Chien, Anna L

    2016-08-01

    Individuals with previous nonmelanoma skin cancer (NMSC) are at increased risk for subsequent skin cancer, and should therefore limit ultraviolet exposure. We sought to determine whether individuals with previous NMSC engage in better sun protection than those with no skin cancer history. We pooled self-reported data (2005 and 2010 National Health Interview Surveys) from US non-Hispanic white adults (758 with and 34,161 without previous NMSC). We calculated adjusted prevalence odds ratios (aPOR) and 95% confidence intervals (CI), taking into account the complex survey design. Individuals with previous NMSC versus no history of NMSC had higher rates of frequent use of shade (44.3% vs 27.0%; aPOR 1.41; 95% CI 1.16-1.71), long sleeves (20.5% vs 7.7%; aPOR 1.55; 95% CI 1.21-1.98), a wide-brimmed hat (26.1% vs 10.5%; aPOR 1.52; 95% CI 1.24-1.87), and sunscreen (53.7% vs 33.1%; aPOR 2.11; 95% CI 1.73-2.59), but did not have significantly lower odds of recent sunburn (29.7% vs 40.7%; aPOR 0.95; 95% CI 0.77-1.17). Among those with previous NMSC, recent sunburn was inversely associated with age, sun avoidance, and shade but not sunscreen. Self-reported cross-sectional data and unavailable information quantifying regular sun exposure are limitations. Physicians should emphasize sunburn prevention when counseling patients with previous NMSC, especially younger adults, focusing on shade and sun avoidance over sunscreen. Copyright © 2016 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  4. Validating the predictions of case-based decision theory

    OpenAIRE

    Radoc, Benjamin

    2015-01-01

    Real-life decision-makers typically do not know all possible outcomes arising from alternative courses of action. Instead, when people face a problem, they may rely on the recollection of their past personal experience: the situation, the action taken, and the accompanying consequence. In addition, the applicability of a past experience in decision-making may depend on how similar the current problem is to situations encountered previously. Case-based decision theory (CBDT), proposed by Itzha...

  5. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
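    For context, a generic Hargreaves-Samani-type temperature-based estimate is sketched below; it is not one of the seventeen new models proposed in this record (the Annandale and Allen models cited there are modifications of this form), and the input values are hypothetical.

        import math

        def hargreaves_radiation(t_max, t_min, ra, krs=0.16):
            """Generic Hargreaves-Samani-type estimate: Rs = krs * sqrt(Tmax - Tmin) * Ra.

            t_max, t_min : daily max/min air temperature (deg C)
            ra           : extraterrestrial radiation (MJ m-2 day-1)
            krs          : empirical coefficient (about 0.16 inland, 0.19 coastal)
            """
            return krs * math.sqrt(max(t_max - t_min, 0.0)) * ra

        # Hypothetical summer day.
        print(round(hargreaves_radiation(t_max=33.0, t_min=22.0, ra=40.0), 1), "MJ m-2 day-1")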

  6. Integrative approaches to the prediction of protein functions based on the feature selection

    Directory of Open Access Journals (Sweden)

    Lee Hyunju

    2009-12-01

    Full Text Available Abstract Background Protein function prediction has been one of the most important issues in functional genomics. With the current availability of various genomic data sets, many researchers have attempted to develop integration models that combine all available genomic data for protein function prediction. These efforts have resulted in the improvement of prediction quality and the extension of prediction coverage. However, it has also been observed that integrating more data sources does not always increase the prediction quality. Therefore, selecting data sources that highly contribute to the protein function prediction has become an important issue. Results We present systematic feature selection methods that assess the contribution of genome-wide data sets to predict protein functions and then investigate the relationship between genomic data sources and protein functions. In this study, we use ten different genomic data sources in Mus musculus, including: protein-domains, protein-protein interactions, gene expressions, phenotype ontology, phylogenetic profiles and disease data sources to predict protein functions that are labelled with Gene Ontology (GO) terms. We then apply two approaches to feature selection: exhaustive search feature selection using a kernel based logistic regression (KLR), and a kernel based L1-norm regularized logistic regression (KL1LR). In the first approach, we exhaustively measure the contribution of each data set for each function based on its prediction quality. In the second approach, we use the estimated coefficients of features as measures of contribution of data sources. Our results show that the proposed methods improve the prediction quality compared to the full integration of all data sources and other filter-based feature selection methods. We also show that contributing data sources can differ depending on the protein function. Furthermore, we observe that highly contributing data sets can be similar among
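    A small sketch of the second approach in spirit, using plain (non-kernel) L1-regularized logistic regression on hypothetical per-data-source feature scores: coefficients driven to zero mark data sources that do not contribute to predicting a given function. The source names and simulated signal are placeholders.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        sources = ["domains", "ppi", "expression", "phenotype", "phylogeny", "disease"]

        # Hypothetical per-gene scores from six data sources; only three truly carry signal
        # for membership in a given GO term.
        X = rng.normal(0, 1, (300, len(sources)))
        logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 3]
        y = (1 / (1 + np.exp(-logit)) > rng.uniform(size=300)).astype(int)

        # L1 regularization drives the coefficients of uninformative sources toward zero.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.3).fit(X, y)
        for name, coef in zip(sources, clf.coef_[0]):
            print(f"{name:11s} coefficient = {coef: .3f}")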

  7. Crystal density predictions for nitramines based on quantum chemistry

    International Nuclear Information System (INIS)

    Qiu Ling; Xiao Heming; Gong Xuedong; Ju Xuehai; Zhu Weihua

    2007-01-01

    An efficient and convenient method for predicting the crystalline densities of energetic materials was established based on quantum chemical computations. Density functional theory (DFT) with four different basis sets (6-31G**, 6-311G**, 6-31+G**, and 6-311++G**) and various semiempirical molecular orbital (MO) methods were employed to predict the molecular volumes and densities of a series of energetic nitramines including acyclic, monocyclic, and polycyclic/cage molecules. The relationships between the calculated values and experimental data were discussed in detail, and linear correlations were suggested and compared at different levels. The calculations show that a larger basis set expends more CPU (central processing unit) time and yields a larger molecular volume and hence a smaller density. The densities predicted by the semiempirical MO methods are all systematically larger than the experimental data. In comparison with the other methods, B3LYP/6-31G** is the most accurate and economical for predicting the solid-state densities of energetic nitramines. This may be instructive for the molecular design and screening of novel HEDMs
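    The conversion underlying this kind of approach is simply ρ = M / (N_A · V), turning a computed average molecular volume into a crystal density. A short worked example follows; the molar mass of RDX (about 222.1 g/mol) is a known value, while the volume used is only an illustrative number, not a result from this record.

        AVOGADRO = 6.02214e23          # 1/mol

        def crystal_density(molar_mass_g_mol, molecular_volume_A3):
            """Density in g/cm3 from molar mass and average molecular volume (cubic angstroms)."""
            volume_cm3 = molecular_volume_A3 * 1e-24
            return molar_mass_g_mol / (AVOGADRO * volume_cm3)

        # RDX molar mass ~222.1 g/mol; the 205 A^3 volume below is illustrative only.
        print(round(crystal_density(222.1, 205.0), 2), "g/cm3")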

  8. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
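    A minimal sketch of the prediction step only (Brown's double exponential smoothing), which the abstract cites as the forecasting component; the robust LWS filtering and jDE optimization are not reproduced. The example series is hypothetical.

        import numpy as np

        def brown_forecast(series, alpha=0.4, horizon=1):
            """Brown's double exponential smoothing m-step-ahead forecast."""
            s1 = s2 = series[0]
            for x in series:
                s1 = alpha * x + (1 - alpha) * s1      # first smoothing
                s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
            a = 2 * s1 - s2
            b = alpha / (1 - alpha) * (s1 - s2)
            return a + b * horizon

        # Hypothetical recent estimates of a bus voltage magnitude (p.u.) drifting upward.
        voltage = np.array([1.000, 1.002, 1.003, 1.006, 1.007, 1.010])
        print("predicted next value:", round(brown_forecast(voltage), 4))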

  9. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    A basic task in drug discovery is to find new medication in the form of candidate compounds that act on a target protein. In other words, a drug has to interact with a target, and such drug-target interaction (DTI) is not expected to be random. Significant and interesting patterns are expected to be hidden in them. If these patterns can be discovered, new drugs are expected to be more easily discoverable. Currently, a number of computational methods have been proposed to predict DTIs based on their similarity. However, such an approach does not allow biochemical features to be directly considered. As a result, some methods have been proposed to try to discover patterns in physicochemical interactions. Since the number of potential negative DTIs is very high, both in absolute terms and in comparison to that of the known ones, these methods are rather computationally expensive and they can only rely on subsets, rather than the full set, of negative DTIs for training and validation. As there is always a relatively high chance for negative DTIs to be falsely identified, and as only a partial subset of such DTIs is considered, existing approaches can be further improved to better predict DTIs. In this paper, we present a novel approach, called ODT (one-class drug-target interaction prediction), for this purpose. One main task of ODT is to discover association patterns between interacting drugs and proteins from the chemical structure of the former and the protein sequence network of the latter. ODT does so in two phases. First, the DTI network is transformed to a representation by structural properties. Second, it applies a one-class classification algorithm to build a prediction model based only on known positive interactions. We compared the best AUROC scores of ODT with several state-of-the-art approaches on gold standard data. The prediction accuracy of ODT is superior to all the other methods on the GPCR dataset and the ion channel dataset. Performance

  10. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates is examined, and ways to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF

  11. Usefulness of indirect alcohol biomarkers for predicting recidivism of drunk-driving among previously convicted drunk-driving offenders: results from the recidivism of alcohol-impaired driving (ROAD) study.

    Science.gov (United States)

    Maenhout, Thomas M; Poll, Anneleen; Vermassen, Tijl; De Buyzere, Marc L; Delanghe, Joris R

    2014-01-01

    In several European countries, drivers under the influence (DUI) who are suspected of chronic alcohol abuse are referred for medical and psychological examination. This study (the ROAD study, or Recidivism Of Alcohol-impaired Driving) investigated the usefulness of indirect alcohol biomarkers for predicting drunk-driving recidivism in previously convicted drunk-driving offenders. The ROAD study is a prospective study (2009-13) that was performed on 517 randomly selected drivers in Belgium who were convicted for drunk-driving and whose licence was confiscated. The initial post-arrest blood samples were collected and analysed for percentage carbohydrate-deficient transferrin (%CDT), transaminase activities [alanine amino transferase (ALT), aspartate amino transferase (AST)], gamma-glutamyltransferase (γGT) and red cell mean corpuscular volume (MCV). The observation time for each driver was 3 years and dynamic. A logistic regression analysis revealed that ln(%CDT) was a significant predictor of recidivism of drunk-driving. The ROAD index (which includes ln(%CDT), ln(γGT), -ln(ALT) and the sex of the driver) was calculated and had a significantly higher area under the receiver operator characteristic curve (0.71) than the individual biomarkers for drunk-driving recidivism. Drivers with a high risk of recidivating (ROAD index ≥ 25%; third tertile) could be distinguished from drivers with an intermediate risk (16% ≤ ROAD index < 25%) or a lower risk of recidivating drunk-driving. The association with gamma-glutamyltransferase, alanine amino transferase and the sex of the driver could have additional value for identifying drunk-drivers at intermediate risk of recidivism. Non-specific indirect alcohol markers, such as alanine amino transferase, gamma-glutamyltransferase, aspartate amino transferase and red cell mean corpuscular volume, have minimal added value to % carbohydrate-deficient transferrin for distinguishing drunk drivers with a low or high risk of recidivism. © 2013 Society for the Study of Addiction.

  12. Prostatectomy-based validation of combined urine and plasma test for predicting high grade prostate cancer.

    Science.gov (United States)

    Albitar, Maher; Ma, Wanlong; Lund, Lars; Shahbaba, Babak; Uchio, Edward; Feddersen, Søren; Moylan, Donald; Wojno, Kirk; Shore, Neal

    2018-03-01

    Distinguishing between low- and high-grade prostate cancers (PCa) is important, but biopsy may underestimate the actual grade of cancer. We have previously shown that urine/plasma-based prostate-specific biomarkers can predict high grade PCa. Our objective was to determine the accuracy of a test using cell-free RNA levels of biomarkers in predicting prostatectomy results. This multicenter community-based prospective study was conducted using urine/blood samples collected from 306 patients. All recruited patients were treatment-naïve, without metastases, and had been biopsied, designated a Gleason Score (GS) based on biopsy, and assigned to prostatectomy prior to participation in the study. The primary outcome measure was the urine/plasma test accuracy in predicting high grade PCa on prostatectomy compared with biopsy findings. Sensitivity and specificity were calculated using standard formulas, while comparisons between groups were performed using the Wilcoxon Rank Sum, Kruskal-Wallis, Chi-Square, and Fisher's exact test. GS as assigned by standard 10-12 core biopsies was 3 + 3 in 90 (29.4%), 3 + 4 in 122 (39.8%), 4 + 3 in 50 (16.3%), and > 4 + 3 in 44 (14.4%) patients. The urine/plasma assay confirmed a previous validation and was highly accurate in predicting the presence of high-grade PCa (Gleason ≥3 + 4) with sensitivity between 88% and 95% as verified by prostatectomy findings. GS was upgraded after prostatectomy in 27% of patients and downgraded in 12% of patients. This plasma/urine biomarker test accurately predicts high grade cancer as determined by prostatectomy with a sensitivity at 92-97%, while the sensitivity of core biopsies was 78%. © 2018 Wiley Periodicals, Inc.

  13. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

    Link prediction is pervasively employed to uncover the missing links in the snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate the performance of the prediction, known edges in the sampled snapshot are divided into the training set and the probe set randomly, without considering the underlying sampling approaches. However, different sampling methods might lead to different missing links, especially for the biased ways. For this reason, random partition-based evaluation of performance is no longer convincing if we take the sampling method into account. In this paper, we try to re-evaluate the performance of local information-based link predictions through sampling method governed division of the training set and the probe set. It is interesting that we find that for different sampling methods, each prediction approach performs unevenly. Moreover, most of these predictions perform weakly when the sampling method is biased, which indicates that the performance of these methods might have been overestimated in the prior works.

  14. Predictability of depression severity based on posterior alpha oscillations.

    Science.gov (United States)

    Jiang, H; Popov, T; Jylänki, P; Bi, K; Yao, Z; Lu, Q; Jensen, O; van Gerven, M A J

    2016-04-01

    We aimed to integrate neural data and an advanced machine learning technique to predict individual major depressive disorder (MDD) patient severity. MEG data was acquired from 22 MDD patients and 22 healthy controls (HC) resting awake with eyes closed. Individual power spectra were calculated by a Fourier transform. Sources were reconstructed via beamforming technique. Bayesian linear regression was applied to predict depression severity based on the spatial distribution of oscillatory power. In MDD patients, decreased theta (4-8 Hz) and alpha (8-14 Hz) power was observed in fronto-central and posterior areas respectively, whereas increased beta (14-30 Hz) power was observed in fronto-central regions. In particular, posterior alpha power was negatively related to depression severity. The Bayesian linear regression model showed significant depression severity prediction performance based on the spatial distribution of both alpha (r=0.68, p=0.0005) and beta power (r=0.56, p=0.007) respectively. Our findings point to a specific alteration of oscillatory brain activity in MDD patients during rest as characterized from MEG data in terms of spectral and spatial distribution. The proposed model yielded a quantitative and objective estimation for the depression severity, which in turn has a potential for diagnosis and monitoring of the recovery process. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
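    A minimal sketch of the modeling step described above, assuming synthetic data: Bayesian linear regression (scikit-learn's BayesianRidge) applied to source-space alpha power features to predict severity, with cross-validated predictions correlated against the true scores. The sample sizes and the simulated inverse relation are placeholders inspired by, but not taken from, the record.

        import numpy as np
        from sklearn.linear_model import BayesianRidge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(5)

        # Hypothetical data: 22 patients x 50 posterior source locations of alpha power,
        # with severity inversely related to posterior alpha power.
        alpha_power = rng.normal(1.0, 0.3, (22, 50))
        severity = 20 - 6 * alpha_power[:, :10].mean(axis=1) + rng.normal(0, 1.0, 22)

        model = BayesianRidge()
        predicted = cross_val_predict(model, alpha_power, severity, cv=5)
        r = np.corrcoef(predicted, severity)[0, 1]
        print("cross-validated correlation r =", round(r, 2))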

  15. Network-based prediction and knowledge mining of disease genes.

    Science.gov (United States)

    Carson, Matthew B; Lu, Hui

    2015-01-01

    could be used to identify likely disease associations. We analyzed the human protein interaction network and its relationship to disease and found that both the number of interactions with other proteins and the disease relationship of neighboring proteins helped to determine whether a protein had a relationship to disease. Our classifier predicted many proteins with no annotated disease association to be disease-related, which indicated that these proteins have network characteristics that are similar to disease-related proteins and may therefore have disease associations not previously identified. By performing a post-processing step after the prediction, we were able to identify evidence in literature supporting this possibility. This method could provide a useful filter for experimentalists searching for new candidate protein targets for drug repositioning and could also be extended to include other network and data types in order to refine these predictions.

  16. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Directory of Open Access Journals (Sweden)

    Xiaoxia Yang

    Full Text Available Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of the query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  17. A range-based predictive localization algorithm for WSID networks

    Science.gov (United States)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on the sensor networks with densely distributed nodes. However, the non-localizable problems are prone to occur in the network with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for the wireless sensor networks syncretizing the RFID (WSID) networks. The Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test algorithm. In addition, collaborative localization schemes are introduced to locate the target in the non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for the network with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.

  18. Results on three predictions for July 2012 federal elections in Mexico based on past regularities.

    Directory of Open Access Journals (Sweden)

    H Hernández-Saldaña

    Full Text Available The Presidential Election in Mexico of July 2012 has been the third time that PREP, Previous Electoral Results Program works. PREP gives voting outcomes based in electoral certificates of each polling station that arrive to capture centers. In previous ones, some statistical regularities had been observed, three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while, the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) Distribution of vote of this party is well described by a smooth function named a Daisy model. A Gamma distribution, but compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes as in 2000 and 2006 electoral processes. The three measured regularities appeared no matter the political environment.

  19. Results on three predictions for July 2012 federal elections in Mexico based on past regularities.

    Science.gov (United States)

    Hernández-Saldaña, H

    2013-01-01

    The Presidential Election in Mexico of July 2012 was the third time that the PREP (Previous Electoral Results Program) operated. PREP gives voting outcomes based on the electoral certificates of each polling station that arrive at capture centers. In previous elections, some statistical regularities had been observed; three of them were selected to make predictions and were published in arXiv:1207.0078 [physics.soc-ph]. Using the database made public in July 2012, two of the predictions were completely fulfilled, while the third one was measured and confirmed using the database obtained upon request to the electoral authorities. The first two predictions confirmed by actual measures are: (ii) The Partido Revolucionario Institucional, PRI, is a sprinter and has a better performance in polling stations arriving late to capture centers during the process. (iii) Distribution of vote of this party is well described by a smooth function named a Daisy model. A Gamma distribution, but compatible with a Daisy model, fits the distribution as well. The third prediction confirms that errare humanum est, since the error distributions of all the self-consistency variables appeared as a central power law with lateral lobes as in the 2000 and 2006 electoral processes. The three measured regularities appeared no matter the political environment.

  20. Performance reliability prediction for thermal aging based on kalman filtering

    International Nuclear Information System (INIS)

    Ren Shuhong; Wen Zhenhua; Xue Fei; Zhao Wensheng

    2015-01-01

    The performance reliability of the nuclear power plant main pipeline that failed due to thermal aging was studied using performance degradation theory. Firstly, through the data obtained from the accelerated thermal aging experiments, the degradation process of the impact strength and fracture toughness of the austenitic stainless steel material of the main pipeline was analyzed. A time-varying performance degradation model based on the state space method was built, and the performance trends were predicted by using Kalman filtering. Then, a multi-parameter, real-time performance reliability prediction model for main pipeline thermal aging was developed by considering the correlation between the impact properties and fracture toughness, and by using stochastic process theory. Thus, the multi-parameter thermal aging performance reliability and reliability life of the main pipeline were obtained, which provides a scientific basis for optimizing aging-maintenance decision making for nuclear power plant main pipelines. (authors)
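
    As a hedged illustration of the state-space/Kalman-filtering step described above (not the authors' calibrated model), the Python sketch below filters a noisy, linearly degrading property with a two-state (value, degradation rate) model and extrapolates the trend; all matrices, noise levels and measurements are invented for the example.

      import numpy as np

      # Minimal constant-rate Kalman filter for a degrading material property.
      # State x = [property value, degradation rate]; all parameters are illustrative.
      dt = 1.0                                  # one aging interval between measurements
      F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
      H = np.array([[1.0, 0.0]])                # only the property value is observed
      Q = np.diag([1e-4, 1e-5])                 # process noise (assumed)
      R = np.array([[0.05]])                    # measurement noise (assumed)

      x = np.array([100.0, -1.0])               # initial guess: value 100, losing 1 per interval
      P = np.eye(2)
      measurements = [98.7, 97.9, 96.4, 95.8, 94.1, 93.3]   # synthetic aging data

      for z in measurements:
          x = F @ x                             # predict
          P = F @ P @ F.T + Q
          y = z - H @ x                         # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
          x = x + K @ y                         # update
          P = (np.eye(2) - K @ H) @ P

      # Extrapolate the filtered trend a few aging intervals ahead.
      forecast = [(np.linalg.matrix_power(F, k) @ x)[0] for k in range(1, 6)]
      print("filtered state:", x)
      print("5-step forecast:", np.round(forecast, 2))

    In the paper's setting the forecast state would then feed a stochastic-process reliability model; this sketch stops at the predicted property trend.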

  1. Human Posture and Movement Prediction based on Musculoskeletal Modeling

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi

    2014-01-01

    Abstract This thesis explores an optimization-based formulation, so-called inverse-inverse dynamics, for the prediction of human posture and motion dynamics performing various tasks. It is explained how this technique enables us to predict natural kinematic and kinetic patterns for human posture...... and motion using AnyBody Modeling System (AMS). AMS uses inverse dynamics to analyze musculoskeletal systems and is, therefore, limited by its dependency on input kinematics. We propose to alleviate this dependency by assuming that voluntary postures and movement strategies in humans are guided by a desire...... expenditure, joint forces and other physiological properties derived from the detailed musculoskeletal analysis. Several attempts have been made to uncover the principles underlying motion control strategies in the literature. In case of some movements, like human squat jumping, there is almost no doubt...

  2. Construction Worker Fatigue Prediction Model Based on System Dynamic

    Directory of Open Access Journals (Sweden)

    Wahyu Adi Tri Joko

    2017-01-01

    Full Text Available Construction accidents can be caused by internal and external factors such as worker fatigue and an unsafe project environment. The tight schedule of a construction project forces construction workers to work overtime for long periods. This situation leads to worker fatigue. This paper proposes a model to predict construction worker fatigue based on system dynamics (SD). System dynamics is used to represent the correlation among internal and external factors and to simulate the level of worker fatigue. To validate the model, 93 construction workers who worked in a high-rise building construction project were used as a case study. The result shows that excessive workload, working elevation and age are the main factors leading to construction worker fatigue. The simulation result also shows that these factors can increase the worker fatigue level by 21.2% compared to the normal condition. Besides predicting the worker fatigue level, this model can also be used as an early warning system to prevent construction worker accidents.
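
    The system-dynamics idea can be sketched with a single fatigue stock whose inflow depends on workload, working-elevation and age factors and whose outflow is recovery. The Python example below is a minimal, hedged illustration; all coefficients and factor values are invented and do not reproduce the paper's calibrated model or its 21.2% figure.

      import numpy as np

      def simulate_fatigue(days=30, workload=1.3, elevation_factor=1.1, age_factor=1.05,
                           accumulation_rate=0.08, recovery_rate=0.25, dt=0.25):
          # Euler integration of a one-stock system-dynamics fatigue model (illustrative rates).
          fatigue = 0.0                      # stock on an arbitrary 0..1 scale
          history = []
          for _ in range(int(days / dt)):
              inflow = accumulation_rate * workload * elevation_factor * age_factor * (1 - fatigue)
              outflow = recovery_rate * fatigue
              fatigue += (inflow - outflow) * dt
              history.append(fatigue)
          return np.array(history)

      baseline = simulate_fatigue(workload=1.0, elevation_factor=1.0, age_factor=1.0)
      stressed = simulate_fatigue()          # overtime scenario with the assumed factors
      print("steady-state fatigue, baseline: %.3f" % baseline[-1])
      print("steady-state fatigue, stressed: %.3f (+%.1f%%)" %
            (stressed[-1], 100 * (stressed[-1] / baseline[-1] - 1)))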

  3. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid-connected wind farms keeps increasing. To ensure the stability of power system operation, draw up a reasonable scheduling scheme and improve the competitiveness of wind farms in the electricity generation market, it is important to accurately forecast short-term wind power. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on the genetic algorithm and neural network method is established. To overcome the shortcomings of the BP neural network, namely its long training time and tendency to fall into local minima, and to improve the accuracy of the neural network, the genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.
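
    A minimal, hedged sketch of the genetic-algorithm-plus-neural-network idea follows, using scikit-learn's MLPRegressor and a tiny GA over the hidden-layer size and learning rate. The synthetic wind data, the fitness definition and the GA settings are assumptions for illustration, not the paper's configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Synthetic stand-in for historical data: wind speed, direction index, previous power -> power.
      X = rng.uniform(0, 1, size=(500, 3))
      y = 0.6 * X[:, 0] ** 3 + 0.2 * X[:, 2] + 0.05 * rng.normal(size=500)
      X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

      def fitness(genome):
          hidden, lr = genome
          net = MLPRegressor(hidden_layer_sizes=(int(hidden),), learning_rate_init=lr,
                             max_iter=500, random_state=0)
          net.fit(X_tr, y_tr)
          return -np.mean((net.predict(X_val) - y_val) ** 2)   # higher fitness = lower error

      # Tiny GA over (hidden units, learning rate); population size and rates are illustrative.
      population = [(rng.integers(2, 30), 10 ** rng.uniform(-3, -1)) for _ in range(8)]
      for generation in range(5):
          parents = sorted(population, key=fitness, reverse=True)[:4]   # truncation selection
          children = []
          for _ in range(4):
              a, b = rng.choice(len(parents), 2, replace=False)
              hidden = parents[a][0] if rng.random() < 0.5 else parents[b][0]   # crossover
              lr = parents[a][1] if rng.random() < 0.5 else parents[b][1]
              if rng.random() < 0.3:                                            # mutation
                  hidden = max(2, hidden + rng.integers(-3, 4))
              if rng.random() < 0.3:
                  lr *= 10 ** rng.uniform(-0.3, 0.3)
              children.append((hidden, lr))
          population = parents + children

      best = max(population, key=fitness)
      print("best genome (hidden units, learning rate):", best)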

  4. Machine-Learning-Based No Show Prediction in Outpatient Visits

    Directory of Open Access Journals (Sweden)

    Carlos Elvira

    2018-03-01

    Full Text Available A recurring problem in healthcare is the high percentage of patients who miss their appointment, be it a consultation or a hospital test. The present study seeks patients' behavioural patterns that allow predicting the probability of no-shows. We explore the convenience of using Big Data Machine Learning models to accomplish this task. To begin with, a predictive model based only on variables associated with the target appointment is built. Then the model is improved by considering the patient's history of appointments. In both cases, the Gradient Boosting algorithm was the predictor of choice. Our numerical results are considered promising given the small amount of information available. However, there seems to be plenty of room to improve the model if we manage to collect additional data for both patients and appointments.
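
    A hedged sketch of the gradient-boosting setup with scikit-learn follows; the feature names (lead time, prior no-show rate, age, weekday) and the synthetic data are assumptions standing in for the study's appointment and patient-history variables.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 2000
      # Hypothetical appointment-level features plus one simple patient-history feature.
      lead_time_days = rng.integers(0, 90, n)
      prior_no_show_rate = rng.beta(2, 8, n)
      age = rng.integers(18, 90, n)
      weekday = rng.integers(0, 5, n)
      X = np.column_stack([lead_time_days, prior_no_show_rate, age, weekday])

      # Synthetic outcome: longer lead times and a worse history raise the no-show probability.
      logit = -2.0 + 0.02 * lead_time_days + 3.0 * prior_no_show_rate - 0.01 * (age - 50)
      y = rng.random(n) < 1 / (1 + np.exp(-logit))

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
      model.fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))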

  5. Prediction and Validation of Mars Pathfinder Hypersonic Aerodynamic Data Base

    Science.gov (United States)

    Gnoffo, Peter A.; Braun, Robert D.; Weilmuenster, K. James; Mitcheltree, Robert A.; Engelund, Walter C.; Powell, Richard W.

    1998-01-01

    Postflight analysis of the Mars Pathfinder hypersonic, continuum aerodynamic data base is presented. Measured data include accelerations along the body axis and axis normal directions. Comparisons of preflight simulation and measurements show good agreement. The prediction of two static instabilities associated with movement of the sonic line from the shoulder to the nose and back was confirmed by measured normal accelerations. Reconstruction of atmospheric density during entry has an uncertainty directly proportional to the uncertainty in the predicted axial coefficient. The sensitivity of the moment coefficient to freestream density, kinetic models and center-of-gravity location are examined to provide additional consistency checks of the simulation with flight data. The atmospheric density as derived from axial coefficient and measured axial accelerations falls within the range required for sonic line shift and static stability transition as independently determined from normal accelerations.

  6. Abalone Muscle Texture Evaluation and Prediction Based on TPA Experiment

    Directory of Open Access Journals (Sweden)

    Jiaxu Dong

    2017-01-01

    Full Text Available The effects of different heat treatments on abalones' texture properties and sensory characteristics were studied. Thermal processing of abalone muscle was analyzed to determine the optimal heat treatment condition based on fuzzy evaluation. The results showed that heat treatment at 85°C for 1 hour had certain desirable effects on the properties of the abalone meat. Specifically, a back propagation (BP) neural network was introduced to predict the equations of statistically significant sensory hardness, springiness, and smell using the texture data gained through TPA (texture profile analysis) experiments as input and sensory evaluation data as the desired output. The prediction accuracy proved to be satisfactory, with an average error of 6.93%.
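
    A minimal sketch of the back-propagation mapping from instrumental TPA values to sensory scores, using scikit-learn's MLPRegressor on synthetic data (the study's measurements are not reproduced here; inputs, targets and network size are assumptions):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)
      # Synthetic TPA inputs (hardness, springiness, cohesiveness, chewiness) and
      # synthetic sensory targets (hardness, springiness, smell) on a 1-9 scale.
      X = rng.uniform(0, 1, size=(120, 4))
      Y = np.column_stack([
          1 + 8 * X[:, 0],                    # sensory hardness tracks instrumental hardness
          1 + 8 * X[:, 1],                    # sensory springiness
          5 + 2 * (X[:, 2] - X[:, 3]),        # an arbitrary "smell" relation for the example
      ]) + rng.normal(scale=0.3, size=(120, 3))

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0))
      model.fit(X[:100], Y[:100])
      pred = model.predict(X[100:])
      rel_err = np.abs(pred - Y[100:]) / np.abs(Y[100:])
      print("mean relative error on the held-out samples: %.1f%%" % (100 * rel_err.mean()))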

  7. Acceleration and Orientation Jumping Performance Differences Among Elite Professional Male Handball Players With or Without Previous ACL Reconstruction: An Inertial Sensor Unit-Based Study.

    Science.gov (United States)

    Setuain, Igor; González-Izal, Miriam; Alfaro, Jesús; Gorostiaga, Esteban; Izquierdo, Mikel

    2015-12-01

    Handball is one of the most challenging sports for the knee joint. Persistent biomechanical and jumping capacity alterations can be observed in athletes with an anterior cruciate ligament (ACL) injury. Commonly identified jumping biomechanical alterations have been described by the use of laboratory technologies. However, portable and easy-to-handle technologies that enable an evaluation of jumping biomechanics at the training field are lacking. To analyze unilateral/bilateral acceleration and orientation jumping performance differences among elite male handball athletes with or without previous ACL reconstruction via a single inertial sensor unit device. Case control descriptive study. At the athletes' usual training court. Twenty-two elite male (6 ACL-reconstructed and 16 uninjured control players) handball players were evaluated. The participants performed a vertical jump test battery that included a 50-cm vertical bilateral drop jump, a 20-cm vertical unilateral drop jump, and vertical unilateral countermovement jump maneuvers. Peak 3-dimensional (X, Y, Z) acceleration (m·s(-2)), jump phase duration and 3-dimensional orientation values (°) were obtained from the inertial sensor unit device. Two-tailed t-tests and a one-way analysis of variance were performed to compare means. The P value cut-off for significance was set at P < .05. Handball athletes with previous ACL reconstruction demonstrated a jumping biomechanical profile similar to control players, including similar jumping performance values in both bilateral and unilateral jumping maneuvers, several years after ACL reconstruction. These findings are in agreement with previous research showing full functional restoration of abilities in top-level male athletes after ACL reconstruction, rehabilitation and subsequent return to sports at the previous level. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  8. Effect of feeding different cereal-based diets on the performance and gut health of weaned piglets with or without previous access to creep feed during lactation.

    Science.gov (United States)

    Torrallardona, D; Andrés-Elias, N; López-Soria, S; Badiola, I; Cerdà-Cuéllar, M

    2012-12-01

    A trial was conducted to evaluate the effect of different cereals on the performance, gut mucosa, and microbiota of weanling pigs with or without previous access to creep feed during lactation. A total of 108 newly weaned pigs (7.4 kg BW; 26 d of age; half with and half without creep feed) were used. Piglets were distributed by BW into 36 pens according to a 2 × 6 factorial arrangement of treatments with previous access to creep feed (with or without) and cereal source in the experimental diet [barley (Hordeum vulgare), rice (Oryza sativa)-wheat (Triticum aestivum) bran, corn (Zea mays), naked oats (Avena sativa), oats, or rice] as main factors. Pigs were offered the experimental diets for 21 d and performance was monitored. At day 21, 4 piglets from each treatment were killed and sampled for the histological evaluation of jejunal mucosa and the study of ileal and cecal microbiota by RFLP. The Manhattan distances between RFLP profiles were calculated and intragroup similarities (IGS) were estimated for each treatment. An interaction between cereal source and previous creep feeding was observed for ADFI: whereas creep feeding increased ADFI for the rice-wheat bran diet, it reduced it for naked oats. No differences in mucosal morphology were observed except for deeper crypts in pigs that did not have previous access to creep feed. An interaction between creep feeding and cereal source was also observed for the IGS of the cecal microbiota at day 21, whereby creep feed reduced IGS in the piglets fed oats or barley, but no differences were observed for the other cereal sources. It is concluded that the effect of creep feeding during lactation on the performance and the microbiota of piglets after weaning is dependent on the nature of the cereal in the postweaning diet.

  9. Rutting Prediction in Asphalt Pavement Based on Viscoelastic Theory

    Directory of Open Access Journals (Sweden)

    Nahi Mohammed Hadi

    2016-01-01

    Full Text Available Rutting is one of the most disturbing failures of asphalt roads because of the disruption it causes to drivers. Predicting asphalt pavement rutting is an essential tool that leads to better asphalt mixture design. This work describes a method of predicting the behaviour of various asphalt pavement mixes and linking these to accelerated performance testing. The objective of this study is to develop a finite element model based on viscoplastic theory for simulating the laboratory testing of asphalt mixes in the Hamburg Wheel Rut Tester (HWRT) for rutting. The creep parameters C1, C2 and C3 are developed from the triaxial repeated load creep test at 50°C and at a frequency of 1 Hz, and the modulus of elasticity and Poisson's ratio are determined at the same temperature. A viscoelastic model (creep model) is adopted using a FE simulator (ANSYS) in order to calculate the rutting for various mixes under a uniform loading pressure of 500 kPa. An eight-node element with three degrees of freedom (UX, UY, and UZ) is used for the simulation. The creep model developed for the HWRT tester was verified by comparing the predicted rut depths with the measured ones and by comparing the rut depth with an ABAQUS result from the literature. Reasonable agreement can be obtained between the predicted rut depths and the measured ones. Moreover, it is found that the creep model parameters C1 and C3 have a strong relationship with rutting. It is clear that the parameter C1 influences rutting more strongly than the parameter C3. Finally, it can be concluded that the creep model based on the finite element method can be used as an effective tool to analyse rutting of asphalt pavements.

  10. Not just the norm: exemplar-based models also predict face aftereffects.

    Science.gov (United States)

    Ross, David A; Deroche, Mickael; Palmeri, Thomas J

    2014-02-01

    The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.

  11. A Risk Prediction Model for Sporadic CRC Based on Routine Lab Results.

    Science.gov (United States)

    Boursi, Ben; Mamtani, Ronac; Hwang, Wei-Ting; Haynes, Kevin; Yang, Yu-Xiao

    2016-07-01

    Current risk scores for colorectal cancer (CRC) are based on demographic and behavioral factors and have limited predictive values. To develop a novel risk prediction model for sporadic CRC using clinical and laboratory data in electronic medical records. We conducted a nested case-control study in a UK primary care database. Cases included those with a diagnostic code of CRC, aged 50-85. Each case was matched with four controls using incidence density sampling. CRC predictors were examined using univariate conditional logistic regression, and variables significant in the univariate analysis were considered for the multivariable prediction models. A demographic and behavioral model which included age, sex, height, obesity, ever smoking, alcohol dependence, and previous screening colonoscopy had an AUC of 0.58 (0.57-0.59) with poor goodness of fit. A laboratory-based model including hematocrit, MCV, lymphocytes, and neutrophil-lymphocyte ratio (NLR) had an AUC of 0.76 (0.76-0.77) and a McFadden's R2 of 0.21 with a NRI of 47.6 %. A combined model including sex, hemoglobin, MCV, white blood cells, platelets, NLR, and oral hypoglycemic use had an AUC of 0.80 (0.79-0.81) with a McFadden's R2 of 0.27 and a NRI of 60.7 %. Similar results were shown in an internal validation set. A laboratory-based risk model had good predictive power for sporadic CRC risk.
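
    The laboratory-based model can be illustrated with a simple logistic regression on hemoglobin, MCV, lymphocytes and the derived neutrophil-to-lymphocyte ratio (NLR). The sketch below uses synthetic records and invented coefficients; it is not the study's fitted model.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 5000
      hemoglobin = rng.normal(14, 1.5, n)          # g/dL
      mcv = rng.normal(90, 6, n)                   # fL
      neutrophils = rng.gamma(4, 1.0, n)           # 10^9/L
      lymphocytes = rng.gamma(2, 1.0, n) + 0.2     # 10^9/L
      nlr = neutrophils / lymphocytes              # derived predictor named in the abstract

      # Synthetic outcome: lower hemoglobin/MCV and higher NLR raise risk (illustrative only).
      logit = -4 - 0.4 * (hemoglobin - 14) - 0.05 * (mcv - 90) + 0.3 * nlr
      y = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([hemoglobin, mcv, lymphocytes, nlr])
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("held-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))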

  12. Previously unknown species of Aspergillus.

    Science.gov (United States)

    Gautier, M; Normand, A-C; Ranque, S

    2016-08-01

    The use of multi-locus DNA sequence analysis has led to the description of previously unknown 'cryptic' Aspergillus species, whereas classical morphology-based identification of Aspergillus remains limited to the section or species-complex level. The current literature highlights two main features concerning these 'cryptic' Aspergillus species. First, the prevalence of such species in clinical samples is relatively high compared with emergent filamentous fungal taxa such as Mucorales, Scedosporium or Fusarium. Second, it is clearly important to identify these species in the clinical laboratory because of the high frequency of antifungal drug-resistant isolates of such Aspergillus species. Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has recently been shown to enable the identification of filamentous fungi with an accuracy similar to that of DNA sequence-based methods. As MALDI-TOF MS is well suited to the routine clinical laboratory workflow, it facilitates the identification of these 'cryptic' Aspergillus species at the routine mycology bench. The rapid establishment of enhanced filamentous fungi identification facilities will lead to a better understanding of the epidemiology and clinical importance of these emerging Aspergillus species. Based on routine MALDI-TOF MS-based identification results, we provide original insights into the key interpretation issues of a positive Aspergillus culture from a clinical sample. Which ubiquitous species that are frequently isolated from air samples are rarely involved in human invasive disease? Can both the species and the type of biological sample indicate Aspergillus carriage, colonization or infection in a patient? Highly accurate routine filamentous fungi identification is central to enhance the understanding of these previously unknown Aspergillus species, with a vital impact on further improved patient care. Copyright © 2016 European Society of Clinical Microbiology and

  13. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew Stuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error; a predictive approach based on ocean model predictions and a prior information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurately reconstructing the actual path that the vehicle traversed to contextualize the gathered data, with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration, ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by 1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and 2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate geographical location of an underwater glider to 2

  14. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    Science.gov (United States)

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute 2 (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We conduct a larger scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieves over 25-fold speedup compared to sequential execution. PMID:26958172

  15. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    Science.gov (United States)

    Tumac, Deniz

    2014-03-01

    Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited. Also, rather limited research has been carried out on predicting the performance of chain saw machines. This study differs from the previous investigations in that Shore hardness values (SH1, SH2, and deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for only cutting the stone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, the normal force has a weak or moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted by Shore hardness values.

  16. Do Culture-based Segments Predict Selection of Market Strategy?

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2015-01-01

    Full Text Available Academics and practitioners have already acknowledged the importance of unobservable segmentation bases (such as psychographics), yet they still focus on how well these bases are capable of describing relevant segments (the identifiability criterion) rather than on how precisely these segments can predict (the predictability criterion). Therefore, this paper intends to add a debate to this topic by exploring whether culture-based segments account for a selection of market strategy. To do so, a set of market strategy variables over a sample of 251 manufacturing firms was first regressed on a set of 19 cultural variables using canonical correlation analysis. Having found a significant relationship in the first canonical function, it was further examined by means of correspondence analysis which cultural segments – if any – are linked to which market strategies. However, as correspondence analysis failed to find a significant relationship, it may be concluded that business culture might relate to the adoption of market strategy but not to the cultural groupings presented in the paper.

  17. Risk prediction of cardiovascular death based on the QTc interval

    DEFF Research Database (Denmark)

    Nielsen, Jonas B; Graff, Claus; Rasmussen, Peter V

    2014-01-01

    AIMS: Using a large, contemporary primary care population we aimed to provide absolute long-term risks of cardiovascular death (CVD) based on the QTc interval and to test whether the QTc interval is of value in risk prediction of CVD on an individual level. METHODS AND RESULTS: Digital electrocardiograms from 173 529 primary care patients aged 50-90 years were collected during 2001-11. The Framingham formula was used for heart rate-correction of the QT interval. Data on medication, comorbidity, and outcomes were retrieved from administrative registries. During a median follow-up period of 6......

  18. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

    Full Text Available The productivity of a gas well declines over its production life, eventually to the point where it cannot satisfy economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, least square support vector machine (LSSVM) approach, adaptive neuro-fuzzy inference system (ANFIS), and decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time as a function of the Arps' decline curve exponent and the ratio of initial gas flow rate over total gas flow rate. It was concluded that the results obtained based on the models developed in the current study are in satisfactory agreement with the actual gas well production data. Furthermore, the results of the comparative study performed demonstrate that the LSSVM strategy is superior to the other models investigated for the prediction of both cumulative gas production and initial decline rate multiplied by time.
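
    A hedged sketch of one such machine-learning mapping: an RBF-kernel support vector regressor (a close cousin of the LSSVM used in the paper) trained to map the Arps decline exponent and the flow-rate ratio to cumulative production. The training data are generated from the Arps hyperbolic decline relation with fixed, illustrative qi and Di, so everything below is an assumption rather than the paper's data or model.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      # Arps hyperbolic decline: Gp = qi / ((1 - b) * Di) * (1 - (q/qi)**(1 - b)).
      qi, Di = 1000.0, 0.3
      b = rng.uniform(0.05, 0.9, 400)                             # decline exponent
      r = rng.uniform(0.05, 0.95, 400)                            # flow-rate ratio q/qi
      Gp = qi / ((1 - b) * Di) * (1 - r ** (1 - b)) / 1000.0      # cumulative production (scaled)
      Gp_noisy = Gp * (1 + 0.02 * rng.normal(size=Gp.size))

      X = np.column_stack([b, r])
      X_tr, X_te, y_tr, y_te = train_test_split(X, Gp_noisy, test_size=0.25, random_state=0)
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.05))
      model.fit(X_tr, y_tr)
      mape = np.mean(np.abs(model.predict(X_te) - y_te) / y_te) * 100
      print("mean absolute percentage error: %.1f%%" % mape)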

  19. Mobile Phone-Based Mood Ratings Prospectively Predict Psychotherapy Attendance.

    Science.gov (United States)

    Bruehlman-Senecal, Emma; Aguilera, Adrian; Schueller, Stephen M

    2017-09-01

    Psychotherapy nonattendance is a costly and pervasive problem. While prior research has identified stable patient-level predictors of attendance, far less is known about dynamic (i.e., time-varying) factors. Identifying dynamic predictors can clarify how clinical states relate to psychotherapy attendance and inform effective "just-in-time" interventions to promote attendance. The present study examines whether daily mood, as measured by responses to automated mobile phone-based text messages, prospectively predicts attendance in group cognitive-behavioral therapy (CBT) for depression. Fifty-six Spanish-speaking Latino patients with elevated depressive symptoms (46 women, mean age=50.92years, SD=10.90years), enrolled in a manualized program of group CBT, received daily automated mood-monitoring text messages. Patients' daily mood ratings, message response rate, and delay in responding were recorded. Patients' self-reported mood the day prior to a scheduled psychotherapy session significantly predicted attendance, even after controlling for patients' prior attendance history and age (OR=1.33, 95% CI [1.04, 1.70], p=.02). Positive mood corresponded to a greater likelihood of attendance. Our results demonstrate the clinical utility of automated mood-monitoring text messages in predicting attendance. These results underscore the value of text messaging, and other mobile technologies, as adjuncts to psychotherapy. Future work should explore the use of such monitoring to guide interventions to increase attendance, and ultimately the efficacy of psychotherapy. Copyright © 2017. Published by Elsevier Ltd.

  20. Decision tree-based learning to predict patient controlled analgesia consumption and readjustment

    Directory of Open Access Journals (Sweden)

    Hu Yuh-Jyh

    2012-11-01

    Full Text Available Abstract Background Appropriate postoperative pain management contributes to earlier mobilization, shorter hospitalization, and reduced cost. The undertreatment of pain may impede short-term recovery and have a detrimental long-term effect on health. This study focuses on Patient Controlled Analgesia (PCA), which is a delivery system for pain medication. This study proposes and demonstrates how to use machine learning and data mining techniques to predict analgesic requirements and PCA readjustment. Methods The sample in this study included 1099 patients. Every patient was described by 280 attributes, including the class attribute. In addition to commonly studied demographic and physiological factors, this study emphasizes attributes related to PCA. We used decision tree-based learning algorithms to predict analgesic consumption and PCA control readjustment based on the first few hours of PCA medications. We also developed a nearest neighbor-based data cleaning method to alleviate the class-imbalance problem in PCA setting readjustment prediction. Results The prediction accuracies of total analgesic consumption (continuous dose and PCA dose) and PCA analgesic requirement (PCA dose only) by an ensemble of decision trees were 80.9% and 73.1%, respectively. Decision tree-based learning outperformed Artificial Neural Network, Support Vector Machine, Random Forest, Rotation Forest, and Naïve Bayesian classifiers in analgesic consumption prediction. The proposed data cleaning method improved the performance of every learning method in this study of PCA setting readjustment prediction. Comparative analysis identified the informative attributes from the data mining models and compared them with the correlates of analgesic requirement reported in previous works. Conclusion This study presents a real-world application of data mining to anesthesiology. Unlike previous research, this study considers a wider variety of predictive factors, including PCA
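
    A hedged sketch of the two ingredients named in the abstract, an ensemble of decision trees and a nearest-neighbour data-cleaning step for class imbalance, is given below; the features, thresholds and the simple edited-nearest-neighbour rule are stand-ins, not the authors' exact method.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neighbors import NearestNeighbors
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(5)
      n = 1200
      # Hypothetical early-PCA features: doses in the first hours, pain score, age, weight.
      X = np.column_stack([rng.poisson(5, n), rng.integers(0, 11, n),
                           rng.integers(18, 85, n), rng.normal(70, 12, n)])
      # Imbalanced synthetic target: 1 = PCA setting needs readjustment.
      logit = -3 + 0.25 * X[:, 0] + 0.15 * X[:, 1]
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

      def nn_clean(X, y, k=3):
          # Edited-nearest-neighbour style cleaning (a stand-in for the paper's method):
          # drop majority-class samples that disagree with most of their k neighbours.
          # In practice the features would normally be scaled before the neighbour search.
          idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)[1]
          majority = np.bincount(y).argmax()
          keep = [i for i in range(len(y))
                  if y[i] != majority or (y[idx[i, 1:]] == y[i]).sum() >= k / 2]
          return X[keep], y[keep]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
      X_cl, y_cl = nn_clean(X_tr, y_tr)
      forest = RandomForestClassifier(n_estimators=50, max_depth=5, random_state=0).fit(X_cl, y_cl)
      print("accuracy:", round(accuracy_score(y_te, forest.predict(X_te)), 3))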

  1. Classification and Regression Tree Analysis of Clinical Patterns that Predict Survival in 127 Chinese Patients with Advanced Non-small Cell Lung Cancer Treated by Gefitinib Who Failed to Previous Chemotherapy

    Directory of Open Access Journals (Sweden)

    Ziping WANG

    2011-09-01

    Full Text Available Background and objective It has been proven that gefitinib produces only 10%-20% tumor regression in heavily pretreated, unselected non-small cell lung cancer (NSCLC) patients in the second- and third-line setting. Asian, female, nonsmokers and adenocarcinoma are favorable factors; however, it is difficult to find a patient satisfying all the above clinical characteristics. The aim of this study is to identify novel predicting factors, and to explore the interactions between clinical variables and their impact on the survival of Chinese patients with advanced NSCLC who were heavily treated with gefitinib in the second- or third-line setting. Methods The clinical and follow-up data of 127 advanced NSCLC patients referred to the Cancer Hospital & Institute, Chinese Academy of Medical Sciences from March 2005 to March 2010 were analyzed. Multivariate analysis of progression-free survival (PFS) was performed using recursive partitioning, which is referred to as the classification and regression tree (CART) analysis. Results The median PFS of 127 eligible consecutive advanced NSCLC patients was 8.0 months (95%CI: 5.8-10.2). CART was performed with an initial split on first-line chemotherapy outcomes and a second split on patients' age. Three terminal subgroups were formed. The median PFS of the three subsets was 1.0 month (95%CI: 0.8-1.2) for those with a progressive disease outcome after first-line chemotherapy, 10 months (95%CI: 7.0-13.0) in patients with a partial response or stable disease in first-line chemotherapy and age <70, and 22.0 months (95%CI: 3.8-40.1) for patients obtaining a partial response or stable disease in first-line chemotherapy at age 70-81. Conclusion Partial response, stable disease in first-line chemotherapy and age ≥ 70 are closely correlated with long-term survival treated by gefitinib as a second- or third-line setting in advanced NSCLC. CART can be used to identify previously unappreciated patient

  2. A prediction method based on grey system theory in equipment condition based maintenance

    International Nuclear Information System (INIS)

    Yan, Shengyuan; Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming

    2007-01-01

    Grey prediction is a modeling method based on historical or present, known or indefinite information, which can be used to forecast the development of the eigenvalues of the targeted equipment system and to set up the model using less information. In this paper, the postulates of grey system theory, which include grey generating, the sorts of grey generating and the grey forecasting model, are introduced first. The concrete application process, which includes grey prediction modeling, grey prediction, error calculation, and the equal dimension and new information approach, is introduced second. A so-called 'Equal Dimension and New Information' (EDNI) technology from grey system theory is adopted in an application case, aiming at improving the accuracy of prediction without increasing the amount of calculation by replacing old data with new ones. The proposed method offers a new and effective way of handling the expanding eigenvalue data under equal-distance sampling, short time intervals and real-time prediction. The proposed method, which is based on historical or present, known or indefinite information, was verified by the vibration prediction of an induced draft fan of a boiler of the Yantai Power Station in China, and the results show that the proposed method based on grey system theory is simple and provides high accuracy in prediction. It is therefore very useful and significant for control and management in safe production. (authors)
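
    The core of the approach can be shown with a compact GM(1,1) sketch: accumulate the series, fit the grey coefficients by least squares, forecast, and then apply the 'equal dimension, new information' idea by dropping the oldest point whenever a new observation arrives. The data and window length below are illustrative, not the Yantai fan measurements.

      import numpy as np

      def gm11_forecast(x0, steps=1):
          # Classic GM(1,1): fit a, b on the accumulated series and forecast `steps` ahead.
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                              # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
          B = np.column_stack([-z1, np.ones(len(z1))])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development / control coefficients
          x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
          n = len(x0)
          return [x1_hat(n + s - 1) - x1_hat(n + s - 2) for s in range(1, steps + 1)]

      # Equal dimension, new information: keep a fixed-length window, replacing the oldest
      # value with each new measurement before re-fitting.
      window = [2.87, 3.02, 3.25, 3.41, 3.60]             # synthetic vibration eigenvalues
      for new_obs in [3.78, 3.96]:
          pred = gm11_forecast(window, steps=1)[0]
          print("predicted %.3f, observed %.3f" % (pred, new_obs))
          window = window[1:] + [new_obs]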

  3. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Directory of Open Access Journals (Sweden)

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  4. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Science.gov (United States)

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  5. DEEPre: sequence-based enzyme EC number prediction by deep learning

    KAUST Repository

    Li, Yu

    2017-10-20

    Annotation of enzyme function has a broad range of applications, such as metagenomics, industrial biotechnology, and diagnosis of enzyme deficiency-caused diseases. However, the time and resource required make it prohibitively expensive to experimentally determine the function of every enzyme. Therefore, computational enzyme function prediction has become increasingly important. In this paper, we develop such an approach, determining the enzyme function by predicting the Enzyme Commission number. We propose an end-to-end feature selection and classification model training approach, as well as an automatic and robust feature dimensionality uniformization method, DEEPre, in the field of enzyme function prediction. Instead of extracting manually crafted features from enzyme sequences, our model takes the raw sequence encoding as inputs, extracting convolutional and sequential features from the raw encoding based on the classification result to directly improve the prediction performance. The thorough cross-fold validation experiments conducted on two large-scale datasets show that DEEPre improves the prediction performance over the previous state-of-the-art methods. In addition, our server outperforms five other servers in determining the main class of enzymes on a separate low-homology dataset. Two case studies demonstrate DEEPre's ability to capture the functional difference of enzyme isoforms. The server could be accessed freely at http://www.cbrc.kaust.edu.sa/DEEPre.
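
    As a hedged, toy illustration of the raw-encoding-to-class pipeline the abstract describes (not the DEEPre architecture itself), the PyTorch sketch below one-hot encodes an amino-acid sequence and pushes it through a small 1D convolutional classifier for the EC main class; the alphabet handling, layer sizes and training data are assumptions.

      import torch
      import torch.nn as nn

      AA = "ACDEFGHIKLMNPQRSTVWY"                         # 20 standard amino acids
      def one_hot(seq, length=200):
          x = torch.zeros(len(AA), length)
          for i, ch in enumerate(seq[:length]):
              if ch in AA:
                  x[AA.index(ch), i] = 1.0
          return x

      class TinyEnzymeCNN(nn.Module):
          # Toy stand-in for a sequence-based EC main-class predictor.
          def __init__(self, n_classes=7):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv1d(len(AA), 32, kernel_size=7, padding=3), nn.ReLU(),
                  nn.AdaptiveMaxPool1d(1),                # global max pooling over positions
              )
              self.fc = nn.Linear(32, n_classes)
          def forward(self, x):                           # x: (batch, 20, length)
              return self.fc(self.conv(x).squeeze(-1))

      # One optimisation step on a fake batch, just to show the shape of the training loop.
      model = TinyEnzymeCNN()
      optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
      batch = torch.stack([one_hot("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"), one_hot("MSLEQKKGADIISKILQ")])
      labels = torch.tensor([2, 0])                       # fake EC main-class labels
      loss = nn.CrossEntropyLoss()(model(batch), labels)
      optimiser.zero_grad(); loss.backward(); optimiser.step()
      print("training loss on the fake batch:", float(loss))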

  6. Structure-based methods to predict mutational resistance to diarylpyrimidine non-nucleoside reverse transcriptase inhibitors.

    Science.gov (United States)

    Azeem, Syeda Maryam; Muwonge, Alecia N; Thakkar, Nehaben; Lam, Kristina W; Frey, Kathleen M

    2018-01-01

    Resistance to non-nucleoside reverse transcriptase inhibitors (NNRTIs) is a leading cause of HIV treatment failure. Often included in antiviral therapy, NNRTIs are chemically diverse compounds that bind an allosteric pocket of enzyme target reverse transcriptase (RT). Several new NNRTIs incorporate flexibility in order to compensate for lost interactions with amino acid conferring mutations in RT. Unfortunately, even successful inhibitors such as diarylpyrimidine (DAPY) inhibitor rilpivirine are affected by mutations in RT that confer resistance. In order to aid drug design efforts, it would be efficient and cost effective to pre-evaluate NNRTI compounds in development using a structure-based computational approach. As proof of concept, we applied a residue scan and molecular dynamics strategy using RT crystal structures to predict mutations that confer resistance to DAPYs rilpivirine, etravirine, and investigational microbicide dapivirine. Our predictive values, changes in affinity and stability, are correlative with fold-resistance data for several RT mutants. Consistent with previous studies, mutation K101P is predicted to confer high-level resistance to DAPYs. These findings were further validated using structural analysis, molecular dynamics, and an enzymatic reverse transcription assay. Our results confirm that changes in affinity and stability for mutant complexes are predictive parameters of resistance as validated by experimental and clinical data. In future work, we believe that this computational approach may be useful to predict resistance mutations for inhibitors in development. Published by Elsevier Inc.

  7. DEEPre: sequence-based enzyme EC number prediction by deep learning

    KAUST Repository

    Li, Yu; Wang, Sheng; Umarov, Ramzan; Xie, Bingqing; Fan, Ming; Li, Lihua; Gao, Xin

    2017-01-01

    Annotation of enzyme function has a broad range of applications, such as metagenomics, industrial biotechnology, and diagnosis of enzyme deficiency-caused diseases. However, the time and resource required make it prohibitively expensive to experimentally determine the function of every enzyme. Therefore, computational enzyme function prediction has become increasingly important. In this paper, we develop such an approach, determining the enzyme function by predicting the Enzyme Commission number. We propose an end-to-end feature selection and classification model training approach, as well as an automatic and robust feature dimensionality uniformization method, DEEPre, in the field of enzyme function prediction. Instead of extracting manually crafted features from enzyme sequences, our model takes the raw sequence encoding as inputs, extracting convolutional and sequential features from the raw encoding based on the classification result to directly improve the prediction performance. The thorough cross-fold validation experiments conducted on two large-scale datasets show that DEEPre improves the prediction performance over the previous state-of-the-art methods. In addition, our server outperforms five other servers in determining the main class of enzymes on a separate low-homology dataset. Two case studies demonstrate DEEPre's ability to capture the functional difference of enzyme isoforms. The server could be accessed freely at http://www.cbrc.kaust.edu.sa/DEEPre.

  8. Phosphate-based glasses: Prediction of acoustical properties

    Science.gov (United States)

    El-Moneim, Amin Abd

    2016-04-01

    In this work, a comprehensive study has been carried out to predict the composition dependence of bulk modulus and ultrasonic attenuation coefficient in the phosphate-based glass systems PbO-P2O5, Li2O-TeO2-B2O3-P2O5, TiO2-Na2O-CaO-P2O5 and Cr2O3-doped Na2O-ZnO-P2O5 at room temperature. The prediction is based on (i) Makishima-Mackenzie theory, which correlates the bulk modulus with packing density and dissociation energy per unit volume, and (ii) Our recently presented semi-empirical formulas, which correlate the ultrasonic attenuation coefficient with the oxygen density, mean atomic ring size, first-order stretching force constant and experimental bulk modulus. Results revealed that our recently presented semi-empirical formulas can be applied successfully to predict changes of ultrasonic attenuation coefficient in binary PbO-P2O5 glasses at 10 MHz frequency and in quaternary Li2O-TeO2-B2O3-P2O5, TiO2-Na2O-CaO-P2O5 and Cr2O3-Na2O-ZnO-P2O5 glasses at 5 MHz frequency. Also, Makishima-Mackenzie theory appears to be valid for the studied glasses if the effect of the basic structural units that present in the glass network is taken into account.

  9. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    Full Text Available The objective of this study was to introduce the application of the Richards equation to the modelling and prediction of stand diameter distribution. The long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used. Also, 150 stands were used as fitting data, and the other 159 stands were used for testing. The nonlinear regression method (NRM) or maximum likelihood estimates method (MLEM) were applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parametric Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution has significant relativity with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM or the combination of PPM and PRM under the condition that only the quadratic mean DBH, or additionally the stand age, is known, and the non-rejection rates were near 80%, which are higher than the 72.33% non-rejection rate of the three-parametric Weibull function based on the combination of PPM and PRM.
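
    A hedged sketch of fitting a Richards-type curve to an empirical cumulative diameter distribution with SciPy follows. One common parameterisation is used, F(d) = (1 + b*exp(-k*d))**(-1/m); the symbols do not necessarily match the paper's p, q and r, and the diameters are synthetic.

      import numpy as np
      from scipy.optimize import curve_fit

      def richards_cdf(d, b, k, m):
          # One common Richards-type form for a cumulative diameter distribution.
          return (1.0 + b * np.exp(-k * d)) ** (-1.0 / m)

      rng = np.random.default_rng(6)
      # Synthetic stand diameters (cm) and their empirical cumulative distribution.
      diameters = np.sort(rng.gamma(shape=6.0, scale=2.5, size=300))
      ecdf = np.arange(1, diameters.size + 1) / diameters.size

      params, _ = curve_fit(richards_cdf, diameters, ecdf, p0=[5.0, 0.3, 1.0],
                            bounds=([1e-6, 1e-6, 1e-2], [np.inf, np.inf, np.inf]))
      rmse = np.sqrt(np.mean((richards_cdf(diameters, *params) - ecdf) ** 2))
      print("fitted (b, k, m):", np.round(params, 3), " RMSE:", round(rmse, 4))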

  10. Proportion of U.S. Civilian Population Ineligible for U.S. Air Force Enlistment Based on Current and Previous Weight Standards

    National Research Council Canada - National Science Library

    D'Mello, Tiffany A; Yamane, Grover K

    2007-01-01

    .... Until recently, gender-specific weight standards based on height were in place. However, in June 2006 the USAF implemented a new set of height-weight limits utilizing body mass index (BMI) criteria...

  11. Mining key elements for severe convection prediction based on CNN

    Science.gov (United States)

    Liu, Ming; Pan, Ning; Zhang, Changan; Sha, Hongzhou; Zhang, Bolei; Liu, Liang; Zhang, Meng

    2017-04-01

    Severe convective weather is a kind of weather disaster accompanied by heavy rainfall, gusty wind, hail, etc. Along with recent developments in remote sensing and numerical modeling, there are high-volume and long-term observational and modeling data accumulated to capture massive severe convective events over particular areas and time periods. With those high-volume and high-variety weather data, most of the existing studies and methods carry out the dynamical laws, cause analysis, potential rule study, and prediction enhancement by utilizing the governing equations from fluid dynamics and thermodynamics. In this study, a key-element mining method is proposed for severe convection prediction based on a convolutional neural network (CNN). It aims to identify the key areas and key elements from huge amounts of historical weather data including conventional measurements, weather radar, satellite, as well as numerical modeling and/or reanalysis data. In this manner, the machine-learning based method could help human forecasters in their decision-making on operational forecasts of severe convective weather by extracting key information from real-time and historical weather big data. In this paper, it first utilizes computer vision technology to complete the data preprocessing work on the meteorological variables. Then, it utilizes information such as radar maps and expert knowledge to annotate all images automatically. And finally, by using the CNN model, it could analyze and evaluate each weather element (e.g., particular variables, patterns, features, etc.), identify key areas of those critical weather elements, and then help forecasters quickly screen out the key elements from huge amounts of observation data under current weather conditions. Based on the rich weather measurement and model data (up to 10 years) over Fujian province in China, where severe convective weathers are very active during the summer months, experimental tests are conducted with

  12. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with the linear kernel and sigmoid kernel, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the error margin of prediction and the model parameters, the proper parameters were chosen. A support vector machine (SVM) model with multi-input and single output was proposed. Comparing the predictor with one based on RBF neural networks on test datasets, the results show that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
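
    A hedged sketch of the setup described above: embed the annual demand series with a chosen embedding dimension, train an RBF-kernel support vector regressor, and forecast recursively. The demand figures below are synthetic placeholders, not the Chinese statistics used in the paper, and the embedding dimension and hyperparameters are illustrative.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(7)
      # Synthetic stand-in for an annual coal-demand series (the paper used 1980-2002 data).
      demand = 6.0 * 1.05 ** np.arange(23) + rng.normal(scale=0.2, size=23)   # arbitrary units

      embed = 3                                    # embedding dimension (illustrative choice)
      X = np.array([demand[i:i + embed] for i in range(len(demand) - embed)])
      y = demand[embed:]
      svr = SVR(kernel="rbf", C=100.0, gamma="scale", epsilon=0.05).fit(X, y)

      # Recursive multi-step forecast for the next four "years".
      window = list(demand[-embed:])
      forecasts = []
      for _ in range(4):
          nxt = svr.predict(np.array(window).reshape(1, -1))[0]
          forecasts.append(nxt)
          window = window[1:] + [nxt]
      print("four-step forecast:", np.round(forecasts, 2))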

  13. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    Directory of Open Access Journals (Sweden)

    Yu Li-ping

    2014-01-01

    Full Text Available Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines the boosting-based learning algorithm with an entropy-based transferability, which is derived from the prediction consistency with the source classifications, to selectively choose the samples showing positive transferability in source domains to the target domain. Experimental results show that our approach can improve the detection rate, especially with insufficient labeled data in the target scene.

  14. Transcriptome dynamics-based operon prediction in prokaryotes.

    Science.gov (United States)

    Fortino, Vittorio; Smolander, Olli-Pekka; Auvinen, Petri; Tagliaferri, Roberto; Greco, Dario

    2014-05-16

    Inferring operon maps is crucial to understanding the regulatory networks of prokaryotic genomes. Recently, RNA-seq based transcriptome studies revealed that in many bacterial species the operon structure vary with the change of environmental conditions. Therefore, new computational solutions that use both static and dynamic data are necessary to create condition specific operon predictions. In this work, we propose a novel classification method that integrates RNA-seq based transcriptome profiles with genomic sequence features to accurately identify the operons that are expressed under a measured condition. The classifiers are trained on a small set of confirmed operons and then used to classify the remaining gene pairs of the organism studied. Finally, by linking consecutive gene pairs classified as operons, our computational approach produces condition-dependent operon maps. We evaluated our approach on various RNA-seq expression profiles of the bacteria Haemophilus somni, Porphyromonas gingivalis, Escherichia coli and Salmonella enterica. Our results demonstrate that, using features depending on both transcriptome dynamics and genome sequence characteristics, we can identify operon pairs with high accuracy. Moreover, the combination of DNA sequence and expression data results in more accurate predictions than each one alone. We present a computational strategy for the comprehensive analysis of condition-dependent operon maps in prokaryotes. Our method can be used to generate condition specific operon maps of many bacterial organisms for which high-resolution transcriptome data is available.

  15. Analyst-to-Analyst Variability in Simulation-Based Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes findings from the culminating experiment of the LDRD project entitled, "Analyst-to-Analyst Variability in Simulation-Based Prediction". For this experiment, volunteer participants solving a given test problem in engineering and statistics were interviewed at different points in their solution process. These interviews are used to trace differing solutions to differing solution processes, and differing processes to differences in reasoning, assumptions, and judgments. The issue that the experiment was designed to illuminate -- our paucity of understanding of the ways in which humans themselves have an impact on predictions derived from complex computational simulations -- is a challenging and open one. Although solution of the test problem by analyst participants in this experiment has taken much more time than originally anticipated, and is continuing past the end of this LDRD, this project has provided a rare opportunity to explore analyst-to-analyst variability in significant depth, from which we derive evidence-based insights to guide further explorations in this important area.

  16. Predicting chick body mass by artificial intelligence-based models

    Directory of Open Access Journals (Sweden)

    Patricia Ferreira Ponciano Ferraz

    2014-07-01

    Full Text Available The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks), with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and the daily body mass of chicks, was used for network training, validation, and tests of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were the most accurate in predicting the body mass of chicks from 2 to 21 days of age subjected to the input variables, showing an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in heating control systems.
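
    A hedged sketch of an ANN of the kind described above (not the authors' model): a small feed-forward network mapping dry-bulb air temperature, duration of thermal stress, and chick age to body mass, built with scikit-learn. The training data are synthetic placeholders, not the 840-record experimental database.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X = np.column_stack([
          rng.uniform(25, 36, 500),   # dry-bulb air temperature (deg C)
          rng.integers(1, 4, 500),    # duration of thermal stress (days)
          rng.integers(2, 22, 500),   # chick age (days)
      ])
      # stand-in growth curve with a penalty for departing from thermal comfort
      y = 35 + 28 * X[:, 2] - 1.5 * np.abs(X[:, 0] - 30) + rng.normal(0, 5, 500)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                         random_state=0))
      model.fit(X, y)
      print(model.predict([[33.0, 2, 14]]))   # predicted body mass (g) at day 14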

  17. Prediction Based Energy Balancing Forwarding in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Yang Jian-Jun

    2017-01-01

    Full Text Available In recent cellular network technologies, relay stations extend cell coverage and enhance signal strength for mobile users. However, busy traffic makes the relay stations in hot areas run out of energy quickly. Energy is a very important factor in forwarding in cellular networks, since mobile users (cell phones) in hot cells often suffer from low throughput due to energy shortages. In many situations, these energy shortages occur because the energy load is not balanced. In this paper, we present a prediction-based forwarding algorithm that lets a mobile node dynamically select the next relay station with the highest potential energy capacity to resume communication. Key to this strategy is that a relay station maintains only its three most recent status records, from which it can predict its potential energy capacity. The node then selects the next hop with the maximal potential energy. Moreover, a location-based algorithm is developed to let the mobile node determine the target region in order to avoid flooding. Simulations demonstrate that our approach significantly increases the aggregate throughput and decreases the delay in cellular network environments.
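
    An illustrative sketch, not the paper's algorithm: each relay keeps only its three most recent energy readings, the node linearly extrapolates the next value for each candidate, and forwards to the relay with the highest predicted energy. The relay names and readings are hypothetical.

      def predict_next(history3):
          """Linear-trend prediction from the last three energy readings."""
          e1, e2, e3 = history3
          trend = ((e2 - e1) + (e3 - e2)) / 2.0
          return e3 + trend

      relays = {
          "RS-A": [80, 74, 70],   # draining quickly
          "RS-B": [60, 59, 58],   # draining slowly
          "RS-C": [55, 62, 66],   # recharging / lightly loaded
      }
      next_hop = max(relays, key=lambda r: predict_next(relays[r]))
      print(next_hop)   # -> 'RS-C'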

  18. Module-based outcome prediction using breast cancer compendia.

    Directory of Open Access Journals (Sweden)

    Martin H van Vliet

    Full Text Available BACKGROUND: The availability of large collections of microarray datasets (compendia, or knowledge about grouping of genes into pathways (gene sets, is typically not exploited when training predictors of disease outcome. These can be useful since a compendium increases the number of samples, while gene sets reduce the size of the feature space. This should be favorable from a machine learning perspective and result in more robust predictors. METHODOLOGY: We extracted modules of regulated genes from gene sets, and compendia. Through supervised analysis, we constructed predictors which employ modules predictive of breast cancer outcome. To validate these predictors we applied them to independent data, from the same institution (intra-dataset, and other institutions (inter-dataset. CONCLUSIONS: We show that modules derived from single breast cancer datasets achieve better performance on the validation data compared to gene-based predictors. We also show that there is a trend in compendium specificity and predictive performance: modules derived from a single breast cancer dataset, and a breast cancer specific compendium perform better compared to those derived from a human cancer compendium. Additionally, the module-based predictor provides a much richer insight into the underlying biology. Frequently selected gene sets are associated with processes such as cell cycle, E2F regulation, DNA damage response, proteasome and glycolysis. We analyzed two modules related to cell cycle, and the OCT1 transcription factor, respectively. On an individual basis, these modules provide a significant separation in survival subgroups on the training and independent validation data.

  19. Protein Function Prediction Based on Sequence and Structure Information

    KAUST Repository

    Smaili, Fatima Z.

    2016-05-25

    The number of available protein sequences in public databases is increasing exponentially. However, a significant fraction of these sequences lack functional annotation which is essential to our understanding of how biological systems and processes operate. In this master thesis project, we worked on inferring protein functions based on the primary protein sequence. In the approach we follow, 3D models are first constructed using I-TASSER. Functions are then deduced by structurally matching these predicted models, using global and local similarities, through three independent enzyme commission (EC) and gene ontology (GO) function libraries. The method was tested on 250 “hard” proteins, which lack homologous templates in both structure and function libraries. The results show that this method outperforms the conventional prediction methods based on sequence similarity or threading. Additionally, our method could be improved even further by incorporating protein-protein interaction information. Overall, the method we use provides an efficient approach for automated functional annotation of non-homologous proteins, starting from their sequence.

  20. The predictive value of the sacral base pressure test in detecting specific types of sacroiliac dysfunction

    Science.gov (United States)

    Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris

    2007-01-01

    Abstract Objective This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet, occurring during the sacral base pressure test, was measured using a digital inclinometer. Results There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of the results of validity, the sacral base pressure test was useful in identifying positive values of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only a “slight” agreement with the diagnosis for κ interpretation. Conclusions In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694

  1. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    Full Text Available The radial basis function (RBF) neural network is a popular supervised learning tool based on machine learning technology. Its high precision has been proven, and it has been applied in many areas. The accumulation of deposited material in a pipeline may lead to the need for increased pumping power, a decreased flow rate or even the total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate from the four main influencing factors identified by the gray correlational analysis method: the pipe wall temperature gradient, pipe wall wax crystal solubility coefficient, pipe wall shear stress and crude oil viscosity. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method of wax deposition rate based on the RBF neural network is feasible.
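
    A minimal RBF-network sketch in the spirit of the abstract (not the authors' MATLAB model): Gaussian basis functions centred on a subset of the training points, with output weights fitted by least squares. The four inputs mirror the influencing factors listed above, but the training data are synthetic placeholders.

      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.uniform(0, 1, size=(200, 4))          # 4 normalised influencing factors
      y = 2.0 * X[:, 0] + 1.2 * X[:, 1] - 0.8 * X[:, 2] + 0.5 * X[:, 3] \
          + 0.1 * rng.normal(size=200)              # stand-in deposition rate

      centers = X[rng.choice(len(X), 20, replace=False)]   # RBF centres
      width = 0.5

      def rbf_features(X):
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * width ** 2))     # Gaussian basis activations

      W, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)   # output weights
      x_new = np.array([[0.6, 0.4, 0.3, 0.7]])
      print(rbf_features(x_new) @ W)                # predicted wax deposition rate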

  2. Fatigue Life Prediction of High Modulus Asphalt Concrete Based on the Local Stress-Strain Method

    Directory of Open Access Journals (Sweden)

    Mulian Zheng

    2017-03-01

    Full Text Available Previously published studies have proposed fatigue life prediction models for dense-graded asphalt pavement based on flexural fatigue tests. This study focused on the fatigue life prediction of High Modulus Asphalt Concrete (HMAC) pavement using the local stress-strain method and the direct tension fatigue test. First, the direct tension fatigue test at various strain levels was conducted on HMAC prism samples cut from plate specimens. Afterwards, their true stress-strain loop curves were obtained and modified to develop the strain-fatigue life equation. Then the nominal strain of the HMAC course, determined using the finite element method, was converted into local strain using the Neuber method. Finally, based on the established fatigue equation and the converted local strain, a method to predict the pavement fatigue crack initiation life was proposed, and the fatigue life of a typical HMAC overlay pavement at risk of bottom-up cracking was predicted and validated. Results show that the proposed method was able to produce satisfactory crack initiation life predictions.

  3. Episodic memories predict adaptive value-based decision-making

    Science.gov (United States)

    Murty, Vishnu; FeldmanHall, Oriel; Hunter, Lindsay E.; Phelps, Elizabeth A; Davachi, Lila

    2016-01-01

    Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to re-engage with previously encountered lotteries, or new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to re-engage more often with lotteries that resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations, such as being treated unfairly, suggesting a bias for how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior. PMID:26999046

  4. Distance matrix-based approach to protein structure prediction.

    Science.gov (United States)

    Kloczkowski, Andrzej; Jernigan, Robert L; Wu, Zhijun; Song, Guang; Yang, Lei; Kolinski, Andrzej; Pokarowski, Piotr

    2009-03-01

    Much structural information is encoded in the internal distances; a distance matrix-based approach can be used to predict protein structure and dynamics, and for structural refinement. Our approach is based on the square distance matrix D = [r_ij^2] containing all square distances between residues in proteins. This distance matrix contains more information than the contact matrix C, which has elements of either 0 or 1 depending on whether the distance r_ij is greater or less than a cutoff value r_cutoff. We have performed spectral decomposition of the distance matrices, D = Σ_k λ_k v_k v_k^T, in terms of eigenvalues λ_k and the corresponding eigenvectors v_k, and found that it contains at most five nonzero terms. The dominant eigenvector is proportional to r^2, the square distance of points from the center of mass, with the next three being the principal components of the system of points. By predicting r^2 from the sequence we can approximate a distance matrix of a protein with an expected RMSD value of about 7.3 A, and by combining it with the prediction of the first principal component we can improve this approximation to 4.0 A. We can also explain the role of hydrophobic interactions for the protein structure, because r is highly correlated with the hydrophobic profile of the sequence. Moreover, r is highly correlated with several sequence profiles which are useful in protein structure prediction, such as contact number, the residue-wise contact order (RWCO) or mean square fluctuations (i.e. crystallographic temperature factors). We have also shown that the next three components are related to spatial directionality of the secondary structure elements, and they may also be predicted from the sequence, improving overall structure prediction. We have also shown that the large number of available HIV-1 protease structures provides a remarkable sampling of conformations, which can be viewed as direct structural information about the
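
    A quick numerical check (a sketch, not the authors' code) of the statement that the squared-distance matrix D = [r_ij^2] of a three-dimensional point set has at most five nonzero terms in its spectral decomposition.

      import numpy as np

      rng = np.random.default_rng(0)
      coords = rng.normal(size=(50, 3))                 # 50 random "residues" in 3-D
      diff = coords[:, None, :] - coords[None, :, :]
      D = (diff ** 2).sum(-1)                           # squared distance matrix

      eigvals = np.linalg.eigvalsh(D)                   # D is symmetric
      significant = np.sum(np.abs(eigvals) > 1e-8 * np.abs(eigvals).max())
      print(significant)                                # -> 5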

  5. Population PK modelling and simulation based on fluoxetine and norfluoxetine concentrations in milk: a milk concentration-based prediction model.

    Science.gov (United States)

    Tanoshima, Reo; Bournissen, Facundo Garcia; Tanigawara, Yusuke; Kristensen, Judith H; Taddio, Anna; Ilett, Kenneth F; Begg, Evan J; Wallach, Izhar; Ito, Shinya

    2014-10-01

    Population pharmacokinetic (pop PK) modelling can be used for PK assessment of drugs in breast milk. However, complex mechanistic modelling of a parent and an active metabolite using both blood and milk samples is challenging. We aimed to develop a simple predictive pop PK model for milk concentration-time profiles of a parent and a metabolite, using data on fluoxetine (FX) and its active metabolite, norfluoxetine (NFX), in milk. Using a previously published data set of drug concentrations in milk from 25 women treated with FX, a pop PK model predictive of milk concentration-time profiles of FX and NFX was developed. Simulation was performed with the model to generate FX and NFX concentration-time profiles in milk of 1000 mothers. This milk concentration-based pop PK model was compared with the previously validated plasma/milk concentration-based pop PK model of FX. Milk FX and NFX concentration-time profiles were described reasonably well by a one compartment model with a FX-to-NFX conversion coefficient. Median values of the simulated relative infant dose on a weight basis (sRID: weight-adjusted daily doses of FX and NFX through breastmilk to the infant, expressed as a fraction of therapeutic FX daily dose per body weight) were 0.028 for FX and 0.029 for NFX. The FX sRID estimates were consistent with those of the plasma/milk-based pop PK model. A predictive pop PK model based on only milk concentrations can be developed for simultaneous estimation of milk concentration-time profiles of a parent (FX) and an active metabolite (NFX). © 2014 The British Pharmacological Society.

  6. Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems

    International Nuclear Information System (INIS)

    Kovalenko, Andriy

    2014-01-01

    Cellulose Nanocrystals (CNC) is a renewable, biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and therefore is very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in the non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain a fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with the existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology

  7. Predictive Multiscale Modeling of Nanocellulose Based Materials and Systems

    Science.gov (United States)

    Kovalenko, Andriy

    2014-08-01

    Cellulose Nanocrystals (CNC) is a renewable, biodegradable biopolymer with outstanding mechanical properties made from a highly abundant natural source, and therefore is very attractive as a reinforcing additive to replace petroleum-based plastics in biocomposite materials, foams, and gels. Large-scale applications of CNC are currently limited due to its low solubility in the non-polar organic solvents used in existing polymerization technologies. The solvation properties of CNC can be improved by chemical modification of its surface. Development of effective surface modifications has been rather slow because extensive chemical modifications destabilize the hydrogen bonding network of cellulose and deteriorate the mechanical properties of CNC. We employ predictive multiscale theory, modeling, and simulation to gain a fundamental insight into the effect of CNC surface modifications on hydrogen bonding, CNC crystallinity, solvation thermodynamics, and CNC compatibilization with the existing polymerization technologies, so as to rationally design green nanomaterials with improved solubility in non-polar solvents, controlled liquid crystal ordering and optimized extrusion properties. An essential part of this multiscale modeling approach is the statistical-mechanical 3D-RISM-KH molecular theory of solvation, coupled with quantum mechanics, molecular mechanics, and multistep molecular dynamics simulation. The 3D-RISM-KH theory provides predictive modeling of both polar and non-polar solvents, solvent mixtures, and electrolyte solutions in a wide range of concentrations and thermodynamic states. It properly accounts for effective interactions in solution such as steric effects, hydrophobicity and hydrophilicity, hydrogen bonding, salt bridges, buffer, co-solvent, and successfully predicts solvation effects and processes in bulk liquids, solvation layers at solid surfaces, and in pockets and other inner spaces of macromolecules and supramolecular assemblies. This methodology

  8. MO-FG-303-03: Demonstration of Universal Knowledge-Based 3D Dose Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, S; Moore, K L [University of California, San Diego, La Jolla, CA (United States)

    2015-06-15

    Purpose: To demonstrate a knowledge-based 3D dose prediction methodology that can accurately predict achievable radiotherapy dose distributions. Methods: Using previously treated plans as input, an artificial neural network (ANN) was trained to predict 3D dose distributions based on 14 patient-specific anatomical parameters including the distance (r) to the planning target volume (PTV) boundary, organ-at-risk (OAR) boundary distances, and angular position (θ,φ). 23 prostate and 49 stereotactic radiosurgery (SRS) cases with ≥1 nearby OARs were studied. All were planned with volumetric-modulated arc therapy (VMAT) to prescription doses of 81Gy for prostate and 12–30Gy for SRS. Site-specific ANNs were trained using all 23 prostate plans and a randomly-selected subset of 24 plans for the SRS model. The remaining 25 SRS plans were used to validate the model. To quantify predictive accuracy, the dose difference between the clinical plan and the prediction was calculated on a voxel-by-voxel basis, δD(r,θ,φ) = D_clin(r,θ,φ) − D_pred(r,θ,φ). Grouping voxels by boundary distance, the mean <δD_r> = (1/N) Σ_{θ,φ} δD(r,θ,φ) and the inter-quartile range (IQR) quantified the accuracy of this method for deriving DVH estimations. The standard deviation (σ) of δD quantified the 3D dose prediction error on a voxel-by-voxel basis. Results: The ANNs were highly accurate in predictive ability for both prostate and SRS plans. For prostate, <δD_r> ranged from −0.8% to +0.6% (max IQR=3.8%) over r=0–32mm, while 3D dose prediction accuracy averaged σ=5–8% across the same range. For SRS, from r=0–34mm the training set <δD_r> ranged from −3.7% to +1.5% (max IQR=4.4%) while the validation set <δD_r> ranged from −2.2% to +5.8% (max IQR=5.3%). 3D dose prediction accuracy averaged σ=2.5% for the training set and σ=4.0% over the same interval. Conclusion: The study demonstrates this technique's ability to predict achievable 3D dose distributions for VMAT SRS and prostate. Future
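
    A hedged sketch of the error summary described above, using synthetic numbers rather than the study's plans: per-voxel dose differences δD are grouped by the distance r to the PTV boundary, and the mean, inter-quartile range and standard deviation are reported per distance bin.

      import numpy as np

      rng = np.random.default_rng(0)
      r = rng.uniform(0, 32, 100000)                    # voxel distance to PTV (mm)
      d_clin = 100 * np.exp(-r / 15) + rng.normal(0, 3, r.size)   # % of prescription
      d_pred = 100 * np.exp(-r / 15)                    # stand-in predicted dose
      delta = d_clin - d_pred                           # delta-D per voxel

      for lo in range(0, 32, 8):
          m = (r >= lo) & (r < lo + 8)
          q1, q3 = np.percentile(delta[m], [25, 75])
          print(f"r {lo:2d}-{lo+8:2d} mm: mean {delta[m].mean():+5.2f}%  "
                f"IQR {q3 - q1:4.2f}%  sigma {delta[m].std():4.2f}%")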

  9. MO-FG-303-03: Demonstration of Universal Knowledge-Based 3D Dose Prediction

    International Nuclear Information System (INIS)

    Shiraishi, S; Moore, K L

    2015-01-01

    Purpose: To demonstrate a knowledge-based 3D dose prediction methodology that can accurately predict achievable radiotherapy dose distributions. Methods: Using previously treated plans as input, an artificial neural network (ANN) was trained to predict 3D dose distributions based on 14 patient-specific anatomical parameters including the distance (r) to the planning target volume (PTV) boundary, organ-at-risk (OAR) boundary distances, and angular position (θ,φ). 23 prostate and 49 stereotactic radiosurgery (SRS) cases with ≥1 nearby OARs were studied. All were planned with volumetric-modulated arc therapy (VMAT) to prescription doses of 81Gy for prostate and 12–30Gy for SRS. Site-specific ANNs were trained using all 23 prostate plans and a randomly-selected subset of 24 plans for the SRS model. The remaining 25 SRS plans were used to validate the model. To quantify predictive accuracy, the dose difference between the clinical plan and the prediction was calculated on a voxel-by-voxel basis, δD(r,θ,φ) = D_clin(r,θ,φ) − D_pred(r,θ,φ). Grouping voxels by boundary distance, the mean <δD_r> = (1/N) Σ_{θ,φ} δD(r,θ,φ) and the inter-quartile range (IQR) quantified the accuracy of this method for deriving DVH estimations. The standard deviation (σ) of δD quantified the 3D dose prediction error on a voxel-by-voxel basis. Results: The ANNs were highly accurate in predictive ability for both prostate and SRS plans. For prostate, <δD_r> ranged from −0.8% to +0.6% (max IQR=3.8%) over r=0–32mm, while 3D dose prediction accuracy averaged σ=5–8% across the same range. For SRS, from r=0–34mm the training set <δD_r> ranged from −3.7% to +1.5% (max IQR=4.4%) while the validation set <δD_r> ranged from −2.2% to +5.8% (max IQR=5.3%). 3D dose prediction accuracy averaged σ=2.5% for the training set and σ=4.0% over the same interval. Conclusion: The study demonstrates this technique's ability to predict achievable 3D dose distributions for VMAT SRS and prostate. Future investigations will attempt to

  10. Novel immunohistochemistry-based signatures to predict metastatic site of triple-negative breast cancers.

    Science.gov (United States)

    Klimov, Sergey; Rida, Padmashree Cg; Aleskandarany, Mohammed A; Green, Andrew R; Ellis, Ian O; Janssen, Emiel Am; Rakha, Emad A; Aneja, Ritu

    2017-09-05

    Although distant metastasis (DM) in breast cancer (BC) is the most lethal form of recurrence and the most common underlying cause of cancer-related deaths, the outcome following the development of DM is related to the site of metastasis. Triple negative BC (TNBC) is an aggressive form of BC characterised by early recurrences and high mortality. Although multiple variables can be used to predict the risk of metastasis, few markers can predict the specific site of metastasis. This study aimed at identifying a biomarker signature to predict particular sites of DM in TNBC. A clinically annotated series of 322 TNBC were immunohistochemically stained with 133 biomarkers relevant to BC, to develop multibiomarker models for predicting metastasis to the bone, liver, lung and brain. Patients who experienced metastasis to each site were compared with those who did not, by gradually filtering the biomarker set via a two-tailed t-test and Cox univariate analyses. Biomarker combinations were finally ranked based on statistical significance, and evaluated in multivariable analyses. Our final models were able to stratify TNBC patients into high-risk groups that showed over 5, 6, 7 and 8 times higher risk of developing metastasis to the bone, liver, lung and brain, respectively, than low-risk subgroups. These models for predicting site-specific metastasis retained significance following adjustment for tumour size, patient age and chemotherapy status. Our novel IHC-based biomarker signatures, when assessed in primary TNBC tumours, enable prediction of specific sites of metastasis, and potentially unravel biomarkers previously unknown in site tropism.

  11. Method of predicting Splice Sites based on signal interactions

    Directory of Open Access Journals (Sweden)

    Deogun Jitender S

    2006-04-01

    Full Text Available Abstract Background Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities. Any progress in SS recognition will lead to better understanding of the splicing mechanism. We introduce several new approaches for combining a priori knowledge for improved SS detection. First, we design our new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine the elements found with sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved than other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools for the test set of human genes. SpliceScan outperforms all contemporary ab initio gene structural prediction tools on the set of 5' UTR gene fragments. Conclusion The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand.

  12. Five-year efficacy and safety of tenofovir-based salvage therapy for patients with chronic hepatitis B who previously failed LAM/ADV therapy.

    Science.gov (United States)

    Lim, Lucy; Thompson, Alexander; Patterson, Scott; George, Jacob; Strasser, Simone; Lee, Alice; Sievert, William; Nicoll, Amanda; Desmond, Paul; Roberts, Stuart; Marion, Kaye; Bowden, Scott; Locarnini, Stephen; Angus, Peter

    2017-06-01

    Multidrug-resistant HBV continues to be an important clinical problem. The TDF-109 study demonstrated that TDF±LAM is an effective salvage therapy through 96 weeks for LAM-resistant patients who previously failed ADV add-on or switch therapy. We evaluated the 5-year efficacy and safety outcomes in patients receiving long-term TDF±LAM in the TDF-109 study. A total of 59 patients completed the first phase of the TDF-109 study, and 54/59 were rolled over into a long-term prospective open-label study of TDF±LAM 300 mg daily. Results are reported at the end of year 5 of treatment. At year 5, 75% (45/59) had achieved viral suppression by intent-to-treat analysis. Per-protocol assessment revealed 83% (45/54) were HBV DNA undetectable. Nine patients remained HBV DNA detectable; however, 8/9 had very low HBV DNA levels (<264 IU/mL) and did not meet virological criteria for virological breakthrough (VBT). One patient experienced VBT, but this was in the setting of documented non-compliance. The response was independent of baseline LAM therapy or mutations conferring ADV resistance. Four patients discontinued TDF, one patient was lost to follow-up and one died from hepatocellular carcinoma. Long-term TDF treatment appears to be safe and effective in patients with prior failure of LAM and a suboptimal response to ADV therapy. These findings confirm that TDF has a high genetic barrier to resistance, is active against multidrug-resistant HBV, and should be the preferred oral anti-HBV agent in CHB patients who fail treatment with LAM and ADV. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Evaluating the Predictive Power of Multivariate Tensor-based Morphometry in Alzheimers Disease Progression via Convex Fused Sparse Group Lasso.

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-21

    Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity and temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  14. Evaluating the predictive power of multivariate tensor-based morphometry in Alzheimer's disease progression via convex fused sparse group Lasso

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-01

    Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications for decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity and temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve the predictive performance for ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  15. Prediction of spectral acceleration response ordinates based on PGA attenuation

    Science.gov (United States)

    Graizer, V.; Kalkan, E.

    2009-01-01

    Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., normalized spectrum) is generated as a continuous function of few parameters. The proposed model eliminates the classical exhausted matrix of estimator coefficients, and provides significant ease in its implementation. It is structured on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events including 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance) and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting an approximation function to spectral shape of each record using robust nonlinear optimization. Proposed spectral shape model is independent of the PGA attenuation, allowing utilization of various PGA attenuation relations to estimate the response spectrum of earthquake recordings.

  16. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    Science.gov (United States)

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex biological rules which are mostly unknown, and is therefore in need of an intelligent algorithm that avoids resource-intensive, lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose a predictive model to estimate the level of recombinant protein overexpression, for the first time in the literature, using a machine learning approach based on the sequence, expression vector, and expression host. The expression host was confined to Escherichia coli, which is the most popular bacterial host for overexpressing recombinant proteins. To provide a handle on the problem, the overexpression level was categorized as low, medium and high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. Then, a representative subset of the generated features was determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.
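
    An illustrative sketch only (not the published RPOLP model): a random forest classifying overexpression level as low/medium/high from a handful of sequence- and vector-derived features, with class weighting as one common way to handle an imbalanced dataset. The feature names, labels and data below are hypothetical placeholders for the feature set described above.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = np.column_stack([
          rng.integers(300, 3000, 300),   # gene length (bp)
          rng.uniform(0.3, 0.7, 300),     # GC content
          rng.uniform(0.2, 0.9, 300),     # codon adaptation index
          rng.integers(0, 2, 300),        # vector has strong promoter (0/1)
      ])
      # toy labels driven mostly by codon adaptation, just to have a learnable signal
      y = np.where(X[:, 2] > 0.7, "high",
                   np.where(X[:, 2] > 0.45, "medium", "low"))

      clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                   random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())    # rough accuracy estimate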

  17. Mechanism-Based Classification of PAH Mixtures to Predict Carcinogenic Potential.

    Science.gov (United States)

    Tilton, Susan C; Siddens, Lisbeth K; Krueger, Sharon K; Larkin, Andrew J; Löhr, Christiane V; Williams, David E; Baird, William M; Waters, Katrina M

    2015-07-01

    We have previously shown that relative potency factors and DNA adduct measurements are inadequate for predicting carcinogenicity of certain polycyclic aromatic hydrocarbons (PAHs) and PAH mixtures, particularly those that function through alternate pathways or exhibit greater promotional activity compared to benzo[a]pyrene (BaP). Therefore, we developed a pathway-based approach for classification of tumor outcome after dermal exposure to PAH/mixtures. FVB/N mice were exposed to dibenzo[def,p]chrysene (DBC), BaP, or environmental PAH mixtures (Mix 1-3) following a 2-stage initiation/promotion skin tumor protocol. Resulting tumor incidence could be categorized by carcinogenic potency as DBC > BaP = Mix2 = Mix3 > Mix1 = Control, based on statistical significance. Gene expression profiles measured in skin of mice collected 12 h post-initiation were compared with tumor outcome for identification of short-term bioactivity profiles. A Bayesian integration model was utilized to identify biological pathways predictive of PAH carcinogenic potential during initiation. Integration of probability matrices from four enriched pathways (P PAH mixtures. These data further provide a 'source-to-outcome' model that could be used to predict PAH interactions during tumorigenesis and provide an example of how mode-of-action-based risk assessment could be employed for environmental PAH mixtures. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. An IL28B genotype-based clinical prediction model for treatment of chronic hepatitis C.

    Directory of Open Access Journals (Sweden)

    Thomas R O'Brien

    Full Text Available Genetic variation in IL28B and other factors are associated with sustained virological response (SVR) after pegylated-interferon/ribavirin treatment for chronic hepatitis C (CHC). Using data from the HALT-C Trial, we developed a model to predict a patient's probability of SVR based on IL28B genotype and clinical variables. HALT-C enrolled patients with advanced CHC who had failed previous interferon-based treatment. Subjects were re-treated with pegylated-interferon/ribavirin during trial lead-in. We used step-wise logistic regression to calculate adjusted odds ratios (aOR) and create the predictive model. Leave-one-out cross-validation was used to predict a priori probabilities of SVR and determine the area under the receiver operator characteristics curve (AUC). Among 646 HCV genotype 1-infected European American patients, 14.2% achieved SVR. IL28B rs12979860-CC genotype was the strongest predictor of SVR (aOR, 7.56; p10% (43.3% of subjects) had an SVR rate of 27.9% and accounted for 84.8% of subjects actually achieving SVR. To verify that consideration of both IL28B genotype and clinical variables is required for treatment decisions, we calculated AUC values from published data for the IDEAL Study. A clinical prediction model based on IL28B genotype and clinical variables can yield useful individualized predictions of the probability of treatment success that could increase SVR rates and decrease the frequency of futile treatment among patients with CHC.

  19. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Antioxidant proteins perform significant functions in maintaining the oxidation/antioxidation balance and have potential therapeutic uses for some diseases. Accurate identification of antioxidant proteins could contribute to revealing the physiological processes of oxidation/antioxidation balance and to developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthew's Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we developed a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc.
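
    A hedged sketch of the ensemble idea (averaging the class probabilities of several base classifiers). Scikit-learn's SVC, KNeighborsClassifier and DecisionTreeClassifier stand in for the WEKA SMO, NNA and J48 learners named above, and the data are synthetic placeholders for the hybrid SSI/PSSM/RSA/CTD feature vectors.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 50))                    # hybrid feature vectors
      y = (X[:, :5].sum(1) > 0).astype(int)             # 1 = antioxidant protein (toy)

      ensemble = VotingClassifier(
          estimators=[("rf", RandomForestClassifier(random_state=0)),
                      ("smo", SVC(probability=True, random_state=0)),
                      ("nna", KNeighborsClassifier(n_neighbors=1)),
                      ("j48", DecisionTreeClassifier(random_state=0))],
          voting="soft")                                # average class probabilities
      print(cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean())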

  20. Population-based Neisseria gonorrhoeae, Chlamydia trachomatis and Trichomonas vaginalis Prevalence Using Discarded, Deidentified Urine Specimens Previously Collected for Drug Testing (Open Access Publisher’s Version)

    Science.gov (United States)

    2017-10-24


  1. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    Science.gov (United States)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.

  2. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI. In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences, and structure information (protein and RNA secondary structures. This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  3. Research on cardiovascular disease prediction based on distance metric learning

    Science.gov (United States)

    Ni, Zhuang; Liu, Kui; Kang, Guixia

    2018-04-01

    Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited their strengths in classification problems. The k-nearest neighbour (KNN) classifier is an efficient method which treats each feature equally. The large margin nearest neighbour classification (LMNN) improves the accuracy of KNN by learning a global distance metric, but it does not consider the locality of the data distribution. In this paper, we propose a new distance metric algorithm adopting a cosine metric and LMNN, named COS-SUBLMNN, which pays more attention to the local features of the data in order to overcome the shortcomings of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than KNN and LMNN, which demonstrates the effectiveness of the risk prediction model of CVDs based on COS-SUBLMNN.
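
    A simplified sketch, not the COS-SUBLMNN algorithm itself: a k-nearest neighbour classifier run under the cosine distance, which is the metric the proposed method builds on, compared with plain Euclidean KNN. Patient vectors and labels are synthetic placeholders for the CVD dataset.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 20))                 # patient feature vectors
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = cardiovascular disease (toy)

      knn_euclid = KNeighborsClassifier(n_neighbors=5)
      knn_cosine = KNeighborsClassifier(n_neighbors=5, metric="cosine")
      for name, clf in [("euclidean", knn_euclid), ("cosine", knn_cosine)]:
          print(name, cross_val_score(clf, X, y, cv=5).mean())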

  4. Adaptive DIT-Based Fringe Tracking and Prediction at IOTA

    Science.gov (United States)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
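
    A toy sketch of the idea behind DFT-based fringe tracking (not the IOTA C implementation): locate the fringe peak in the Fourier transform of a scanned interferogram and read the OPD off its phase. The wavelength, scan sampling and injected OPD are arbitrary example values, and the OPD is only recovered modulo one wavelength.

      import numpy as np

      wavelength = 1.6e-6                     # m, example near-IR value
      n = 256
      dx = wavelength / 16                    # piezo step: 16 samples per fringe
      scan = np.arange(n) * dx                # optical path scanned by the piezo (m)
      opd_true = 0.42e-6                      # injected optical path difference (m)
      fringe = 1.0 + 0.8 * np.cos(2 * np.pi * (scan - opd_true) / wavelength)
      fringe += 0.05 * np.random.default_rng(0).normal(size=n)   # detector noise

      spectrum = np.fft.rfft(fringe - fringe.mean())
      k = np.argmax(np.abs(spectrum))         # fringe-frequency bin (here k = 16)
      opd_est = -np.angle(spectrum[k]) / (2 * np.pi) * wavelength
      print(opd_est)                          # ~0.42e-6, i.e. the OPD to correct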

  5. Adaptive learning compressive tracking based on Markov location prediction

    Science.gov (United States)

    Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan

    2017-03-01

    Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision which has theoretical and practical application value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, in the presence of object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose Markov-based object location prediction to obtain the initial position of the object. CT is then used to locate the object accurately, and an adaptive updating strategy for the classifier parameters is given based on the confidence map. At the same time, scale features are extracted according to the object location, which makes it possible to deal with object scale variations effectively. Experimental results show that the proposed algorithm has better tracking accuracy and robustness than current advanced algorithms and achieves real-time performance.

  6. Optimization of arterial age prediction models based in pulse wave

    Energy Technology Data Exchange (ETDEWEB)

    Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Aplied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)

    2007-11-15

    We propose the detection of early arterial ageing through a model that predicts arterial age based on the assumption of coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection onto the main factors of the Principal Component Analysis. The model performance was tested using the .632E bootstrap error. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  7. Mining Behavior Based Safety Data to Predict Safety Performance

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe

    2010-06-01

    The Idaho National Laboratory (INL) operates a behavior based safety program called Safety Observations Achieve Results (SOAR). This peer-to-peer observation program encourages employees to perform in-field observations of each other's work practices and habits (i.e., behaviors). The underlying premise of conducting these observations is that more serious accidents are prevented from occurring because lower level “at risk” behaviors are identified and corrected before they can propagate into culturally accepted “unsafe” behaviors that result in injuries or fatalities. Although the approach increases employee involvement in safety, the premise of the program has not been subject to sufficient empirical evaluation. The INL now has a significant amount of SOAR data on these lower level “at risk” behaviors. This paper describes the use of data mining techniques to analyze these data to determine whether they can predict if and when a more serious accident will occur.

  8. Stress corrosion cracking of nickel base alloys characterization and prediction

    International Nuclear Information System (INIS)

    Santarini, G.; Pinard-Legry, G.

    1988-01-01

    For many years, studies have been carried out in several laboratories to characterize the IGSCC (Intergranular Stress Corrosion Cracking) behaviour of nickel base alloys in aqueous environments. For their relative shortness, CERTs (Constant Extension Rate Tests) have been extensively used, especially at the Corrosion Department of the CEA. However, up to recently, the results obtained with this method remained qualitative. This paper presents a first approach to a quantitative interpretation of CERT results. The basic datum used is the crack trace depth distribution determined on a specimen section at the end of a CERT. It is shown that this information can be used for the calculation of initiation and growth parameters which quantitatively characterize IGSCC phenomenon. Moreover, the rationale proposed should lead to the determination of intrinsic cracking parameters, and so, to in-service behaviour prediction

  9. A Prediction-based Smart Meter Data Generator

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Nordbjerg, Finn Ebertsen

    2016-01-01

    With the prevalence of cloud computing and Internet of Things (IoT), smart meters have become one of the main components of smart city strategy. Smart meters generate large amounts of fine-grained data that is used to provide useful information to consumers and utility companies for decision-making. Nowadays, smart meter analytics systems consist of analytical algorithms that process massive amounts of data. These analytics algorithms require ample amounts of realistic data for testing and verification purposes. However, it is usually difficult to obtain adequate amounts of realistic data, mainly due to privacy issues. This paper proposes a smart meter data generator that can generate realistic energy consumption data by making use of a small real-world dataset as seed. The generator generates data using a prediction-based method that depends on historical energy consumption patterns along...
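
    A rough sketch of a prediction-based generator in the spirit of the abstract (not the authors' tool): new readings are drawn around a seed dataset's hourly consumption pattern, with a small autoregressive deviation so consecutive values stay realistic. The seed profile below is a made-up example.

      import numpy as np

      rng = np.random.default_rng(0)
      seed_profile = np.array([0.3, 0.2, 0.2, 0.2, 0.3, 0.5, 0.9, 1.2,   # kWh/hour
                               1.0, 0.8, 0.7, 0.7, 0.8, 0.7, 0.7, 0.8,
                               1.1, 1.5, 1.8, 1.6, 1.3, 0.9, 0.6, 0.4])

      def generate_days(n_days, alpha=0.5, noise=0.1):
          out, prev_dev = [], 0.0
          for _ in range(n_days):
              for hour in range(24):
                  prev_dev = alpha * prev_dev + rng.normal(0, noise)  # AR(1) deviation
                  out.append(max(0.0, seed_profile[hour] + prev_dev))
          return np.array(out)

      synthetic = generate_days(7)            # one synthetic week of hourly readings
      print(synthetic[:24].round(2))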

  10. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore, the challenge for any company is to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical-system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
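
    A toy receding-horizon (MPC-style) sketch of demand management with dynamic pricing, illustrating the general idea rather than the paper's formulation. The linear demand model d[t+1] = a*d[t] + c - b*p[t], the cost terms and all constants are hypothetical.

      import itertools

      a, b, c = 0.8, 2.0, 30.0        # assumed demand dynamics
      capacity = 100.0                # units we can serve per period
      prices = [5, 10, 15, 20, 25]    # admissible price levels
      H = 3                           # prediction horizon

      def simulate(d, price_seq):
          """Cost of a candidate price sequence: track capacity, reward revenue."""
          cost = 0.0
          for p in price_seq:
              d = a * d + c - b * p                   # predicted demand
              cost += (d - capacity) ** 2 - 0.1 * p * min(d, capacity)
          return cost

      demand, chosen = 120.0, []
      for t in range(10):             # receding-horizon loop
          best = min(itertools.product(prices, repeat=H),
                     key=lambda seq: simulate(demand, seq))
          p = best[0]                 # apply only the first move, then re-plan
          demand = a * demand + c - b * p
          chosen.append(p)
      print(chosen)                   # price schedule selected step by step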

  11. Optimization of arterial age prediction models based in pulse wave

    International Nuclear Information System (INIS)

    Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M

    2007-01-01

    We propose the detection of early arterial ageing through a model that predicts arterial age based on the assumption of coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection onto the main factors of the Principal Component Analysis. The model performance was tested using the .632E bootstrap error. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  12. Operational Numerical Weather Prediction systems based on Linux cluster architectures

    International Nuclear Information System (INIS)

    Pasqui, M.; Baldi, M.; Gozzini, B.; Maracchi, G.; Giuliani, G.; Montagnani, S.

    2005-01-01

    The progress in weather forecasting and atmospheric science has always been closely linked to the improvement of computing technology. In order to have more accurate weather forecasts and climate predictions, more powerful computing resources are needed, in addition to more complex and better-performing numerical models. To satisfy such a large computing demand, powerful workstations or massively parallel systems have been used. In the last few years, parallel architectures based on the Linux operating system have been introduced and have become popular, representing true high-performance, low-cost systems. In this work the Linux cluster experience gained at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and tips and performance are analysed

  13. Prediction-based association control scheme in dense femtocell networks

    Science.gov (United States)

    Pham, Ngoc-Thai; Huynh, Thong; Hwang, Won-Joo; You, Ilsun; Choo, Kim-Kwang Raymond

    2017-01-01

    The deployment of a large number of femtocell base stations allows us to extend coverage and efficiently utilize resources in a low-cost manner. However, the small cell size of femtocell networks can result in frequent handovers for the mobile user, and consequently in throughput degradation. Thus, in this paper, we propose predictive association control schemes to improve the system’s effective throughput. Our design focuses on reducing handover frequency without impacting throughput. The proposed schemes determine the handover decisions that contribute most to the network throughput and are suitable for distributed implementation. The simulation results show significant gains over existing methods in terms of handover frequency and network throughput. PMID:28328992

  14. Diagnostic Accuracy of Robot-Guided, Software Based Transperineal MRI/TRUS Fusion Biopsy of the Prostate in a High Risk Population of Previously Biopsy Negative Men

    Directory of Open Access Journals (Sweden)

    Malte Kroenig

    2016-01-01

    Full Text Available Objective. In this study, we compared prostate cancer detection rates between MRI/TRUS fusion targeted and systematic biopsies using a robot-guided, software based transperineal approach. Methods and Patients. 52 patients received an MRI/TRUS fusion targeted biopsy followed by a systematic volume adapted biopsy using the same robot-guided transperineal approach. The primary outcome was the detection rate of clinically significant disease (Gleason grade ≥ 4). Secondary outcomes were detection rate of all cancers, sampling efficiency and utility, and serious adverse event rate. Patients received no antibiotic prophylaxis. Results. From 52 patients, 519 targeted biopsies from 135 lesions and 1561 random biopsies were generated (total n=2080). Overall detection rate of clinically significant PCa was 44.2% (23/52) and 50.0% (26/52) for target and random biopsy, respectively. Sampling efficiency as the median number of cores needed to detect clinically significant prostate cancer was 9 for target (IQR: 6–14.0) and 32 (IQR: 24–32) for random biopsy. The utility as the number of additionally detected clinically significant PCa cases by either strategy was 0% (0/52) for target and 3.9% (2/52) for random biopsy. Conclusions. MRI/TRUS fusion based target biopsy did not show an advantage in the overall detection rate of clinically significant prostate cancer.

  15. Fault trend prediction of device based on support vector regression

    International Nuclear Information System (INIS)

    Song Meicun; Cai Qi

    2011-01-01

    The research status of fault trend prediction and the basic theory of support vector regression (SVR) are introduced. SVR was applied to the fault trend prediction of a roller bearing and compared with other methods (BP neural network, gray model, and gray-AR model). The results show that the BP network tends to overlearn and become trapped in local minima, so that its predictive results are unstable. They also show that the predictive results of SVR are stable, and that SVR is superior to the BP neural network, gray model and gray-AR model in predictive precision. SVR is an effective method of fault trend prediction. (authors)
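
    As a rough illustration of SVR-based trend prediction of the kind compared above, the sketch below trains scikit-learn's SVR on sliding windows of a synthetic degradation-like series and evaluates one-step-ahead forecasts; the data, window length, and hyperparameters are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic "degradation trend" stand-in for a roller-bearing condition indicator.
rng = np.random.default_rng(0)
t = np.arange(200)
series = 0.01 * t + 0.2 * np.sin(0.3 * t) + 0.05 * rng.standard_normal(200)

# Build sliding-window samples: predict x[k] from the previous `window` values.
window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Train on the first 150 windows, then forecast the remaining trend one step at a time.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X[:150], y[:150])

preds = model.predict(X[150:])
rmse = np.sqrt(np.mean((preds - y[150:]) ** 2))
print(f"one-step-ahead RMSE on held-out trend: {rmse:.4f}")
```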

  16. DNA methylation-based measures of biological age: meta-analysis predicting time to death

    Science.gov (United States)

    Chen, Brian H.; Marioni, Riccardo E.; Colicino, Elena; Peters, Marjolein J.; Ward-Caviness, Cavin K.; Tsai, Pei-Chien; Roetker, Nicholas S.; Just, Allan C.; Demerath, Ellen W.; Guan, Weihua; Bressler, Jan; Fornage, Myriam; Studenski, Stephanie; Vandiver, Amy R.; Moore, Ann Zenobia; Tanaka, Toshiko; Kiel, Douglas P.; Liang, Liming; Vokonas, Pantel; Schwartz, Joel; Lunetta, Kathryn L.; Murabito, Joanne M.; Bandinelli, Stefania; Hernandez, Dena G.; Melzer, David; Nalls, Michael; Pilling, Luke C.; Price, Timothy R.; Singleton, Andrew B.; Gieger, Christian; Holle, Rolf; Kretschmer, Anja; Kronenberg, Florian; Kunze, Sonja; Linseisen, Jakob; Meisinger, Christine; Rathmann, Wolfgang; Waldenberger, Melanie; Visscher, Peter M.; Shah, Sonia; Wray, Naomi R.; McRae, Allan F.; Franco, Oscar H.; Hofman, Albert; Uitterlinden, André G.; Absher, Devin; Assimes, Themistocles; Levine, Morgan E.; Lu, Ake T.; Tsao, Philip S.; Hou, Lifang; Manson, JoAnn E.; Carty, Cara L.; LaCroix, Andrea Z.; Reiner, Alexander P.; Spector, Tim D.; Feinberg, Andrew P.; Levy, Daniel; Baccarelli, Andrea; van Meurs, Joyce; Bell, Jordana T.; Peters, Annette; Deary, Ian J.; Pankow, James S.; Ferrucci, Luigi; Horvath, Steve

    2016-01-01

    Estimates of biological age based on DNA methylation patterns, often referred to as “epigenetic age”, “DNAm age”, have been shown to be robust biomarkers of age in humans. We previously demonstrated that independent of chronological age, epigenetic age assessed in blood predicted all-cause mortality in four human cohorts. Here, we expanded our original observation to 13 different cohorts for a total sample size of 13,089 individuals, including three racial/ethnic groups. In addition, we examined whether incorporating information on blood cell composition into the epigenetic age metrics improves their predictive power for mortality. All considered measures of epigenetic age acceleration were predictive of mortality (p≤8.2×10−9), independent of chronological age, even after adjusting for additional risk factors (p<5.4×10−4), and within the racial/ethnic groups that we examined (non-Hispanic whites, Hispanics, African Americans). Epigenetic age estimates that incorporated information on blood cell composition led to the smallest p-values for time to death (p=7.5×10−43). Overall, this study a) strengthens the evidence that epigenetic age predicts all-cause mortality above and beyond chronological age and traditional risk factors, and b) demonstrates that epigenetic age estimates that incorporate information on blood cell counts lead to highly significant associations with all-cause mortality. PMID:27690265

  17. New Diagnosis of AIDS Based on Salmonella enterica subsp. I (enterica) Enteritidis (A) Meningitis in a Previously Immunocompetent Adult in the United States

    Directory of Open Access Journals (Sweden)

    Andrew C. Elton

    2017-01-01

    Full Text Available Salmonella meningitis is a rare manifestation of meningitis typically presenting in neonates and the elderly. This infection typically associates with foodborne outbreaks in developing nations and AIDS-endemic regions. We report a case of a 19-year-old male presenting with altered mental status after a 3-day absence from work at a Wisconsin tourist area. He was febrile, tachycardic, and tachypneic with a GCS of 8. The patient was intubated and a presumptive diagnosis of meningitis was made. Treatment was initiated with ceftriaxone, vancomycin, acyclovir, dexamethasone, and fluid resuscitation. A lumbar puncture showed cloudy CSF with Gram negative rods. He was admitted to the ICU. CSF culture confirmed Salmonella enterica subsp. I (enterica) Enteritidis (A). Based on this finding, a 4th-generation HIV antibody/p24 antigen test was sent. When this returned positive, a CD4 count was obtained and showed 3 cells/mm3, confirming AIDS. The patient ultimately received 38 days of ceftriaxone, was placed on elvitegravir, cobicistat, emtricitabine, and tenofovir alafenamide (Genvoya) for HIV/AIDS, and was discharged neurologically intact after a 44-day admission.

  18. Power load prediction based on GM (1,1)

    Science.gov (United States)

    Wu, Di

    2017-05-01

    Power load prediction is currently a topic of intense interest in China. This paper studies grey prediction in depth and applies it to Chinese electricity consumption over the past 14 years; posterior error testing shows that the resulting grey prediction model adapts well to medium- and long-term power load forecasting.
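
    For readers unfamiliar with GM(1,1), the sketch below is a minimal, textbook-style implementation applied to a made-up annual consumption series; the numbers are illustrative, not the electricity data used in the paper.

```python
import numpy as np

def gm11_forecast(x, n_ahead=3):
    """Classic GM(1,1): fit on the 1-AGO series and forecast n_ahead future values."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                              # 1-AGO (accumulated generating operation)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # mean sequence of consecutive AGO values
    B = np.column_stack((-z1, np.ones(len(z1))))
    Y = x[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]    # developing coefficient a, grey input b

    def x1_hat(k):                                 # fitted AGO value at index k (0-based)
        return (x[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x)
    fitted_agos = np.array([x1_hat(k) for k in range(n + n_ahead)])
    restored = np.diff(fitted_agos, prepend=0.0)   # inverse AGO back to the original scale
    return restored[n:]

# Made-up yearly electricity-consumption-like series (arbitrary units).
history = [4.7, 5.1, 5.6, 6.3, 6.9, 7.8, 8.6, 9.5]
print(gm11_forecast(history, n_ahead=3))
```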

  19. Analysis of energy-based algorithms for RNA secondary structure prediction

    Directory of Open Access Journals (Sweden)

    Hajiaghayi Monir

    2012-02-01

    Full Text Available Abstract Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms

  20. Football league win prediction based on online and league table data

    Science.gov (United States)

    Par, Prateek; Gupt, Ankit Kumar; Singh, Samarth; Khare, Neelu; Bhattachrya, Sweta

    2017-11-01

    As we proceed towards an internet-driven world, the impact of the internet on our day-to-day lives keeps increasing. This impact is felt not only in the virtual world but also leaves a mark in the real world. Social media sites contain huge amounts of information; the task is to collect the relevant data and analyse it to form real-world predictions, and it can do far more than that. In this paper we study the relationship between Twitter data and conventional data analysis to predict the winning team in the NFL (National Football League). The prediction is based on data collected on the ongoing league, which includes the performance of each player and their previous statistics. Alongside the data available online, we combine Twitter data extracted from tweets pertaining to specific teams and games in the NFL season and use it alongside statistical game data to build predictive models for the outcome of a game, i.e., which team will lose or win depending upon the statistical data available. Specifically, tweets within 24 hours of a match are considered, with the main focus on the last hours of tweets, i.e., pre-match and post-match Twitter data. We experiment on the data and, using Twitter data, try to improve the performance of existing predictive models that use only the game stats to predict the future.

  1. Improved feature selection based on genetic algorithms for real time disruption prediction on JET

    International Nuclear Information System (INIS)

    Rattá, G.A.; Vega, J.; Murari, A.

    2012-01-01

    Highlights: ► A new signal selection methodology to improve disruption prediction is reported. ► The approach is based on Genetic Algorithms. ► An advanced predictor has been created with the new set of signals. ► The new system obtains considerably higher prediction rates. - Abstract: The early prediction of disruptions is an important aspect of the research in the field of Tokamak control. A very recent predictor, called “Advanced Predictor Of Disruptions” (APODIS), developed for the “Joint European Torus” (JET), implements the real time recognition of incoming disruptions with the best success rate achieved ever and an outstanding stability for long periods following training. In this article, a new methodology to select the set of the signals’ parameters in order to maximize the performance of the predictor is reported. The approach is based on “Genetic Algorithms” (GAs). With the feature selection derived from GAs, a new version of APODIS has been developed. The results are significantly better than the previous version not only in terms of success rates but also in extending the interval before the disruption in which reliable predictions are achieved. Correct disruption predictions with a success rate in excess of 90% have been achieved 200 ms before the time of the disruption. The predictor response is compared with that of JET's Protection System (JPS) and the APODIS predictor is shown to be far superior. Both systems have been carefully tested with a large number of discharges to understand their relative merits and the most profitable directions of further improvements.

  2. Improved feature selection based on genetic algorithms for real time disruption prediction on JET

    Energy Technology Data Exchange (ETDEWEB)

    Ratta, G.A., E-mail: garatta@gateme.unsj.edu.ar [GATEME, Facultad de Ingenieria, Universidad Nacional de San Juan, Avda. San Martin 1109 (O), 5400 San Juan (Argentina); JET EFDA, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, Avda. Complutense, 40, 28040 Madrid (Spain); JET EFDA, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Murari, A. [Associazione EURATOM-ENEA per la Fusione, Consorzio RFX, 4-35127 Padova (Italy); JET EFDA, Culham Science Centre, OX14 3DB Abingdon (United Kingdom)

    2012-09-15

    Highlights: ► A new signal selection methodology to improve disruption prediction is reported. ► The approach is based on Genetic Algorithms. ► An advanced predictor has been created with the new set of signals. ► The new system obtains considerably higher prediction rates. - Abstract: The early prediction of disruptions is an important aspect of the research in the field of Tokamak control. A very recent predictor, called 'Advanced Predictor Of Disruptions' (APODIS), developed for the 'Joint European Torus' (JET), implements the real time recognition of incoming disruptions with the best success rate achieved ever and an outstanding stability for long periods following training. In this article, a new methodology to select the set of the signals' parameters in order to maximize the performance of the predictor is reported. The approach is based on 'Genetic Algorithms' (GAs). With the feature selection derived from GAs, a new version of APODIS has been developed. The results are significantly better than the previous version not only in terms of success rates but also in extending the interval before the disruption in which reliable predictions are achieved. Correct disruption predictions with a success rate in excess of 90% have been achieved 200 ms before the time of the disruption. The predictor response is compared with that of JET's Protection System (JPS) and the APODIS predictor is shown to be far superior. Both systems have been carefully tested with a large number of discharges to understand their relative merits and the most profitable directions of further improvements.
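
    Both records above describe GA-driven selection of signal features for the APODIS predictor. The snippet below is a much-simplified, generic sketch of GA-based feature-subset selection using a binary-mask population and cross-validated accuracy as fitness; the classifier, synthetic data, and GA settings are placeholders, not the APODIS configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=1)

def fitness(mask):
    """Cross-validated accuracy of a simple classifier restricted to the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Initialise a population of random binary feature masks.
pop_size, n_gen, n_feat = 20, 15, X.shape[1]
pop = rng.integers(0, 2, size=(pop_size, n_feat))

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]          # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)              # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_feat) < 0.05           # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```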

  3. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    Science.gov (United States)

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times the number of PPI predictions than previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on

  4. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. According to the fact that there are lots of settlement-time sequences with a nonhomogeneous index trend, a novel grey forecasting model called NGM (1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM (1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of NGM (1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximate nonhomogeneous index sequence and has excellent application value in settlement prediction.

  5. A critical pressure based panel method for prediction of unsteady loading of marine propellers under cavitation

    International Nuclear Information System (INIS)

    Liu, P.; Bose, N.; Colbourne, B.

    2002-01-01

    A simple numerical procedure is established and implemented into a time domain panel method to predict hydrodynamic performance of marine propellers with sheet cavitation. This paper describes the numerical formulations and procedures to construct this integration. Predicted hydrodynamic loads were compared with both a previous numerical model and experimental measurements for a propeller in steady flow. The current method gives a substantial improvement in thrust and torque coefficient prediction over a previous numerical method at low cavitation numbers of less than 2.0, where severe cavitation occurs. Predicted pressure coefficient distributions are also presented. (author)

  6. Prediction of strain energy-based liquefaction resistance of sand-silt mixtures: An evolutionary approach

    Science.gov (United States)

    Baziar, Mohammad H.; Jafarian, Yaser; Shahnazari, Habib; Movahed, Vahid; Amin Tutunchian, Mohammad

    2011-11-01

    Liquefaction is a catastrophic type of ground failure, which usually occurs in loose saturated soil deposits under earthquake excitations. A new predictive model is presented in this study to estimate the amount of strain energy density, which is required for the liquefaction triggering of sand-silt mixtures. A wide-ranging database containing the results of cyclic tests on sand-silt mixtures was first gathered from previously published studies. Input variables of the model were chosen from the available understandings evolved from the previous studies on the strain energy-based liquefaction potential assessment. In order to avoid overtraining, two sets of validation data were employed and a particular monitoring was made on the behavior of the evolved models. Results of a comprehensive parametric study on the proposed model are in accord with the previously published experimental observations. Accordingly, the amount of strain energy required for liquefaction onset increases with increase in initial effective overburden pressure, relative density, and mean grain size. The effect of nonplastic fines on strain energy-based liquefaction resistance shows a more complicated behavior. Accordingly, liquefaction resistance increases with increase in fines up to about 10-15% and then starts to decline for a higher increase in fines content. Further verifications of the model were carried out using the valuable results of some downhole array data as well as centrifuge model tests. These verifications confirm that the proposed model, which was derived from laboratory data, can be successfully utilized under field conditions.

  7. A Duration Prediction Using a Material-Based Progress Management Methodology for Construction Operation Plans

    Directory of Open Access Journals (Sweden)

    Yongho Ko

    2017-04-01

    Full Text Available Precise and accurate prediction models for duration and cost enable contractors to improve their decision making for effective resource management in terms of sustainability in construction. Previous studies have been limited to cost-based estimations, but this study focuses on a material-based progress management method. Cost-based estimations typically used in construction, such as the earned value method, rely on comparing the planned budget with the actual cost. However, accurately planning budgets requires analysis of many factors, such as the financial status of the sectors involved. Furthermore, there is a higher possibility of changes in the budget than in the total amount of material used during construction, which is deduced from the quantity take-off from drawings and specifications. Accordingly, this study proposes a material-based progress management methodology, which was developed using different predictive analysis models (regression, neural network, and auto-regressive moving average) as well as datasets on material and labor, which can be extracted from daily work reports from contractors. A case study on actual datasets was conducted, and the results show that the proposed methodology can be efficiently used for progress management in construction.

  8. Climate-based models for pulsed resources improve predictability of consumer population dynamics: outbreaks of house mice in forest ecosystems.

    Directory of Open Access Journals (Sweden)

    E Penelope Holland

    Full Text Available Accurate predictions of the timing and magnitude of consumer responses to episodic seeding events (masts) are important for understanding ecosystem dynamics and for managing outbreaks of invasive species generated by masts. While models relating consumer populations to resource fluctuations have been developed successfully for a range of natural and modified ecosystems, a critical gap that needs addressing is better prediction of resource pulses. A recent model used change in summer temperature from one year to the next (ΔT) for predicting masts for forest and grassland plants in New Zealand. We extend this climate-based method in the framework of a model for consumer-resource dynamics to predict invasive house mouse (Mus musculus) outbreaks in forest ecosystems. Compared with previous mast models based on absolute temperature, the ΔT method for predicting masts resulted in an improved model for mouse population dynamics. There was also a threshold effect of ΔT on the likelihood of an outbreak occurring. The improved climate-based method for predicting resource pulses and consumer responses provides a straightforward rule of thumb for determining, with one year's advance warning, whether management intervention might be required in invaded ecosystems. The approach could be applied to consumer-resource systems worldwide where climatic variables are used to model the size and duration of resource pulses, and may have particular relevance for ecosystems where global change scenarios predict increased variability in climatic events.

  9. Prediction of beta-turns at over 80% accuracy based on an ensemble of predicted secondary structures and multiple alignments.

    Science.gov (United States)

    Zheng, Ce; Kurgan, Lukasz

    2008-10-10

    beta-turn is a secondary protein structure type that plays significant role in protein folding, stability, and molecular recognition. To date, several methods for prediction of beta-turns from protein sequences were developed, but they are characterized by relatively poor prediction quality. The novelty of the proposed sequence-based beta-turn predictor stems from the usage of a window based information extracted from four predicted three-state secondary structures, which together with a selected set of position specific scoring matrix (PSSM) values serve as an input to the support vector machine (SVM) predictor. We show that (1) all four predicted secondary structures are useful; (2) the most useful information extracted from the predicted secondary structure includes the structure of the predicted residue, secondary structure content in a window around the predicted residue, and features that indicate whether the predicted residue is inside a secondary structure segment; (3) the PSSM values of Asn, Asp, Gly, Ile, Leu, Met, Pro, and Val were among the top ranked features, which corroborates with recent studies. The Asn, Asp, Gly, and Pro indicate potential beta-turns, while the remaining four amino acids are useful to predict non-beta-turns. Empirical evaluation using three nonredundant datasets shows favorable Q total, Q predicted and MCC values when compared with over a dozen of modern competing methods. Our method is the first to break the 80% Q total barrier and achieves Q total = 80.9%, MCC = 0.47, and Q predicted higher by over 6% when compared with the second best method. We use feature selection to reduce the dimensionality of the feature vector used as the input for the proposed prediction method. The applied feature set is smaller by 86, 62 and 37% when compared with the second and two third-best (with respect to MCC) competing methods, respectively. Experiments show that the proposed method constitutes an improvement over the competing prediction

  10. Star-sensor-based predictive Kalman filter for satellite attitude estimation

    Institute of Scientific and Technical Information of China (English)

    林玉荣; 邓正隆

    2002-01-01

    A real-time attitude estimation algorithm, namely the predictive Kalman filter, is presented. This algorithm can accurately estimate the three-axis attitude of a satellite using only star sensor measurements. The implementation of the filter includes two steps: first, predicting the torque modeling error, and then estimating the attitude. Simulation results indicate that the predictive Kalman filter provides robust performance in the presence of both significant errors in the assumed model and in the initial conditions.
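
    A bare-bones linear Kalman predict/update cycle is sketched below to illustrate the general filtering structure the abstract refers to; the one-dimensional constant-velocity model and noise levels are arbitrary stand-ins, not the satellite attitude model of the paper.

```python
import numpy as np

# Constant-velocity toy model: state = [angle, rate], measurement = angle only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measurement model
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[1e-2]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # state estimate
P = np.eye(2)                           # estimate covariance

rng = np.random.default_rng(0)
true_angle, true_rate = 0.0, 0.05

for k in range(50):
    # Simulate the "truth" and a noisy star-sensor-like angle measurement.
    true_angle += true_rate * dt
    z = np.array([[true_angle + 0.1 * rng.standard_normal()]])

    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated angle/rate:", x.ravel())
```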

  11. The predictive effect of fear-avoidance beliefs on low back pain among newly qualified health care workers with and without previous low back pain: a prospective cohort study

    DEFF Research Database (Denmark)

    Jensen, Jette Nygaard; Albertsen, Karen; Borg, Vilhelm

    2009-01-01

    Health care workers have a high prevalence of low back pain (LBP). Although physical exposures in the working environment are linked to an increased risk of LBP, it has been suggested that individual coping strategies, for example fear-avoidance beliefs, could also be important in the development and maintenance of LBP. Accordingly, the main objective of this study was to examine (1) the association between physical work load and LBP, (2) the predictive effect of fear-avoidance beliefs on the development of LBP, and (3) the moderating effect of fear-avoidance beliefs on the association between physical...

  12. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
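
    A minimal scikit-learn analogue of the "Bagging decision tree" regressor mentioned above is sketched here on synthetic descriptors; the fold-error evaluation mirrors the paper's metric in spirit, but the data, features, and settings are placeholders.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 200 compounds, 15 "molecular descriptors", log10(Vss)-like target.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 15))
log_vss = X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.standard_normal(200)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_vss, test_size=0.25, random_state=42)

# BaggingRegressor's default base learner is a decision tree, i.e. a "Bagging decision tree".
model = BaggingRegressor(n_estimators=50, random_state=42)
model.fit(X_tr, y_tr)

# Geometric mean fold error, assuming the target is on a log10 scale (an assumption here).
pred = model.predict(X_te)
gmfe = 10.0 ** np.mean(np.abs(pred - y_te))
print(f"geometric mean fold error: {gmfe:.2f}")
```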

  13. Paroxysmal atrial fibrillation prediction based on HRV analysis and non-dominated sorting genetic algorithm III.

    Science.gov (United States)

    Boon, K H; Khalil-Hani, M; Malarvili, M B

    2018-01-01

    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain, and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, another significant result is that the sensitivity rate, which is considered more important than other performance metrics in this paper, can be improved with the trade-off of lower specificity. Copyright © 2017 Elsevier B.V. All rights reserved.
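
    As a toy illustration of the baseline pipeline (time-domain HRV feature extraction followed by an SVM), the snippet below computes a few standard HRV features from RR intervals and feeds them to scikit-learn's SVC; the synthetic RR data, the small feature set, and the omission of the NSGA-III optimisation stage are all simplifications.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def hrv_features(rr_ms):
    """A few standard time-domain HRV features from an RR-interval segment (ms)."""
    diffs = np.diff(rr_ms)
    return [
        rr_ms.mean(),                       # mean RR
        rr_ms.std(ddof=1),                  # SDNN
        np.sqrt(np.mean(diffs ** 2)),       # RMSSD
        np.mean(np.abs(diffs) > 50) * 100,  # pNN50 (%)
    ]

def make_segment(irregular):
    """Synthetic 5-minute-like RR segment; 'pre-PAF' segments are made more irregular."""
    base = 800 + 30 * rng.standard_normal(300)
    if irregular:
        base += 60 * rng.standard_normal(300)
    return base

X = np.array([hrv_features(make_segment(i % 2 == 1)) for i in range(120)])
y = np.array([i % 2 for i in range(120)])   # 0 = normal, 1 = pre-PAF (synthetic labels)

scores = cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```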

  14. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solving inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities are also making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains topical. ANNIT contributes to this stream since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex, and the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally and generally may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, resp., are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist, (ii) a numerical approximation of G in the subspaces {pD, pM} is found, (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good

  15. Deterministic Predictions of Vessel Responses Based on Past Measurements

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2017-01-01

    The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...
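
    A compact stand-in for autocorrelation-based short-horizon prediction is sketched below: it fits an autoregressive model to past samples via the Yule-Walker equations and extrapolates a few steps ahead. The synthetic signal and model order are assumptions; the paper's actual procedure conditions on the measured autocorrelation function of the vessel response.

```python
import numpy as np

def yule_walker_ar(x, order):
    """Estimate AR coefficients from the sample autocorrelation (Yule-Walker equations)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.array([np.dot(x[: len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])  # Toeplitz
    return np.linalg.solve(R, r[1:])

def predict_ahead(x, coeffs, steps):
    """Deterministically extrapolate the (mean-removed) signal `steps` samples ahead."""
    mean = np.mean(x)
    hist = list(np.asarray(x, dtype=float) - mean)
    out = []
    for _ in range(steps):
        nxt = np.dot(coeffs, hist[-1: -len(coeffs) - 1: -1])  # most recent sample first
        hist.append(nxt)
        out.append(nxt + mean)
    return np.array(out)

# Synthetic narrow-banded "wave-induced response" record with a little measurement noise.
rng = np.random.default_rng(0)
t = np.arange(0, 600, 0.5)
signal = (np.sin(2 * np.pi * t / 9.0) + 0.3 * np.sin(2 * np.pi * t / 7.0)
          + 0.05 * rng.standard_normal(len(t)))

coeffs = yule_walker_ar(signal[:1000], order=20)
print(predict_ahead(signal[:1000], coeffs, steps=20)[:5])
```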

  16. Connecting clinical and actuarial prediction with rule-based methods

    NARCIS (Netherlands)

    Fokkema, M.; Smits, N.; Kelderman, H.; Penninx, B.W.J.H.

    2015-01-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction

  17. Online credit card fraud prediction based on hierarchical temporal ...

    African Journals Online (AJOL)

    This understanding gave birth to the Hierarchical Temporal Memory (HTM), which holds a lot of promise in the area of time-series prediction and anomaly detection problems. This paper demonstrates the behaviour of an HTM model with respect to its learning and prediction of online credit card fraud. The model was ...

  18. Viscosity Prediction of Hydrocarbon Mixtures Based on the Friction Theory

    DEFF Research Database (Denmark)

    Zeberg-Mikkelsen, Claus Kjær; Cisneros, Sergio; Stenby, Erling Halfdan

    2001-01-01

    The application and capability of the friction theory (f-theory) for viscosity predictions of hydrocarbon fluids is further illustrated by predicting the viscosity of binary and ternary liquid mixtures composed of n-alkanes ranging from n-pentane to n-decane for wide ranges of temperature and from...

  19. Optimization of condition-based asset management using a predictive health model

    NARCIS (Netherlands)

    Bajracharya, G.; Koltunowicz, T.; Negenborn, R.R.; Papp, Z.; Djairam, D.; De Schutter, B.; Smit, J.J.

    2009-01-01

    In this paper, a model predictive framework is used to optimize the operation and maintenance actions of power system equipment based on the predicted health state of this equipment. In particular, this framework is used to predict the health state of transformers based on their usage. The health

  20. Comparison of classification methods for voxel-based prediction of acute ischemic stroke outcome following intra-arterial intervention

    Science.gov (United States)

    Winder, Anthony J.; Siemonsen, Susanne; Flottmann, Fabian; Fiehler, Jens; Forkert, Nils D.

    2017-03-01

    Voxel-based tissue outcome prediction in acute ischemic stroke patients is highly relevant for both clinical routine and research. Previous research has shown that features extracted from baseline multi-parametric MRI datasets have a high predictive value and can be used for the training of classifiers, which can generate tissue outcome predictions for both intravenous and conservative treatments. However, with the recent advent and popularization of intra-arterial thrombectomy treatment, novel research specifically addressing the utility of predictive classifiers for thrombectomy intervention is necessary for a holistic understanding of current stroke treatment options. The aim of this work was to develop three clinically viable tissue outcome prediction models using approximate nearest-neighbor, generalized linear model, and random decision forest approaches and to evaluate the accuracy of predicting tissue outcome after intra-arterial treatment. Therefore, the three machine learning models were trained, evaluated, and compared using datasets of 42 acute ischemic stroke patients treated with intra-arterial thrombectomy. Classifier training utilized eight voxel-based features extracted from baseline MRI datasets and five global features. Evaluation of classifier-based predictions was performed via comparison to the known tissue outcome, which was determined in follow-up imaging, using the Dice coefficient and leave-one-patient-out cross-validation. The random decision forest prediction model led to the best tissue outcome predictions with a mean Dice coefficient of 0.37. The approximate nearest-neighbor and generalized linear model performed equally suboptimally with average Dice coefficients of 0.28 and 0.27 respectively, suggesting that both non-linearity and machine learning are desirable properties of a classifier well-suited to the intra-arterial tissue outcome prediction problem.
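
    To make the evaluation metric concrete, the following sketch trains a random-forest classifier on synthetic per-voxel features and scores the predicted lesion mask against a "follow-up" mask with the Dice coefficient; all data shapes and features are invented for illustration, and leave-one-patient-out validation is approximated by a simple split.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dice(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    denom = pred_mask.sum() + true_mask.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

# Synthetic per-voxel features (e.g. baseline MRI parameters) and follow-up outcome labels.
rng = np.random.default_rng(7)
n_voxels, n_features = 5000, 8
X = rng.standard_normal((n_voxels, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n_voxels)) > 1.0  # "infarcted" voxels

# Leave-one-patient-out is approximated here by a plain train/test split over voxels.
split = 4000
clf = RandomForestClassifier(n_estimators=100, random_state=7)
clf.fit(X[:split], y[:split])
pred = clf.predict(X[split:])

print(f"Dice coefficient: {dice(pred, y[split:]):.3f}")
```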

  1. Ensemble-based Regional Climate Prediction: Political Impacts

    Science.gov (United States)

    Miguel, E.; Dykema, J.; Satyanath, S.; Anderson, J. G.

    2008-12-01

    Accurate forecasts of regional climate, including temperature and precipitation, have significant implications for human activities, not just economically but socially. Sub Saharan Africa is a region that has displayed an exceptional propensity for devastating civil wars. Recent research in political economy has revealed a strong statistical relationship between year to year fluctuations in precipitation and civil conflict in this region in the 1980s and 1990s. Investigating how climate change may modify the regional risk of civil conflict in the future requires a probabilistic regional forecast that explicitly accounts for the community's uncertainty in the evolution of rainfall under anthropogenic forcing. We approach the regional climate prediction aspect of this question through the application of a recently demonstrated method called generalized scalar prediction (Leroy et al. 2009), which predicts arbitrary scalar quantities of the climate system. This prediction method can predict change in any variable or linear combination of variables of the climate system averaged over a wide range of spatial scales, from regional to hemispheric to global. Generalized scalar prediction utilizes an ensemble of model predictions to represent the community's uncertainty range in climate modeling, in combination with a timeseries of any type of observational data that exhibits sensitivity to the scalar of interest. It is not necessary to prioritize models in deriving the final prediction. We present the results of applying generalized scalar prediction to regional forecasts of temperature and precipitation in Sub Saharan Africa. We utilize the climate predictions along with the established statistical relationship between year-to-year rainfall variability and civil conflict in Sub Saharan Africa to investigate the potential impact of climate change on civil conflict within that region.

  2. AlzhCPI: A knowledge base for predicting chemical-protein interactions towards Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Jiansong Fang

    Full Text Available Alzheimer's disease (AD) is a complicated progressive neurodegeneration disorder. To confront AD, scientists are searching for multi-target-directed ligands (MTDLs) to delay disease progression. The in silico prediction of chemical-protein interactions (CPI) can accelerate target identification and drug discovery. Previously, we developed 100 binary classifiers to predict the CPI for 25 key targets against AD using the multi-target quantitative structure-activity relationship (mt-QSAR) method. In this investigation, we aimed to apply the mt-QSAR method to enlarge the model library to predict CPI towards AD. Another 104 binary classifiers were further constructed to predict the CPI for 26 preclinical AD targets based on the naive Bayesian (NB) and recursive partitioning (RP) algorithms. The internal 5-fold cross-validation and external test set validation were applied to evaluate the performance of the training sets and test set, respectively. The area under the receiver operating characteristic curve (ROC) for the test sets ranged from 0.629 to 1.0, with an average of 0.903. In addition, we developed a web server named AlzhCPI to integrate the comprehensive information of approximately 204 binary classifiers, which has potential applications in network pharmacology and drug repositioning. AlzhCPI is available online at http://rcidm.org/AlzhCPI/index.html. To illustrate the applicability of AlzhCPI, the developed system was employed for the systems pharmacology-based investigation of shichangpu against AD to enhance the understanding of the mechanisms of action of shichangpu from a holistic perspective.

  3. DFT-based prediction of reactivity of short-chain alcohol dehydrogenase

    Science.gov (United States)

    Stawoska, I.; Dudzik, A.; Wasylewski, M.; Jemioła-Rzemińska, M.; Skoczowski, A.; Strzałka, K.; Szaleniec, M.

    2017-06-01

    The reaction mechanism of ketone reduction by a short chain dehydrogenase/reductase, (S)-1-phenylethanol dehydrogenase from Aromatoleum aromaticum, was studied with DFT methods using a cluster model approach. The characteristics of the hydride transfer process were investigated based on the reaction of acetophenone and eight of its structural analogues. The results confirmed the previously suggested concomitant transfer of the hydride from NADH to the carbonyl C atom of the substrate with proton transfer from Tyr to the carbonyl O atom. However, an additional coupled motion of the next proton in the proton-relay system, between the O2' ribose hydroxyl and Tyr154, was observed. The protonation of Lys158 seems not to affect the pKa of Tyr154, as the stable tyrosyl anion was observed only for a neutral Lys158 in the high pH model. The calculated reaction energies and reaction barriers were calibrated by calorimetric and kinetic methods. This allowed an excellent prediction of the reaction enthalpies (R2 = 0.93) and a good prediction of the reaction kinetics (R2 = 0.89). The observed relations were validated in the prediction of log Keq obtained for real whole-cell reactor systems that modelled the industrial synthesis of S-alcohols.

  4. Predicting clinical symptoms of attention deficit hyperactivity disorder based on temporal patterns between and within intrinsic connectivity networks.

    Science.gov (United States)

    Wang, Xun-Heng; Jiao, Yun; Li, Lihua

    2017-10-24

    Attention deficit hyperactivity disorder (ADHD) is a common brain disorder with high prevalence in school-age children. Previously developed machine learning-based methods have discriminated patients with ADHD from normal controls by providing label information of the disease for individuals. Inattention and impulsivity are the two most significant clinical symptoms of ADHD. However, predicting clinical symptoms (i.e., inattention and impulsivity) is a challenging task based on neuroimaging data. The goal of this study is twofold: to build predictive models for clinical symptoms of ADHD based on resting-state fMRI and to mine brain networks for predictive patterns of inattention and impulsivity. To achieve this goal, a cohort of 74 boys with ADHD and a cohort of 69 age-matched normal controls were recruited from the ADHD-200 Consortium. Both structural and resting-state fMRI images were obtained for each participant. Temporal patterns between and within intrinsic connectivity networks (ICNs) were applied as raw features in the predictive models. Specifically, sample entropy was taken as an intra-ICN feature, and phase synchronization (PS) was used as an inter-ICN feature. The predictive models were based on the least absolute shrinkage and selection operator (LASSO) algorithm. The performance of the predictive model for inattention is r=0.79 (p<10−8), and the performance of the predictive model for impulsivity is r=0.48 (p<10−8). The ICN-related predictive patterns may provide valuable information for investigating the brain network mechanisms of ADHD. In summary, the predictive models for clinical symptoms could be beneficial for personalizing ADHD medications. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
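
    A minimal analogue of the LASSO-based symptom prediction described above is sketched below, using synthetic intra-/inter-network features and Pearson's r between cross-validated predictions and scores as the performance measure; everything about the data is fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_predict

# Synthetic features standing in for sample-entropy (intra-ICN) and phase-synchronization
# (inter-ICN) values; synthetic "inattention scores" depend on a few of them.
rng = np.random.default_rng(11)
n_subjects, n_features = 143, 60
X = rng.standard_normal((n_subjects, n_features))
inattention = X[:, :5] @ np.array([0.8, -0.6, 0.5, 0.4, -0.3]) + 0.5 * rng.standard_normal(n_subjects)

# Cross-validated predictions from a sparse linear model, then correlate with the "scores".
pred = cross_val_predict(Lasso(alpha=0.1), X, inattention, cv=10)
r = np.corrcoef(pred, inattention)[0, 1]
print(f"predicted-vs-observed correlation r = {r:.2f}")
```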

  5. Preoperative screening: value of previous tests.

    Science.gov (United States)

    Macpherson, D S; Snow, R; Lofgren, R P

    1990-12-15

    To determine the frequency of tests done in the year before elective surgery that might substitute for preoperative screening tests and to determine the frequency of test results that change from a normal value to a value likely to alter perioperative management. Retrospective cohort analysis of computerized laboratory data (complete blood count, sodium, potassium, and creatinine levels, prothrombin time, and partial thromboplastin time). Urban tertiary care Veterans Affairs Hospital. Consecutive sample of 1109 patients who had elective surgery in 1988. At admission, 7549 preoperative tests were done, 47% of which duplicated tests performed in the previous year. Of 3096 previous results that were normal as defined by hospital reference range and done closest to the time of but before admission (median interval, 2 months), 13 (0.4%; 95% CI, 0.2% to 0.7%), repeat values were outside a range considered acceptable for surgery. Most of the abnormalities were predictable from the patient's history, and most were not noted in the medical record. Of 461 previous tests that were abnormal, 78 (17%; CI, 13% to 20%) repeat values at admission were outside a range considered acceptable for surgery (P less than 0.001, frequency of clinically important abnormalities of patients with normal previous results with those with abnormal previous results). Physicians evaluating patients preoperatively could safely substitute the previous test results analyzed in this study for preoperative screening tests if the previous tests are normal and no obvious indication for retesting is present.

  6. Data Prediction for Public Events in Professional Domains Based on Improved RNN- LSTM

    Science.gov (United States)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data prediction services for emergency or non-periodic events usually cannot generate satisfying results or fulfill the correct prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studies the above problems and proposes an improved model, an LSTM (Long Short-term Memory) dynamic prediction and a priori information sequence generation model, built by combining RNN-LSTM with a priori information on public events. In prediction tasks, the model is capable of determining trends, and its accuracy is also validated. This model yields better performance and prediction results than the previous one. Using a priori information can increase the accuracy of prediction; LSTM can better adapt to changes in a time sequence; and LSTM can be widely applied to the same type of prediction tasks as well as to other prediction tasks related to time sequences.
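
    A bare-bones Keras LSTM for one-step-ahead sequence prediction is sketched below; the extra "a priori information" is mimicked by stacking a second, auxiliary input feature alongside the event-count history, which is only a guess at how such side information might be fed in, not the model from the paper.

```python
import numpy as np
import tensorflow as tf

# Synthetic event-count series plus a crude auxiliary signal (e.g. scaled web activity).
rng = np.random.default_rng(5)
t = np.arange(500)
counts = 10 + 3 * np.sin(0.1 * t) + rng.standard_normal(500)
prior = np.roll(counts, 1) + rng.standard_normal(500)   # noisy signal correlated with the series

# Windows of the last `window` steps of both channels predict the next count.
window = 20
X = np.stack([np.stack([counts[i:i + window], prior[i:i + window]], axis=-1)
              for i in range(len(t) - window)])
y = counts[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 2)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:400], y[:400], epochs=5, verbose=0)

print("test MSE:", model.evaluate(X[400:], y[400:], verbose=0))
```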

  7. DNABP: Identification of DNA-Binding Proteins Based on Feature Selection Using a Random Forest and Predicting Binding Residues.

    Science.gov (United States)

    Ma, Xin; Guo, Jing; Sun, Xiao

    2016-01-01

    DNA-binding proteins are fundamentally important in cellular processes. Several computational methods have been developed to improve the prediction of DNA-binding proteins in previous years. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.
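
    The mRMR + IFS procedure can be approximated with off-the-shelf tools; the sketch below ranks features by mutual information (a stand-in for mRMR, which additionally penalizes feature redundancy) and then grows the feature set incrementally, keeping the size that maximizes cross-validated accuracy of a random forest. The data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=40, n_informative=8, random_state=0)

# Rank features by mutual information with the label (mRMR would also penalize redundancy).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

# Incremental feature selection: evaluate nested feature subsets of growing size.
best_k, best_score = 1, 0.0
for k in range(1, len(ranking) + 1):
    subset = ranking[:k]
    score = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                            X[:, subset], y, cv=3).mean()
    if score > best_score:
        best_k, best_score = k, score

print(f"best subset size: {best_k}, cross-validated accuracy: {best_score:.3f}")
```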

  8. Prediction of beta-turns at over 80% accuracy based on an ensemble of predicted secondary structures and multiple alignments

    Directory of Open Access Journals (Sweden)

    Kurgan Lukasz

    2008-10-01

    Full Text Available Abstract Background β-turn is a secondary protein structure type that plays significant role in protein folding, stability, and molecular recognition. To date, several methods for prediction of β-turns from protein sequences were developed, but they are characterized by relatively poor prediction quality. The novelty of the proposed sequence-based β-turn predictor stems from the usage of a window based information extracted from four predicted three-state secondary structures, which together with a selected set of position specific scoring matrix (PSSM) values serve as an input to the support vector machine (SVM) predictor. Results We show that (1) all four predicted secondary structures are useful; (2) the most useful information extracted from the predicted secondary structure includes the structure of the predicted residue, secondary structure content in a window around the predicted residue, and features that indicate whether the predicted residue is inside a secondary structure segment; (3) the PSSM values of Asn, Asp, Gly, Ile, Leu, Met, Pro, and Val were among the top ranked features, which corroborates with recent studies. The Asn, Asp, Gly, and Pro indicate potential β-turns, while the remaining four amino acids are useful to predict non-β-turns. Empirical evaluation using three nonredundant datasets shows favorable Qtotal, Qpredicted and MCC values when compared with over a dozen of modern competing methods. Our method is the first to break the 80% Qtotal barrier and achieves Qtotal = 80.9%, MCC = 0.47, and Qpredicted higher by over 6% when compared with the second best method. We use feature selection to reduce the dimensionality of the feature vector used as the input for the proposed prediction method. The applied feature set is smaller by 86, 62 and 37% when compared with the second and two third-best (with respect to MCC) competing methods, respectively. Conclusion Experiments show that the proposed method constitutes an

  9. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy for chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, and approximation components and detail components are obtained. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components. At the same time, an improved free search algorithm is utilized for predictive model parameter optimization. An auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused predicted results is less than that of any single model, and the prediction accuracy is improved. The simulation results are compared using two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
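
    A simplified version of the decompose-model-recombine idea is sketched below with PyWavelets for the wavelet decomposition, an SVR standing in for the LSSVM on the approximation component, and statsmodels' ARIMA on the detail component; the fusion step here is a plain sum rather than the Gauss-Markov weighting of the paper, and the series, wavelet, and model orders are illustrative choices.

```python
import numpy as np
import pywt
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

# Synthetic chaotic-looking series (logistic map), with the last point held out as "truth".
x = np.empty(600)
x[0] = 0.4
for i in range(1, 600):
    x[i] = 3.9 * x[i - 1] * (1 - x[i - 1])
train = x[:500]

# Wavelet decomposition into an approximation component and a residual detail component.
coeffs = pywt.wavedec(train, "db4", level=2)
approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[:len(train)]
detail = train - approx

def svr_one_step(series, window=8):
    """Fit SVR on sliding windows and return a one-step-ahead forecast."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    return model.predict(series[-window:].reshape(1, -1))[0]

# Predict each component separately, then fuse by summation (the paper uses Gauss-Markov fusion).
approx_pred = svr_one_step(approx)
detail_pred = ARIMA(detail, order=(2, 0, 1)).fit().forecast(1)[0]
print("one-step-ahead prediction:", approx_pred + detail_pred, "actual:", x[500])
```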

  10. DIOPS: A PC-Based Wave, Tide and Surf Prediction System

    National Research Council Canada - National Science Library

    Allard, Richard; Dykes, James; Kaihatu, James; Wakeham, Dean

    2005-01-01

    The Distributed Integrated Ocean Prediction System (DIOPS) is a PC-based wave, tide and surf prediction system designed to provide DoD with accurate and timely surf predictions for essentially any worldwide location...

  11. Predictive analytics technology review: Similarity-based modeling and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, James; Doan, Don; Gandhi, Devang; Nieman, Bill

    2010-09-15

    Over 11 years ago, SmartSignal introduced Predictive Analytics for eliminating equipment failures, using its patented SBM technology. SmartSignal continues to lead and dominate the market and, in 2010, went one step further and introduced Predictive Diagnostics. Now, SmartSignal is combining Predictive Diagnostics with RCM methodology and industry expertise. FMEA logic reengineers maintenance work management, eliminates unneeded inspections, and focuses efforts on the real issues. This integrated solution significantly lowers maintenance costs, protects against critical asset failures, improves commercial availability, and reduces work orders by 20-40%.

  12. Genomic prediction in families of perennial ryegrass based on genotyping-by-sequencing

    DEFF Research Database (Denmark)

    Ashraf, Bilal

    In this thesis we investigate the potential for genomic prediction in perennial ryegrass using genotyping-by-sequencing (GBS) data. An association method suited to family-based breeding systems was developed, and genomic heritabilities, genomic prediction accuracies and the effects of some key factors were...... explored. Results show that low sequencing depth caused underestimation of allele substitution effects in GWAS and overestimation of genomic heritability in prediction studies. Other factors such as SNP marker density, population structure and the size of the training population influenced the accuracy of genomic...... prediction. Overall, GBS allows for genomic prediction in breeding families of perennial ryegrass and holds good potential to expedite genetic gain and encourage the application of genomic prediction...

  13. Safety prediction for basic components of safety critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2001-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, both of which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  14. Safety prediction for basic components of safety-critical software based on static testing

    International Nuclear Information System (INIS)

    Son, H.S.; Seong, P.H.

    2000-01-01

    The purpose of this work is to develop a safety prediction method, with which we can predict the risk of software components based on static testing results at the early development stage. The predictive model combines the major factor with the quality factor for the components, which are calculated based on the measures proposed in this work. The application to a safety-critical software system demonstrates the feasibility of the safety prediction method. (authors)

  15. An Efficient Deterministic Approach to Model-based Prediction Uncertainty

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics deals with the prediction of the end of life (EOL) of a system. EOL is a random variable, due to the presence of process noise and uncertainty in the...

  16. Annotation-Based Whole Genomic Prediction and Selection

    DEFF Research Database (Denmark)

    Kadarmideen, Haja; Do, Duy Ngoc; Janss, Luc

    Genomic selection is widely used in both animal and plant species, however, it is performed with no input from known genomic or biological role of genetic variants and therefore is a black box approach in a genomic era. This study investigated the role of different genomic regions and detected QTLs...... in their contribution to estimated genomic variances and in prediction of genomic breeding values by applying SNP annotation approaches to feed efficiency. Ensembl Variant Predictor (EVP) and Pig QTL database were used as the source of genomic annotation for 60K chip. Genomic prediction was performed using the Bayes...... classes. Predictive accuracy was 0.531, 0.532, 0.302, and 0.344 for DFI, RFI, ADG and BF, respectively. The contribution per SNP to total genomic variance was similar among annotated classes across different traits. Predictive performance of SNP classes did not significantly differ from randomized SNP...

  17. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time-series continues to attract considerable attention due to the difficulty of the prediction problems compounded by the non-linear and non-stationary nature of the real world time-series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way for generating suitable features using context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...

  18. Design and development of a Personality Prediction System based on Mobile-Phone based Metrics

    OpenAIRE

    Alonso Aguilar, Carlos

    2017-01-01

    The need to communicate is part of our nature as human beings, but the way people do it has changed completely since the smartphone and the Internet appeared. At the same time, knowing someone's personality is difficult, something we normally gain only after long interaction with them. These two points motivated the choice of this TFG, whose aim is to predict human personality by collecting information from smartphones, using Big Data and Machi...

  19. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. During field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is a part of a daisy-chain structure, neighboring with other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach using the Paris model to address the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test due to fatigue cracking. The test results showed the RF impedance consistently increased as the solder joints were degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects similarly to their actual times-to-failure based on the early sensitivity of RF impedance.
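
    A highly simplified sketch of the prognostic idea follows: a particle filter tracks crack size with the Paris law as the process model and declares end of life at a critical crack size. The measurement model (a noisy impedance-derived crack estimate) and every numeric constant are illustrative assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_particles = 2000
      C_true, m = 1e-10, 3.0        # Paris-law constants (illustrative values)
      delta_K = 20.0                # assumed constant stress-intensity range
      a_critical = 5e-3             # crack size treated as end of life [m]
      meas_sigma = 5e-5             # noise of the impedance-derived crack estimate [m]

      def paris_step(a, C):
          # crack growth per load block: da/dN = C * (delta_K)^m
          return a + C * delta_K ** m

      # particles over crack size and the uncertain Paris coefficient C
      a_p = 1e-4 + rng.normal(0, 1e-5, n_particles)
      C_p = C_true * np.exp(rng.normal(0, 0.3, n_particles))

      a_true = 1e-4
      for k in range(200):                                   # monitoring phase
          a_true = paris_step(a_true, C_true)
          z = a_true + rng.normal(0, meas_sigma)             # noisy crack measurement
          a_p = paris_step(a_p, C_p) + rng.normal(0, 1e-5, n_particles)
          w = np.exp(-0.5 * ((z - a_p) / meas_sigma) ** 2)   # Gaussian likelihood
          w /= w.sum()
          idx = rng.choice(n_particles, n_particles, p=w)    # resampling
          a_p, C_p = a_p[idx], C_p[idx]

      def remaining_life(a0, C, max_blocks=20000):
          # propagate the Paris law until the critical crack size is reached
          a = a0
          for n in range(max_blocks):
              if a >= a_critical:
                  return n
              a = paris_step(a, C)
          return max_blocks

      rul = np.array([remaining_life(a, c) for a, c in zip(a_p[:500], C_p[:500])])
      print("median predicted RUL (load blocks):", np.median(rul))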

  20. NMR-based urine analysis in rats: prediction of proximal tubule kidney toxicity and phospholipidosis.

    Science.gov (United States)

    Lienemann, Kai; Plötz, Thomas; Pestel, Sabine

    2008-01-01

    The aim of safety pharmacology is the early detection of compound-induced side-effects. NMR-based urine analysis followed by multivariate data analysis (metabonomics) efficiently identifies differences between toxic and non-toxic compounds, but in most cases multiple administrations of the test compound are necessary. We tested the feasibility of detecting proximal tubule kidney toxicity and phospholipidosis with metabonomics techniques after a single compound administration as an early safety pharmacology approach. Rats were treated orally, intravenously, by inhalation or intraperitoneally with different test compounds. Urine was collected at 0-8 h and 8-24 h after compound administration, and (1)H NMR patterns were recorded from the samples. Variation of post-processing and feature extraction methods led to different views on the data. Support Vector Machines were trained on these different data sets and then aggregated as experts in an ensemble. Finally, validity was monitored with a cross-validation study using training, validation, and test data sets. Proximal tubule kidney toxicity could be predicted with reasonable total classification accuracy (85%), specificity (88%) and sensitivity (78%). In comparison to alternative histological studies, results were obtained more quickly, compound need was reduced, and, very importantly, fewer animals were needed. In contrast, the induction of phospholipidosis by the test compounds could not be predicted using NMR-based urine analysis or the previously published biomarker PAG. NMR-based urine analysis was shown to effectively predict proximal tubule kidney toxicity after single compound administration in rats. Thus, this experimental design allows early detection of toxicity risks with relatively low amounts of compound in a reasonably short period of time.

  1. KFC2: a knowledge-based hot spot prediction method based on interface solvation, atomic density, and plasticity features.

    Science.gov (United States)

    Zhu, Xiaolei; Mitchell, Julie C

    2011-09-01

    Hot spots constitute a small fraction of protein-protein interface residues, yet they account for a large fraction of the binding affinity. Based on our previous method (KFC), we present two new methods (KFC2a and KFC2b) that outperform other methods at hot spot prediction. A number of improvements were made in developing these new methods. First, we created a training data set that contained a similar number of hot spot and non-hot spot residues. In addition, we generated 47 different features, and different numbers of features were used to train the models to avoid over-fitting. Finally, two feature combinations were selected: One (used in KFC2a) is composed of eight features that are mainly related to solvent accessible surface area and local plasticity; the other (KFC2b) is composed of seven features, only two of which are identical to those used in KFC2a. The two models were built using support vector machines (SVM). The two KFC2 models were then tested on a mixed independent test set, and compared with other methods such as Robetta, FOLDEF, HotPoint, MINERVA, and KFC. KFC2a showed the highest predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.85); however, the false positive rate was somewhat higher than for other models. KFC2b showed the best predictive accuracy for hot spot residues (True Positive Rate: TPR = 0.62) among all methods other than KFC2a, and the False Positive Rate (FPR = 0.15) was comparable with other highly predictive methods. Copyright © 2011 Wiley-Liss, Inc.

  2. Simulation-Based Performance Evaluation of Predictive-Hashing Based Multicast Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Seonho Choi

    2012-12-01

    Full Text Available A predictive-hashing based Denial-of-Service (DoS)-resistant multicast authentication protocol was proposed based upon predictive-hashing, one-way key chain, erasure codes, and distillation codes techniques [4, 5]. It was claimed that this new scheme should be more resistant to various types of DoS attacks, and its worst-case resource requirements were derived in terms of coarse-level system parameters including CPU times for signature verification and erasure/distillation decoding operations, attack levels, etc. To show the effectiveness of our approach and to analyze exact resource requirements in various attack scenarios with different parameter settings, we designed and implemented an attack simulator which is platform-independent. Various attack scenarios may be created with different attack types and parameters against a receiver equipped with the predictive-hashing based protocol. The design of the simulator is explained, and the simulation results are presented with detailed resource usage statistics. In addition, resistance level to various types of DoS attacks is formulated with a newly defined resistance metric. By comparing these results to those from another approach, PRABS [8], we show that the resistance level of our protocol is greatly enhanced even in the presence of many attack streams.

  3. Online Sentence Comprehension in PPA: Verb-Based Integration and Prediction

    Directory of Open Access Journals (Sweden)

    Jennifer E Mack

    2015-05-01

    Full Text Available Introduction. Impaired language comprehension is frequently observed in primary progressive aphasia (PPA). Word comprehension deficits are characteristic of the semantic variant (PPA-S), whereas sentence comprehension deficits are more prevalent in the agrammatic (PPA-G) and logopenic (PPA-L) variants (Amici et al., 2007; Gorno-Tempini et al., 2011; Thompson et al., 2013). Word and sentence comprehension deficits have also been shown to have distinct neural substrates in PPA (Mesulam, Thompson, Weintraub, & Rogalski, in press). However, little is known about the relationship between word and sentence comprehension processes in PPA, specifically how words are accessed, combined, and used to predict upcoming elements within a sentence. A previous study demonstrated that listeners with stroke-induced agrammatic aphasia rapidly access verb meanings and use them to semantically integrate verb-arguments; however, they show deficits in using verb meanings predictively (Mack, Ji, & Thompson, 2013). The present study tested whether listeners with PPA are able to access verb meanings and to use this information to integrate and predict verb-arguments. Methods. Fifteen adults with PPA (8 with PPA-G, 3 with PPA-L, and 4 with PPA-S) and ten age-matched controls participated in two eyetracking experiments. In both experiments, participants heard sentences with restrictive verbs that were semantically compatible with only one object in a four-picture visual array (e.g., eat, when the array included a cake and three non-edible objects) and unrestrictive verbs (e.g., move) that were compatible with all four objects. The verb-based integration experiment tested access to verb meaning and its effects on integration of the direct object (e.g., Susan will eat/move the cake); the verb-based prediction experiment examined prediction of the direct object (e.g., Susan will eat/move the …). The dependent variable was the rate of fixations on the target picture (e.g., the cake) in the

  4. Predicting infant cortical surface development using a 4D varifold-based learning framework and local topography-based shape morphing.

    Science.gov (United States)

    Rekik, Islem; Li, Gang; Lin, Weili; Shen, Dinggang

    2016-02-01

    Longitudinal neuroimaging analysis methods have remarkably advanced our understanding of early postnatal brain development. However, learning predictive models to trace forth the evolution trajectories of both normal and abnormal cortical shapes remains broadly absent. To fill this critical gap, we pioneered the first prediction model for longitudinal developing cortical surfaces in infants using a spatiotemporal current-based learning framework solely from the baseline cortical surface. In this paper, we detail this prediction model and even further improve its performance by introducing two key variants. First, we use the varifold metric to overcome the limitations of the current metric for surface registration that was used in our preliminary study. We also extend the conventional varifold-based surface registration model for pairwise registration to a spatiotemporal surface regression model. Second, we propose a morphing process of the baseline surface using its topographic attributes such as normal direction and principal curvature sign. Specifically, our method learns from longitudinal data both the geometric (vertices positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we use the proposed varifold-based shape regression model to estimate geodesic cortical shape evolution trajectories for each training subject. We then build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we select the best learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints, based on similarity metrics between this baseline surface and the learnt baseline population average surface atlas. We used a leave-one-out cross validation method to predict the inner cortical surface shape at 3, 6, 9 and 12 months of age from the baseline cortical surface shape at birth. Our

  5. Structured prediction models for RNN based sequence labeling in clinical text.

    Science.gov (United States)

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models with Recurrent Neural Networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.

  6. Predicted carbonation of existing concrete building based on the Indonesian tropical micro-climate

    Science.gov (United States)

    Hilmy, M.; Prabowo, H.

    2018-03-01

    This paper aims to predict carbonation progress based on a previously published mathematical model. It briefly explains the nature of carbonation, including its processes and effects. The environmental humidity and temperature of an existing concrete building are measured and compared with data from the local Meteorological, Climatological, and Geophysical Agency. The data obtained are expressed as annual hygrothermal values, which are used as input parameters in the carbonation model. The physical properties of the observed building, such as its location, dimensions, and structural materials, are quantified and then used as further input parameters for the carbonation coefficients. The relationship between relative humidity and the rate of carbonation is established. The results can provide a basis for the repair and maintenance of existing concrete buildings and for their service-life analysis.

  7. A New Predictive Model Based on the ABC Optimized Multivariate Adaptive Regression Splines Approach for Predicting the Remaining Useful Life in Aircraft Engines

    Directory of Open Access Journals (Sweden)

    Paulino José García Nieto

    2016-05-01

    Full Text Available Remaining useful life (RUL) estimation is considered one of the most central points in prognostics and health management (PHM). The present paper describes a nonlinear hybrid ABC–MARS-based model for the prediction of the remaining useful life of aircraft engines. Indeed, it is well known that an accurate RUL estimation allows failure prevention in a more controllable way, so that effective maintenance can be carried out in appropriate time to correct impending faults. The proposed hybrid model combines multivariate adaptive regression splines (MARS), which have been successfully adopted for regression problems, with the artificial bee colony (ABC) technique. This optimization technique involves parameter setting in the MARS training procedure, which significantly influences the regression accuracy. However, its use in reliability applications has not yet been widely explored. Bearing this in mind, remaining useful life values have been predicted here by using the hybrid ABC–MARS-based model from the remaining measured parameters (input variables) for aircraft engines with success. A correlation coefficient equal to 0.92 was obtained when this hybrid ABC–MARS-based model was applied to experimental data. The agreement of this model with experimental data confirmed its good performance. The main advantage of this predictive model is that it does not require information about the previous operation states of the aircraft engine.

  8. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) in the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. The study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] was performed for five genes, analyzed according to embryo quality level. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, the gene expression levels of the five genes were taken, and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed significant expression differences between high-quality and low-quality embryos. These two genes were used for the construction of the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
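
    The comparison of the two classifiers can be sketched as below with scikit-learn, using synthetic two-gene expression values as stand-ins for the AMHR2 and LIF qPCR measurements; the data, labels and model settings are assumptions for illustration only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 58                                              # sample count matching the study
      expr = rng.normal(size=(n, 2))                      # relative expression of two genes
      quality = (expr @ np.array([1.0, -0.8]) + rng.normal(0, 1.0, n) > 0).astype(int)

      for name, model in [("binary logistic", LogisticRegression()),
                          ("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0))]:
          auc = cross_val_score(model, expr, quality, cv=5, scoring="roc_auc").mean()
          print(f"{name}: AUC ≈ {auc:.2f}")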

  9. Research on bearing life prediction based on support vector machine and its application

    International Nuclear Information System (INIS)

    Sun Chuang; Zhang Zhousuo; He Zhengjia

    2011-01-01

    Life prediction of rolling element bearings is an urgent demand in engineering practice, and effective life prediction techniques are beneficial to predictive maintenance. The support vector machine (SVM) is a machine learning method based on statistical learning theory and is advantageous for prediction. This paper develops an SVM-based model for bearing life prediction. The inputs of the model are features of the bearing vibration signal, and the output is the ratio of bearing running time to bearing failure time. The model is built on data from a few failed bearings and can fuse information from the bearing whose life is being predicted, which makes it advantageous for bearing life prediction in practice. The model is applied to the life prediction of a bearing, and the result shows that the proposed model achieves high precision.
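
    A minimal sketch of such an SVM-based life model is shown below: vibration-signal features in, running-time to failure-time ratio out. The feature list and synthetic data are placeholders, and scikit-learn's SVR is used as a generic stand-in for the paper's SVM formulation.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      # each row: [RMS, kurtosis, peak value, crest factor] of a vibration snapshot
      features = rng.normal(size=(120, 4))
      life_ratio = np.clip(0.5 + 0.2 * features[:, 0] + 0.1 * features[:, 1]
                           + rng.normal(0, 0.05, 120), 0, 1)   # running time / failure time

      model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01, gamma="scale"))
      model.fit(features[:100], life_ratio[:100])
      predicted_ratio = model.predict(features[100:])           # predicted life fraction consumed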

  10. A Geometrical-based Vertical Gain Correction for Signal Strength Prediction of Downtilted Base Station Antennas in Urban Areas

    DEFF Research Database (Denmark)

    Rodriguez, Ignacio; Nguyen, Huan Cong; Sørensen, Troels Bundgaard

    2012-01-01

    -based extension to standard empirical path loss prediction models can give quite reasonable accuracy in predicting the signal strength from tilted base station antennas in small urban macro-cells. Our evaluation is based on measurements on several sectors in a 2.6 GHz Long Term Evolution (LTE) cellular network......, with electrical antenna downtilt in the range from 0 to 10 degrees, as well as predictions based on ray-tracing and 3D building databases covering the measurement area. Although the calibrated ray-tracing predictions are highly accurate compared with the measured data, the combined LOS/NLOS COST-WI model...

  11. Parallel protein secondary structure prediction based on neural networks.

    Science.gov (United States)

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers for protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. The hydrophobicity matrix, orthogonal matrix, BLOSUM62 and PSSM (position specific scoring matrix) are tested separately as encoding schemes for the DBNN. The experimental results contribute to the design of new encoding schemes. The new binary classifier for Helix versus not-Helix (~H) for the DBNN produces a prediction accuracy of 87% when PSSM is used as the input profile. The performance of the DBNN binary classifier is comparable to other leading prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthreads and OpenMP are employed to parallelize the DBNN on the hyperthreading-enabled Intel architecture. The speedup for 16 Pthreads is 4.9 and the speedup for 16 OpenMP threads is 4 on the 4-processor shared-memory architecture. The speedup performance of both OpenMP and Pthreads is superior to that reported in other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology for the Intel architecture is efficient for parallel biological algorithms.

  12. Artificial neural network based particle size prediction of polymeric nanoparticles.

    Science.gov (United States)

    Youshia, John; Ali, Mohamed Ehab; Lamprecht, Alf

    2017-10-01

    Particle size of nanoparticles and the respective polydispersity are key factors influencing their biopharmaceutical behavior in a large variety of therapeutic applications. Predicting these attributes would skip many preliminary studies usually required to optimize formulations. The aim was to build a mathematical model capable of predicting the particle size of polymeric nanoparticles produced by a pharmaceutical polymer of choice. Polymer properties controlling the particle size were identified as molecular weight, hydrophobicity and surface activity, and were quantified by measuring polymer viscosity, contact angle and interfacial tension, respectively. A model was built using an artificial neural network, including these properties as input with particle size and polydispersity index as output. The established model successfully predicted the particle size of nanoparticles covering a range of 70–400 nm prepared from other polymers. The percentage bias for particle size prediction was 2%, 4% and 6% for the training, validation and testing data, respectively. Polymer surface activity was found to have the highest impact on the particle size, followed by viscosity and finally hydrophobicity. Results of this study successfully highlighted polymer properties affecting particle size and confirmed the usefulness of artificial neural networks in predicting the particle size and polydispersity of polymeric nanoparticles. Copyright © 2017 Elsevier B.V. All rights reserved.
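
    The mapping described above can be sketched with a small scikit-learn network: three measured polymer properties in, particle size and polydispersity index out. The synthetic data and the network topology below are assumptions; the original model architecture is not reproduced.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X = np.column_stack([rng.uniform(1, 50, 200),      # viscosity [mPa·s]
                           rng.uniform(20, 90, 200),     # contact angle [deg]
                           rng.uniform(1, 30, 200)])     # interfacial tension [mN/m]
      size = 70 + 3 * X[:, 0] + 0.5 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 10, 200)   # nm
      pdi = np.clip(0.05 + 0.002 * X[:, 0] + rng.normal(0, 0.02, 200), 0.01, 0.5)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
      model.fit(X[:160], np.column_stack([size, pdi])[:160])
      pred_size, pred_pdi = model.predict(X[160:]).T      # predictions on held-out samples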

  13. An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning.

    Science.gov (United States)

    Jo, ByungWan; Khan, Rana Muhammad Asad

    2018-03-21

    The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. This system is comprised of sensor modules, communication protocols, and a base station, running Azure Machine Learning (AML) Studio over it. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH₄, CO, SO₂, and H₂S as the most influencing gases significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML studio, which enabled the prediction of MEI. An optimum number of neurons was determined for both actual input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R² and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality.
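
    A compact sketch of the PCA-plus-ANN pipeline is given below using scikit-learn on synthetic sensor data; the Azure ML Studio setup, the true sensor readings and the chosen number of neurons are not reproduced, so all values here are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score, mean_squared_error

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))       # CH4, CO, SO2, H2S, temperature, humidity, ... (stand-ins)
      mei = 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 0.3, 500)

      X_tr, X_te, y_tr, y_te = train_test_split(X, mei, random_state=0)
      model = make_pipeline(StandardScaler(), PCA(n_components=4),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
      model.fit(X_tr, y_tr)
      pred = model.predict(X_te)
      print("R2:", r2_score(y_te, pred), "RMSE:", mean_squared_error(y_te, pred) ** 0.5)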

  14. An Internet of Things System for Underground Mine Air Quality Pollutant Prediction Based on Azure Machine Learning

    Directory of Open Access Journals (Sweden)

    ByungWan Jo

    2018-03-01

    Full Text Available The implementation of wireless sensor networks (WSNs) for monitoring the complex, dynamic, and harsh environment of underground coal mines (UCMs) is sought around the world to enhance safety. However, previously developed smart systems are limited to monitoring or, in a few cases, can report events. Therefore, this study introduces a reliable, efficient, and cost-effective internet of things (IoT) system for air quality monitoring with newly added features of assessment and pollutant prediction. This system is comprised of sensor modules, communication protocols, and a base station, running Azure Machine Learning (AML) Studio over it. Arduino-based sensor modules with eight different parameters were installed at separate locations of an operational UCM. Based on the sensed data, the proposed system assesses mine air quality in terms of the mine environment index (MEI). Principal component analysis (PCA) identified CH4, CO, SO2, and H2S as the most influencing gases significantly affecting mine air quality. The results of PCA were fed into the ANN model in AML studio, which enabled the prediction of MEI. An optimum number of neurons was determined for both actual input and PCA-based input parameters. The results showed a better performance of the PCA-based ANN for MEI prediction, with R2 and RMSE values of 0.6654 and 0.2104, respectively. Therefore, the proposed Arduino and AML-based system enhances mine environmental safety by quickly assessing and predicting mine air quality.

  15. Prediction of Geological Subsurfaces Based on Gaussian Random Field Models

    Energy Technology Data Exchange (ETDEWEB)

    Abrahamsen, Petter

    1997-12-31

    During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
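
    The core of the approach, predicting depth at unobserved locations from scattered well observations, can be illustrated with a simple kriging predictor under a Gaussian covariance model; the covariance parameters and the toy data below are assumptions, and the thesis' multi-surface and gradient extensions are not included.

      import numpy as np

      def gaussian_cov(d, sill=1.0, rng_par=500.0):
          """Covariance as a function of separation distance d (metres)."""
          return sill * np.exp(-(d / rng_par) ** 2)

      def simple_krige(obs_xy, obs_z, grid_xy, mean=0.0):
          d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
          K = gaussian_cov(d_obs) + 1e-8 * np.eye(len(obs_xy))        # observation covariance
          d_cross = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
          k = gaussian_cov(d_cross)                                   # grid-to-observation covariance
          weights = k @ np.linalg.inv(K)
          pred = mean + weights @ (obs_z - mean)
          var = gaussian_cov(0.0) - np.einsum('ij,ij->i', weights, k) # kriging variance
          return pred, var

      rng = np.random.default_rng(0)
      obs_xy = rng.uniform(0, 2000, size=(30, 2))                               # well locations [m]
      obs_z = -1500 + 50 * np.sin(obs_xy[:, 0] / 400) + rng.normal(0, 5, 30)    # observed depths [m]
      grid_xy = np.column_stack([np.linspace(0, 2000, 50), np.full(50, 1000.0)])
      depth, depth_var = simple_krige(obs_xy, obs_z, grid_xy, mean=obs_z.mean())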

  16. Prediction of base flows from catchment characteristics: a case study from Zimbabwe

    NARCIS (Netherlands)

    Mazvimavi, D.; Meijerink, A.M.J.; Stein, A.

    2004-01-01

    Base flows make up the flows of most rivers in Zimbabwe during the dry season. Prediction of base flows from basin characteristics is necessary for water resources planning of ungauged basins. Linear regression and artificial neural networks were used to predict the base flow index (BFI) from basin

  17. Distributed estimation based on observations prediction in wireless sensor networks

    KAUST Repository

    Bouchoucha, Taha

    2015-03-19

    We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE. © 2015 IEEE.
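
    The benefit of transmitting quantized innovations instead of quantized observations can be illustrated with the toy example below: because the prediction error has a much smaller dynamic range than the raw observation, the same bit budget yields a finer quantizer and a lower reconstruction error at the fusion center. The signal model and quantizer ranges are illustrative assumptions, not the letter's analytical setup.

      import numpy as np

      rng = np.random.default_rng(0)
      n_steps, bits = 200, 4
      x = 1.0 + np.cumsum(rng.normal(0, 0.02, n_steps))    # correlated sensor observations

      def uniform_quantize(v, vmax, bits):
          """Uniform quantizer on [-vmax, vmax] with the given bit budget."""
          step = 2 * vmax / 2 ** bits
          return np.clip(np.round(v / step) * step, -vmax, vmax)

      # scheme 1: quantize the raw observations over their full range
      obs_q = uniform_quantize(x, vmax=np.abs(x).max(), bits=bits)

      # scheme 2: quantize innovations; their range is much smaller, so the same bits
      # give a finer step, and the FC reconstructs x by accumulating the innovations
      rec = np.empty(n_steps)
      pred = uniform_quantize(x[0], vmax=np.abs(x).max(), bits=bits)   # coarse start-up value
      for k in range(n_steps):
          innov_q = uniform_quantize(x[k] - pred, vmax=0.2, bits=bits) # assumed innovation range
          rec[k] = pred + innov_q
          pred = rec[k]

      print("MSE, quantized observations:", np.mean((obs_q - x) ** 2))
      print("MSE, quantized innovations :", np.mean((rec - x) ** 2))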

  18. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking

    Science.gov (United States)

    2015-07-01

    corresponding cost function to be J(u) = (x_d − x)^T Q_x (x_d − x) + u^T R u, (20) where Q_x ∈ R^{K n_x × K n_x} is positive semi-definite, R and u are as in (3), x_d is a sequence of desired states, x_d = (x_{d,k+1}, ..., x_{d,k+K}), x is a sequence of predicted states, x = (x_{k+1}, ..., x_{k+K}), and K is the given prediction... [Figure 5: Definition of the robot velocities, v_k and ω_k, and the three pose variables x_k, y_k, θ_k.]
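
    A small helper evaluating the quadratic tracking cost above for stacked state and input sequences might look as follows; the dimensions, weight matrices and variable names are assumptions chosen to match the excerpt, not the report's code.

      import numpy as np

      def tracking_cost(x_pred, x_des, u, Q_x, R):
          """J(u) = (x_d - x)^T Q_x (x_d - x) + u^T R u over the prediction horizon."""
          e = x_des - x_pred                      # stacked state-tracking error, length K*n_x
          return float(e @ Q_x @ e + u @ R @ u)

      # toy dimensions: horizon K = 3, n_x = 3 pose states, n_u = 2 velocity inputs
      K, n_x, n_u = 3, 3, 2
      Q_x = np.eye(K * n_x)                       # assumed block-diagonal state weight
      R = 0.1 * np.eye(K * n_u)                   # assumed input weight
      cost = tracking_cost(np.zeros(K * n_x), np.ones(K * n_x), np.zeros(K * n_u), Q_x, R)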

  19. Prediction-based Audiovisual Fusion for Classification of Non-Linguistic Vocalisations

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Prediction plays a key role in recent computational models of the brain and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train

  20. Fuzzy logic system for BBT based fertility prediction | Yazed | Journal ...

    African Journals Online (AJOL)

    ... been obtained with accuracies of 95% and 80%, respectively. Besides, this fuzzy logic prediction system could improve current practice of the FAM technique by integrating it with Internet of Things (IoT) technology for automatic BBT charting and monitoring. Keywords: family planning; fertility; BBT; fuzzy logic.

  1. Predicting Academic Performance Based on Students' Blog and Microblog Posts

    NARCIS (Netherlands)

    Dascalu, Mihai; Popescu, Elvira; Becheru, Alexandru; Crossley, Scott; Trausan-Matu, Stefan

    2016-01-01

    This study investigates the degree to which textual complexity indices applied on students’ online contributions, corroborated with a longitudinal analysis performed on their weekly posts, predict academic performance. The source of student writing consists of blog and microblog posts, created in

  2. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...... masks degenerate to a noise vocoder....

  3. Behavior-Based Budget Management Using Predictive Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Troy Hiltbrand

    2013-03-01

    Historically, the mechanisms to perform forecasting have primarily used two common factors as a basis for future predictions: time and money. While time and money are very important for determining future budgetary spend patterns, organizations are complex systems of unique individuals with a myriad of associated behaviors, all of which have a bearing on how budget is utilized. When looking at forecasted budgets, it becomes a guessing game how budget managers will behave under a given set of conditions. This becomes relatively messy when human nature is introduced, as different managers react very differently under similar circumstances. While one manager becomes ultra-conservative during periods of financial austerity, another might be unfazed and continue to spend as before. Both might revert to a state of budgetary protectionism, masking what is truly happening at the budget-holder level in order to keep as much budget and influence as possible, while sacrificing the greater good of the organization. To predict future outcomes more accurately, models should consider not only time and money but also the behavioral patterns that have been observed across the organization. The field of predictive analytics is poised to provide the tools and methodologies organizations need to do just this: capture and leverage behaviors of the past to predict the future.

  4. Prediction of Seismic Damage-Based Degradation in RC Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Gupta, Vinay K.; Nielsen, Søren R.K.

    Estimation of structural damage from known increase in the fundamental period of a structure after an earthquake or prediction of degradation of stiffness and strength for known damage requires reliable correlations between these response functionals. This study proposes a modified Clough-Johnsto...

  5. Flexible nurse staffing based on hourly bed census predictions

    NARCIS (Netherlands)

    Kortbeek, Nikky; Braaksma, Aleida; Burger, C.A.J.; Bakker, P.J.M; Boucherie, Richardus J.

    2012-01-01

    Workload on nursing wards depends highly on patient arrivals and patient lengths of stay, which are both inherently variable. Predicting this workload and staffing nurses accordingly is essential for guaranteeing quality of care in a cost effective manner. This paper introduces a stochastic method

  6. Flexible nurse staffing based on hourly bed census predictions

    NARCIS (Netherlands)

    Kortbeek, Nikky; Braaksma, Aleida; Burger, C.A.J.; Bakker, P.J.M; Boucherie, Richardus J.

    Workloads in nursing wards depend highly on patient arrivals and lengths of stay, both of which are inherently variable. Predicting these workloads and staffing nurses accordingly are essential for guaranteeing quality of care in a cost-effective manner. This paper introduces a stochastic method

  7. Predicting Social Anxiety Treatment Outcome based on Therapeutic Email Conversations

    NARCIS (Netherlands)

    Hoogendoorn, M.; Berger, Thomas; Schulz, Ava; Stolz, Timo; Szolovits, Peter

    2016-01-01

    Predicting therapeutic outcome in the mental health domain is of utmost importance to enable therapists to provide the most effective treatment to a patient. Using information from the writings of a patient can potentially be a valuable source of information, especially now that more and more

  8. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g. sentence length, structural complexity, etc.) and other factors (e.g. an individual's reading style, age, etc.). Ideally, a reading model should be similar to a language model where the model i...

  9. Prediction of critical thinking disposition based on mentoring among ...

    African Journals Online (AJOL)

    The results of study showed that there was a significantly positive correlation between Mentoring and Critical thinking disposition among faculty members. The findings showed that 67% of variance of critical thinking disposition was defined by predictive variables. The faculty members evaluated themselves in all mentoring ...

  10. Well-log based prediction of thermal conductivity

    DEFF Research Database (Denmark)

    Fuchs, Sven; Förster, Andrea

    Rock thermal conductivity (TC) is paramount for the determination of heat flow and the calculation of temperature profiles. Due to the scarcity of drill cores compared to the availability of petrophysical well logs, methods are desired to indirectly predict TC in sedimentary basins. Most...

  11. Sequence-based feature prediction and annotation of proteins

    DEFF Research Database (Denmark)

    Juncker, Agnieszka; Jensen, Lars J.; Pierleoni, Andrea

    2009-01-01

    A recent trend in computational methods for annotation of protein function is that many prediction tools are combined in complex workflows and pipelines to facilitate the analysis of feature combinations, for example, the entire repertoire of kinase-binding motifs in the human proteome....

  12. Predictability of depression severity based on posterior alpha oscillations

    NARCIS (Netherlands)

    Jiang, H.; Popov, T.; Jylänki, P.P.; Bi, K.; Yao, Z.; Lu, Q.; Jensen, O.; Gerven, M.A.J. van

    2016-01-01

    Objective: We aimed to integrate neural data and an advanced machine learning technique to predict individual major depressive disorder (MDD) patient severity. Methods: MEG data was acquired from 22 MDD patients and 22 healthy controls (HC) resting awake with eyes closed. Individual power spectra

  13. Does Personality Predict Depression and Use of an Internet-Based Intervention for Depression among Adolescents?

    Directory of Open Access Journals (Sweden)

    Hans Christian B. Vangberg

    2012-01-01

    Full Text Available Background. Focus upon depression and prevention of its occurrence among adolescents is increasing. Novel ways of dealing with this serious problem have become available, especially by means of internet-based prevention and treatment programs for depression and anxiety. The use of Internet-based intervention programs among adolescents has revealed some difficulties in implementation that need to be further elucidated. The aim of this study is to investigate the association between personality and adolescent depression and the characteristics of users of an Internet-based intervention program. Method. The Junior Temperament and Character Inventory (JTCI), the General Self-Efficacy scale (GSE) and the Centre for Epidemiological Studies-Depression scale (CES-D) were administered to a sample (N = 1234) of Norwegian senior high-school students. Results. Multiple regression analysis revealed associations between depression and gender, and several JTCI domains and facets. In line with previous findings in adults, high Harm Avoidance and low Self-Directedness emerged as the strongest predictors of adolescent depressive symptoms. Further, in logistic regression analysis with the covariates JTCI, GSE and CES-D, the only significant variables predicting use/non-use were the CES-D and the temperament domain Reward Dependence. Conclusion. The results of this study revealed the level of depressive symptoms as the strongest predictor of use of the Internet-based intervention, and that personality might provide useful information about the users.

  14. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (ΔΔG) and denaturant (ΔΔG(H2O)) denaturation, as well as mutant thermal stability (ΔT(m)), through the application of either computational energy-based approaches or machine learning techniques. However, the accuracy associated with applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its six ordered nearest neighbors in the 3-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.

  15. Prediction of welding shrinkage deformation of bridge steel box girder based on wavelet neural network

    Science.gov (United States)

    Tao, Yulong; Miao, Yunshui; Han, Jiaqi; Yan, Feiyun

    2018-05-01

    Aiming at the low accuracy of traditional forecasting methods such as linear regression, this paper presents a wavelet neural network method for predicting the welding shrinkage deformation of a bridge steel box girder from its measured displacement. Compared with traditional forecasting methods, this scheme has better local characteristics and learning ability, which greatly improves the ability to predict deformation. Analysis of an example shows that the wavelet neural network predicts the girder deformation with higher accuracy than the traditional method and is superior to the BP neural network, conforming to the actual demands of engineering design.

  16. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations

    Directory of Open Access Journals (Sweden)

    Jian Ou

    2017-03-01

    Full Text Available The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches have the drawbacks of inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representation (PSR). With the proposed approach, operating modes of MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable in the battlefield. It helps to reduce the dependence of MFR on prior information. And MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation brought by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity.

  17. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations.

    Science.gov (United States)

    Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping

    2017-03-19

    The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches have the drawbacks of inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representation (PSR). With the proposed approach, operating modes of MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable in the battlefield. It helps to reduce the dependence of MFR on prior information. And MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation brought by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity.

  18. DNA methylation-based forensic age prediction using artificial neural networks and next generation sequencing.

    Science.gov (United States)

    Vidaki, Athina; Ballard, David; Aliferi, Anastasia; Miller, Thomas H; Barron, Leon P; Syndercombe Court, Denise

    2017-05-01

    The ability to estimate the age of the donor from recovered biological material at a crime scene can be of substantial value in forensic investigations. Aging can be complex and is associated with various molecular modifications in cells that accumulate over a person's lifetime, including epigenetic patterns. The aim of this study was to use age-specific DNA methylation patterns to generate an accurate model for the prediction of chronological age using data from whole blood. In total, 45 age-associated CpG sites were selected based on their reported age coefficients in a previous extensive study and investigated using publicly available methylation data obtained from 1156 whole blood samples (aged 2-90 years) analysed with Illumina's genome-wide methylation platforms (27K/450K). Applying stepwise regression for variable selection, 23 of these CpG sites were identified that could significantly contribute to age prediction modelling, and multiple regression analysis carried out with these markers provided an accurate prediction of age (R² = 0.92, mean absolute error (MAE) = 4.6 years). However, applying machine learning, and more specifically a generalised regression neural network model, the age prediction significantly improved (R² = 0.96), with an MAE of 3.3 years for the training set and 4.4 years for a blind test set of 231 cases. The machine learning approach used 16 CpG sites, located in 16 different genomic regions, with the top 3 predictors of age belonging to the genes NHLRC1, SCGN and CSNK1D. The proposed model was further tested using independent cohorts of 53 monozygotic twins (MAE = 7.1 years) and a cohort of 1011 disease-state individuals (MAE = 7.2 years). Furthermore, we highlighted the age markers' potential applicability in samples other than blood by predicting age with similar accuracy in 265 saliva samples (R² = 0.96) with an MAE of 3.2 years (training set) and 4.0 years (blind test). In an attempt to create a sensitive and accurate age prediction test, a next
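
    The modelling step can be sketched as below: regress chronological age on CpG beta-values, comparing ordinary multiple regression with a small feed-forward network (used here only as a stand-in for the paper's generalised regression neural network). The CpG count, coefficients and data are synthetic assumptions.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(0)
      n_samples, n_cpg = 1000, 23
      beta = rng.uniform(0, 1, size=(n_samples, n_cpg))            # methylation beta-values
      coef = rng.normal(0, 15, n_cpg)                              # synthetic age coefficients
      age = np.clip(40 + beta @ coef / n_cpg * 10 + rng.normal(0, 4, n_samples), 2, 90)

      X_tr, X_te, y_tr, y_te = train_test_split(beta, age, random_state=0)
      for name, model in [("multiple regression", LinearRegression()),
                          ("neural network", MLPRegressor(hidden_layer_sizes=(32,),
                                                          max_iter=5000, random_state=0))]:
          model.fit(X_tr, y_tr)
          mae = mean_absolute_error(y_te, model.predict(X_te))
          print(f"{name}: MAE ≈ {mae:.1f} years")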

  19. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Peng Lu

    2018-01-01

    Full Text Available Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and predictions from shallow neural networks show large variance. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined independently, and unsupervised training and supervised optimization are combined, ensuring the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively.

  20. Predictive factors for cosmetic surgery: a hospital-based investigation.

    Science.gov (United States)

    Li, Jun; Li, Qian; Zhou, Bei; Gao, Yanli; Ma, Jiehua; Li, Jingyun

    2016-01-01

    Cosmetic surgery is becoming increasingly popular in China. However, reports on the predictive factors for cosmetic surgery in Chinese individuals are scarce in the literature. We retrospectively analyzed 4550 cosmetic surgeries performed from January 2010 to December 2014 at a single center in China. Data collection included patient demographics and type of cosmetic surgery. Predictive factors were age, sex, marital status, occupational status, educational degree, and having had children. Predictive factors for the three major cosmetic surgeries were determined using a logistic regression analysis. Patients aged 19-34 years accounted for the most popular surgical procedures (76.9 %). The most commonly requested procedures were eye surgery, Botox injection, and nevus removal. Logistic regression analysis showed that higher education level (college, P = 0.01, OR 1.21) was predictive for eye surgery. Age (19-34 years, P = 0.00, OR 33.39; 35-50, P = 0.00, OR 31.34; ≥51, P = 0.00, OR 16.42), female sex (P = 0.00, OR 9.19), employment (service occupations, P = 0.00, OR 2.31; non-service occupations, P = 0.00, OR 1.76), and higher education level (college, P = 0.00, OR 1.39) were independent predictive factors for Botox injection. Married status (P = 0.00, OR 1.57), employment (non-service occupations, P = 0.00, OR 1.50), higher education level (masters, P = 0.00, OR 6.61), and having children (P = 0.00, OR 1.45) were independent predictive factors for nevus removal. The principal three cosmetic surgeries (eye surgery, Botox injection, and nevus removal) were associated with multiple variables. Patients employed in non-service occupations were more inclined to undergo Botox injection and nevus removal. Cohort study, Level III.

  1. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    Science.gov (United States)

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise, which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  2. BPP: a sequence-based algorithm for branch point prediction.

    Science.gov (United States)

    Zhang, Qing; Fan, Xiaodan; Wang, Yejun; Sun, Ming-An; Shao, Jianlin; Guo, Dianjing

    2017-10-15

    Although high-throughput sequencing methods have been proposed to identify splicing branch points in the human genome, these methods can only detect a small fraction of the branch points subject to the sequencing depth, experimental cost and the expression level of the mRNA. An accurate computational model for branch point prediction is therefore an ongoing objective in human genome research. We here propose a novel branch point prediction algorithm that utilizes information on the branch point sequence and the polypyrimidine tract. Using experimentally validated data, we demonstrate that our proposed method outperforms existing methods. Availability and implementation: https://github.com/zhqingit/BPP. djguo@cuhk.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. Prediction of breast cancer risk based on common genetic variants in women of East Asian ancestry.

    Science.gov (United States)

    Wen, Wanqing; Shu, Xiao-Ou; Guo, Xingyi; Cai, Qiuyin; Long, Jirong; Bolla, Manjeet K; Michailidou, Kyriaki; Dennis, Joe; Wang, Qin; Gao, Yu-Tang; Zheng, Ying; Dunning, Alison M; García-Closas, Montserrat; Brennan, Paul; Chen, Shou-Tung; Choi, Ji-Yeob; Hartman, Mikael; Ito, Hidemi; Lophatananon, Artitaya; Matsuo, Keitaro; Miao, Hui; Muir, Kenneth; Sangrajrang, Suleeporn; Shen, Chen-Yang; Teo, Soo H; Tseng, Chiu-Chen; Wu, Anna H; Yip, Cheng Har; Simard, Jacques; Pharoah, Paul D P; Hall, Per; Kang, Daehee; Xiang, Yongbing; Easton, Douglas F; Zheng, Wei

    2016-12-08

    Approximately 100 common breast cancer susceptibility alleles have been identified in genome-wide association studies (GWAS). The utility of these variants in breast cancer risk prediction models has not been evaluated adequately in women of Asian ancestry. We evaluated 88 breast cancer risk variants that were identified previously by GWAS in 11,760 cases and 11,612 controls of Asian ancestry. SNPs confirmed to be associated with breast cancer risk in Asian women were used to construct a polygenic risk score (PRS). The relative and absolute risks of breast cancer by PRS percentile were estimated based on the PRS distribution and were used to stratify women into different levels of breast cancer risk. We confirmed significant associations with breast cancer risk for SNPs in 44 of the 78 previously reported loci. Compared with women in the middle quintile of the PRS, women in the top 1% group had a 2.70-fold elevated risk of breast cancer (95% CI: 2.15-3.40). The risk prediction model with the PRS had an area under the receiver operating characteristic curve of 0.606. The lifetime risk of breast cancer for Shanghai Chinese women in the lowest and highest 1% of the PRS was 1.35% and 10.06%, respectively. Approximately one-half of GWAS-identified breast cancer risk variants can be directly replicated in East Asian women. Collectively, common genetic variants are important predictors of breast cancer risk, and using them could help identify women at high risk of breast cancer.
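
    The construction of a PRS is typically a weighted sum of risk-allele dosages; the sketch below illustrates that idea with synthetic genotypes and hypothetical per-allele odds ratios, so the numbers are not those of the study.

```python
# Sketch (with synthetic data) of the usual construction of a polygenic risk score (PRS):
# a weighted sum of risk-allele dosages, with per-SNP weights taken as log odds ratios.
import numpy as np

rng = np.random.default_rng(2)
n_women, n_snps = 10000, 44              # 44 confirmed risk SNPs, as in the abstract
dosages = rng.binomial(2, 0.3, size=(n_women, n_snps))   # 0/1/2 copies of the risk allele
log_or = np.log(rng.uniform(1.02, 1.3, n_snps))          # hypothetical per-allele odds ratios

prs = dosages @ log_or                   # polygenic risk score per woman
percentile = np.argsort(np.argsort(prs)) / n_women * 100

# Compare the top 1% of the PRS distribution against the middle quintile (40th-60th pct).
top1 = prs[percentile >= 99]
middle = prs[(percentile >= 40) & (percentile < 60)]
print(f"mean PRS, top 1%: {top1.mean():.2f}  vs middle quintile: {middle.mean():.2f}")
```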

  4. Predicting Software Projects Cost Estimation Based on Mining Historical Data

    OpenAIRE

    Najadat, Hassan; Alsmadi, Izzat; Shboul, Yazan

    2012-01-01

    In this research, a hybrid cost estimation model is proposed to produce a realistic prediction model that takes into consideration software project, product, process, and environmental elements. A cost estimation dataset is built from a large number of open source projects. Those projects are divided into three domains: communication, finance, and game projects. Several data mining techniques are used to classify software projects in terms of their development complexity. Data mining techniqu...

  5. Decline Curve Based Models for Predicting Natural Gas Well Performance

    OpenAIRE

    Kamari, Arash; Mohammadi, Amir H.; Lee, Moonyong; Mahmood, Tariq; Bahadori, Alireza

    2016-01-01

    The productivity of a gas well declines over its production life, eventually to the point where it can no longer cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least squares support vector machine (LSSVM) approach, adaptive neuro-fuzzy ...

  6. Multiple-Factor Based Sparse Urban Travel Time Prediction

    Directory of Open Access Journals (Sweden)

    Xinyan Zhu

    2018-02-01

    Full Text Available The prediction of travel time is challenging given the sparseness of real-time traffic data and the uncertainty of travel, because it is influenced by multiple factors on congested urban road networks. In this paper, we propose a three-layer neural network that incorporates multiple factors to estimate travel time from big probe-vehicle data. The procedure includes the following three steps. First, we aggregate data according to the travel time of a single taxi traveling a target link on working days, as traffic flows display similar patterns over a weekly cycle. We then extract feature relationships between target and adjacent links at 30 min intervals; about 224,830,178 records are extracted from probe vehicles. Second, we design a three-layer artificial neural network model, with eight neurons in the input layer and one neuron in the output layer. Finally, the trained neural network model is used for link travel time prediction, and different factors are included to examine their influence on the link travel time. Our model is verified using historical data from probe vehicles collected from May to July 2014 in Wuhan, China. The results show that the designed artificial neural network model can predict link travel time and reveal the influence of different factors on it.
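
    A minimal sketch of a comparable three-layer network (eight inputs, one hidden layer, one output) is shown below; the input features, their relationship to travel time and the data are invented for illustration and do not reproduce the probe-vehicle pipeline.

```python
# Rough sketch of the kind of three-layer network described (8 input neurons, one hidden
# layer, 1 output) for link travel time prediction. Data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 5000
X = rng.uniform(0, 1, size=(n, 8))      # e.g. time of day, weekday, adjacent-link times, weather...
# Hypothetical target: travel time (seconds) as a nonlinear mix of the inputs plus noise.
y = 60 + 120 * X[:, 0] + 80 * X[:, 1] * X[:, 2] + 30 * np.sin(6 * X[:, 3]) + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=3).fit(X_tr, y_tr)
print(f"MAE on held-out samples: {mean_absolute_error(y_te, net.predict(X_te)):.1f} s")
```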

  7. Time Series Outlier Detection Based on Sliding Window Prediction

    Directory of Open Access Journals (Sweden)

    Yufeng Yu

    2014-01-01

    Full Text Available In order to detect outliers in hydrological time series data for improving data quality and decision-making quality related to the design, operation, and management of water resources, this research develops a time series outlier detection method for hydrologic data that can be used to identify data that deviate from historical patterns. The method first builds a forecasting model on the historical data and then uses it to predict future values. Anomalies are assumed to take place if the observed values fall outside a given prediction confidence interval (PCI), which can be calculated from the predicted value and a confidence coefficient. The use of the PCI as a threshold is based mainly on the fact that it accounts for the uncertainty in the data series parameters of the forecasting model, addressing the problem of selecting a suitable threshold. The method performs fast, incremental evaluation of data as it becomes available, scales to large quantities of data, and requires no preclassification of anomalies. Experiments with different real-world hydrologic time series showed that the proposed method is fast, correctly identifies abnormal data, and can be used for hydrologic time series analysis.
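
    The sketch below illustrates the core idea with a deliberately simplified prediction confidence interval (a sliding-window mean plus or minus a multiple of the window standard deviation) applied to a synthetic discharge series; the authors' actual forecasting model and PCI formula are not reproduced here.

```python
# Simplified sketch of the idea: forecast each point from a sliding window of history and
# flag it as an outlier when it falls outside the prediction confidence interval (PCI).
import numpy as np

def detect_outliers(series, window=24, k=3.0):
    """Flag points outside mean +/- k*std of the preceding window (a simple PCI stand-in)."""
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        pred, spread = hist.mean(), hist.std(ddof=1)
        lower, upper = pred - k * spread, pred + k * spread
        flags[t] = not (lower <= series[t] <= upper)
    return flags

rng = np.random.default_rng(4)
flow = 100 + 10 * np.sin(np.arange(500) / 20) + rng.normal(0, 2, 500)   # synthetic discharge
flow[[120, 300, 421]] += 40                                             # injected anomalies
print(np.flatnonzero(detect_outliers(flow)))
```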

  8. Introducing spatial information into predictive NF-kappaB modelling--an agent-based approach.

    Directory of Open Access Journals (Sweden)

    Mark Pogson

    2008-06-01

    Full Text Available Nature is governed by local interactions among lower-level sub-units, whether at the cell, organ, organism, or colony level. Adaptive system behaviour emerges via these interactions, which integrate the activity of the sub-units. To understand the system level it is necessary to understand the underlying local interactions. Successful models of local interactions at different levels of biological organisation, including epithelial tissue and ant colonies, have demonstrated the benefits of such 'agent-based' modelling. Here we present an agent-based approach to modelling a crucial biological system--the intracellular NF-kappaB signalling pathway. The pathway is vital to immune response regulation, and is fundamental to basic survival in a range of species. Alterations in pathway regulation underlie a variety of diseases, including atherosclerosis and arthritis. Our modelling of individual molecules, receptors and genes provides a more comprehensive outline of regulatory network mechanisms than previously possible with equation-based approaches. The method also permits consideration of structural parameters in pathway regulation; here we predict that inhibition of NF-kappaB is directly affected by actin filaments of the cytoskeleton sequestering excess inhibitors, therefore regulating steady-state and feedback behaviour.

  9. FERAL : Network-based classifier with application to breast cancer outcome prediction

    NARCIS (Netherlands)

    Allahyar, A.; De Ridder, J.

    2015-01-01

    Motivation: Breast cancer outcome prediction based on gene expression profiles is an important strategy for personalized patient care. To improve the performance and the consistency of the markers discovered by the initial molecular classifiers, network-based outcome prediction methods (NOPs) have been proposed.

  10. Predicting nosocomial lower respiratory tract infections by a risk index based system

    NARCIS (Netherlands)

    Chen, Yong; Shan, Xue; Zhao, Jingya; Han, Xuelin; Tian, Shuguang; Chen, Fangyan; Su, Xueting; Sun, Yansong; Huang, Liuyu; Grundmann, Hajo; Wang, Hongyuan; Han, Li

    2017-01-01

    Although lower respiratory tract infections (LRTIs) are among the most common types of nosocomial infection, there is currently no simple model for predicting them. This study aims to develop a risk index based system for predicting nosocomial LRTIs based on data from a large point-prevalence

  11. Efficient predictive model-based and fuzzy control for green urban mobility

    NARCIS (Netherlands)

    Jamshidnejad, A.

    2017-01-01

    In this thesis, we develop efficient predictive model-based control approaches, including model-predictive control (MPC) and model-based fuzzy control, for application in urban traffic networks with the aim of reducing a combination of the total time spent by the vehicles within the network and the

  12. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  13. Can we predict cognitive deficits based on cognitive complaints?

    Directory of Open Access Journals (Sweden)

    Ewa Małgorzata Szepietowska

    2017-03-01

    Full Text Available Objective: The aim of the study was to determine whether the intensity of cognitive complaints can, in conjunction with other selected variables, predict the general level of cognitive functions evaluated with the Montreal Cognitive Assessment (MoCA) test. Current reports do not offer clear conclusions on this subject. Some data indicate that cognitive complaints have a predictive value for low scores in standardised tasks, suggesting cognitive dysfunction (e.g. mild cognitive impairment). Other data, however, do not support the predictive role of complaints, and show no relationship between the complaints and the results of cognitive tests. Material and methods: The study included 118 adults (58 women and 60 men). We used the MoCA test, a self-report questionnaire assessing the intensity of cognitive complaints (Patient-Reported Outcomes in Cognitive Impairment – PROCOG), the Dysexecutive Questionnaire/Self (DEX-S), and selected subtests of the Wechsler Adult Intelligence Scale-Revised (WAIS-R PL). On the basis of the results from the MoCA test, two separate groups were created, one comprising respondents with lower results, and one comprising those who obtained scores indicating a normal level of cognitive function. We compared these groups with respect to the severity of the complaints and the results obtained with the other methods. Logistic regression analysis was performed taking into account the independent variables (gender, age, results in PROCOG and DEX-S, and neurological condition) and the dependent variable (dichotomized result in MoCA). Results: Groups with different levels of performance in MoCA differed in some cognitive abilities and in the severity of complaints related to semantic memory, anxiety associated with a sense of deficit and loss of skills, but provided similar self-assessments regarding the efficiency of episodic memory, long-term memory, social skills and executive functions. The severity of complaints does not allow

  14. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    Science.gov (United States)

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility into three methods for predicting protein interactions based on similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it improves neither these methods nor the original mirrortree method in predicting other types of interactions. That improvement comes at no cost in terms of applicability, since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.
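
    The underlying mirrortree score is essentially a correlation between the pairwise distance matrices of two protein families over the same set of species; the sketch below shows that computation on synthetic distance matrices (in practice the distances come from alignments or trees, optionally restricted to columns with a given predicted accessibility).

```python
# Bare-bones sketch of the mirrortree idea: co-evolution is scored as the correlation
# between the pairwise distance matrices of two protein families over the same species.
# Distances here are placeholders; in practice they come from alignments/phylogenetic trees,
# optionally restricted to surface-exposed columns as in the accessibility-aware variants.
import numpy as np
from scipy.stats import pearsonr

def mirrortree_score(dist_a, dist_b):
    """Pearson correlation of the upper triangles of two species-by-species distance matrices."""
    iu = np.triu_indices_from(dist_a, k=1)
    return pearsonr(dist_a[iu], dist_b[iu])[0]

rng = np.random.default_rng(5)
n_species = 20
base = rng.uniform(0.1, 1.0, size=(n_species, n_species))
shared = (base + base.T) / 2                      # a common evolutionary signal
np.fill_diagonal(shared, 0.0)

noise = rng.normal(0, 0.05, size=shared.shape)
dist_a = shared
dist_b = shared + (noise + noise.T) / 2           # a family that largely mirrors family A
print(f"mirrortree correlation: {mirrortree_score(dist_a, dist_b):.2f}")
```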

  15. Highlights from the previous volumes

    Science.gov (United States)

    Vergini Eduardo, G.; Pan, Y.; al., Vardi R. et; al., Akkermans Eric et; et al.

    2014-01-01

    Semiclassical propagation up to the Heisenberg time; Superconductivity and magnetic order in the half-Heusler compound ErPdBi; An experimental evidence-based computational paradigm for new logic-gates in neuronal activity; Universality in the symmetric exclusion process and diffusive systems

  16. [Immediate prediction of recovery, based on emotional impact of vertigo].

    Science.gov (United States)

    Dal-Lago, Andrés H; Ceballos-Lizarraga, Ricardo; Carmona, Sergio

    2014-01-01

    This work presents a deeper study of the comorbidity between anxiety and vestibular pathology. The aim was to understand why patients do not feel «fully recovered» even though the treating professionals discharge them. We studied the personality features that can favour the continuity of the condition. The questionnaire for measuring the emotional impact of vertigo makes it possible to determine whether the patient has a psychological style with a tendency to develop pathological anxiety levels. Anxiety is a subjective characteristic that strongly determines difficulties with medical treatment. The questionnaire was applied in parallel to 198 patients in Argentina and Mexico. Each pathology was treated by standard medical procedures. The study focused on determining the correlation between «feeling fully recovered or not at the end of treatment» and the questionnaire scores obtained before treatment. In more than 80% of cases, high scores (>15 points) on the questionnaire were correlated with difficulty in achieving full recovery from the pathology after medical treatment. Objective assessments (duration and intensity of symptoms, time of onset of the disease, etc.) do not accurately predict possible difficulties during treatment of vertigo. Consequently, we consider the patient's subjective assessment of how the vestibular pathology affects him or her to be determinant. That key information allows us to predict the course of the illness and the probability of a full recovery. Copyright © 2013 Elsevier España, S.L. All rights reserved.

  17. Epileptic seizure prediction based on a bivariate spectral power methodology.

    Science.gov (United States)

    Bandarabadi, Mojtaba; Teixeira, Cesar A; Direito, Bruno; Dourado, Antonio

    2012-01-01

    The spectral power of 5 frequently considered frequency bands (Alpha, Beta, Gamma, Theta and Delta) for 6 EEG channels is computed, and all possible pairwise combinations among the resulting 30 features are then used to create a 435-dimensional feature space. Two new feature selection methods are introduced to choose the best candidate features and to reduce the dimensionality of this feature space. The selected features are then fed to Support Vector Machines (SVMs) that classify the cerebral state into preictal and non-preictal classes. The outputs of the SVM are regularized using a method that accounts for the classification dynamics of the preictal class, also known as the "Firing Power" method. The results obtained using our feature selection approaches are compared with the ones obtained using the minimum Redundancy Maximum Relevance (mRMR) feature selection method. The results for a group of 12 patients of the EPILEPSIAE database, containing 46 seizures and 787 hours of multichannel recordings as out-of-sample data, indicate the efficiency of the bivariate approach as well as of the two new feature selection methods. The best results presented a sensitivity of 76.09% (35 of 46 seizures predicted) and a false prediction rate of 0.15 h⁻¹.
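
    A condensed, hedged sketch of the feature construction is given below: band powers for five bands over six channels (30 features), all pairwise combinations (here taken as ratios, giving 435 features), an SVM classifier, and a moving-average stand-in for the Firing Power regularisation; the EEG windows and labels are synthetic.

```python
# Condensed sketch of the feature construction described: band powers for 5 bands x 6 EEG
# channels (30 features), all pairwise combinations (here, ratios) giving 435 bivariate
# features, then an SVM classifier. Data are synthetic; the "Firing Power" regularisation
# is approximated by a simple moving average of the classifier output.
import numpy as np
from itertools import combinations
from scipy.signal import welch
from sklearn.svm import SVC

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(window, fs=256):
    """Return 30 spectral-power features for a (channels x samples) EEG window."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].mean(axis=1))          # one value per channel per band
    return np.array(feats)

def bivariate_features(powers, eps=1e-12):
    """All 435 pairwise ratios among the 30 univariate power features."""
    return np.array([powers[i] / (powers[j] + eps) for i, j in combinations(range(len(powers)), 2)])

rng = np.random.default_rng(6)
windows = rng.normal(0, 1, size=(200, 6, 1280))          # 200 windows, 6 channels, 5 s at 256 Hz
X = np.array([bivariate_features(band_powers(w)) for w in windows])
y = rng.integers(0, 2, 200)                              # preictal / non-preictal labels (synthetic)

clf = SVC(probability=True).fit(X[:150], y[:150])
scores = clf.predict_proba(X[150:])[:, 1]
firing_power = np.convolve(scores, np.ones(5) / 5, mode="same")   # smoothed alarm signal
print(firing_power[:10].round(2))
```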

  18. Predicting animal production on sourveld: a species-based approach

    African Journals Online (AJOL)

    The model was based upon measured ingestive and digestive characteristics of different grass species and incorporates an explicit digestive constraint based upon rumen mass and turnover rate. The model is illustrated with graphs, diagrams and tables. Keywords: ADG; Andropogon appendiculatus; Average daily gain; Cattle; Cynodon ...

  19. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method that is based on a Hidden Markov Model (HMM). This method can accurately predict a vehicle's entire route as early in a trip's lifetime as possible, without requiring origins and destinations to be input beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM in preparation for route prediction, using a training-set extension method based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, experimental results demonstrating the effectiveness of the HMM-based route predictions are shown.
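
    One ingredient of such a model is an add-one (Laplace) smoothed transition matrix over road links; the toy sketch below estimates one from a handful of invented routes and ranks the most likely next link.

```python
# Toy sketch of one ingredient of the method: estimating a transition matrix over road
# links from observed routes with add-one (Laplace) smoothing, then ranking the most
# likely next link. Routes and link IDs are made up for illustration.
import numpy as np

links = ["A", "B", "C", "D"]
idx = {l: i for i, l in enumerate(links)}
routes = [["A", "B", "C"], ["A", "B", "D"], ["B", "C", "D"], ["A", "B", "C", "D"]]

counts = np.zeros((len(links), len(links)))
for route in routes:
    for prev, nxt in zip(route, route[1:]):
        counts[idx[prev], idx[nxt]] += 1

# Add-one (Laplace) smoothing avoids zero probabilities for unseen transitions.
transition = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

current = "B"
best_next = links[int(np.argmax(transition[idx[current]]))]
print(f"P(next | {current}) = {transition[idx[current]].round(2)}, most likely next link: {best_next}")
```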

  20. Predicting hot spots in protein interfaces based on protrusion index, pseudo hydrophobicity and electron-ion interaction pseudopotential features

    Science.gov (United States)

    Xia, Junfeng; Yue, Zhenyu; Di, Yunqiang; Zhu, Xiaolei; Zheng, Chun-Hou

    2016-01-01

    The identification of hot spots, a small subset of protein interface residues that accounts for the majority of binding free energy, is becoming more important for research on drug design and cancer development. Based on our previous methods (APIS and KFC2), we propose here a novel hot spot prediction method. We first constructed a wide variety of 108 sequence, structural, and neighborhood features to characterize potential hot spot residues, including conventional features and a new one (pseudo hydrophobicity) exploited in this study. We then selected the 3 top-ranking features that contribute the most to the classification through a two-step feature selection process consisting of the minimal-redundancy-maximal-relevance algorithm and an exhaustive search. We used support vector machines to build our final prediction model. When testing our model on an independent test set, our method showed the highest F1-score of 0.70 and MCC of 0.46 compared with existing state-of-the-art hot spot prediction methods. Our results indicate that these features are more effective than the conventional features considered previously, and that the combination of our and traditional features may support the creation of a discriminative feature set for efficient prediction of hot spots in protein interfaces. PMID:26934646

  1. Prediction of Industrial Electric Energy Consumption in Anhui Province Based on GA-BP Neural Network

    Science.gov (United States)

    Zhang, Jiajing; Yin, Guodong; Ni, Youcong; Chen, Jinlan

    2018-01-01

    In order to improve the prediction accuracy of industrial electric energy consumption, a prediction model based on a genetic algorithm and a neural network is proposed. The model uses a genetic algorithm to optimize the weights and thresholds of a BP neural network, and it is applied to predict industrial electric energy consumption in Anhui Province. Comparison experiments between the GA-BP prediction model and a plain BP neural network model show that the GA-BP model is more accurate while using fewer neurons in the hidden layer.

  2. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  3. An IoT Based Predictive Connected Car Maintenance Approach

    Directory of Open Access Journals (Sweden)

    Rohit Dhall

    2017-03-01

    Full Text Available The Internet of Things (IoT) is fast emerging and becoming an almost basic necessity in daily life. The concept of using technology in our daily life is not new, but with the advancements in technology, its impact can be seen in almost all aspects of life. Today, many aspects of a person's daily life, such as health, location, and movement, can be monitored and analyzed using information captured from various connected devices. This paper discusses one such use case, which can be implemented by the automobile industry using technological advancements in the areas of IoT and analytics. 'Connected car' is a term often associated with cars and other passenger vehicles that are capable of internet connectivity and of sharing various kinds of data with backend applications. The data being shared can describe the location and speed of the car, the status of various parts and lubricants, and whether the car needs urgent service. Once data are transmitted to the backend services, various workflows can be created to take necessary actions, e.g. scheduling a service with the car service provider; or, if large numbers of cars are in the same location, the traffic management system can take appropriate action. Connected cars can also communicate with each other and send alerts to each other in certain scenarios, such as a possible crash. This paper discusses how the concept of connected cars can be used to perform predictive car maintenance. It also discusses how certain technology components, i.e., Eclipse Mosquitto and Eclipse Paho, can be used to implement a predictive connected car use case.
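
    As a hedged illustration of the plumbing (not the paper's implementation), the sketch below publishes a small telemetry message with the Eclipse Paho Python client to an MQTT broker such as Eclipse Mosquitto; the broker address, topic name and payload fields are placeholders.

```python
# Illustrative sketch only: publishing connected-car telemetry to an MQTT broker
# (e.g. Eclipse Mosquitto) with the Eclipse Paho Python client. Broker address, topic
# name and payload fields are hypothetical placeholders.
import json
import paho.mqtt.publish as publish

telemetry = {
    "vin": "TESTVIN0000000001",      # hypothetical vehicle identifier
    "speed_kmh": 72,
    "engine_oil_ok": False,          # a flag the backend could turn into a service booking
    "lat": 28.61, "lon": 77.21,
}

publish.single(
    topic="fleet/car/TESTVIN0000000001/telemetry",
    payload=json.dumps(telemetry),
    qos=1,
    hostname="broker.example.com",   # placeholder broker; Mosquitto listens on 1883 by default
    port=1883,
)
```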

  4. A prospective evaluation of treatment with Selective Internal Radiation Therapy (SIR-spheres) in patients with unresectable liver metastases from colorectal cancer previously treated with 5-FU based chemotherapy

    International Nuclear Information System (INIS)

    Lim, L; Gibbs, P; Yip, D; Shapiro, JD; Dowling, R; Smith, D; Little, A; Bailey, W; Liechtenstein, M

    2005-01-01

    To prospectively evaluate the efficacy and safety of selective internal radiation (SIR) spheres in patients with inoperable liver metastases from colorectal cancer who have failed 5FU based chemotherapy. Patients were prospectively enrolled at three Australian centres. All patients had previously received 5-FU based chemotherapy for metastatic colorectal cancer. Patients were ECOG 0–2 and had liver dominant or liver only disease. Concurrent 5-FU was given at investigator discretion. Thirty patients were treated between January 2002 and March 2004. As of July 2004 the median follow-up is 18.3 months. Median patient age was 61.7 years (range 36 – 77). Twenty-nine patients are evaluable for toxicity and response. There were 10 partial responses (33%), with the median duration of response being 8.3 months (range 2–18) and median time to progression of 5.3 mths. Response rates were lower (21%) and progression free survival shorter (3.9 mths) in patients that had received all standard chemotherapy options (n = 14). No responses were seen in patients with a poor performance status (n = 3) or extrahepatic disease (n = 6). Overall treatment related toxicity was acceptable, however significant late toxicity included 4 cases of gastric ulceration. In patients with metastatic colorectal cancer that have previously received treatment with 5-FU based chemotherapy, treatment with SIR-spheres has demonstrated encouraging activity. Further studies are required to better define the subsets of patients most likely to respond

  5. TFpredict and SABINE: sequence-based prediction of structural and functional characteristics of transcription factors.

    Directory of Open Access Journals (Sweden)

    Johannes Eichner

    Full Text Available One of the key mechanisms of transcriptional control is the specific connection between transcription factors (TFs) and cis-regulatory elements in gene promoters. The elucidation of these specific protein-DNA interactions is crucial to gain insights into the complex regulatory mechanisms and networks underlying the adaptation of organisms to dynamically changing environmental conditions. As experimental techniques for determining TF binding sites are expensive and mostly performed for selected TFs only, accurate computational approaches are needed to analyze transcriptional regulation in eukaryotes on a genome-wide level. We implemented a four-step classification workflow which, for a given protein sequence, (1) discriminates TFs from other proteins, (2) determines the structural superclass of TFs, (3) identifies the DNA-binding domains of TFs and (4) predicts their cis-acting DNA motif. While existing tools were extended and adapted for performing the latter two prediction steps, the first two steps are based on a novel numeric sequence representation which allows for combining existing knowledge from a BLAST scan with robust machine learning-based classification. By evaluation on a set of experimentally confirmed TFs and non-TFs, we demonstrate that our new protein sequence representation facilitates more reliable identification and structural classification of TFs than previously proposed sequence-derived features. The algorithms underlying our proposed methodology are implemented in the two complementary tools TFpredict and SABINE. The online and stand-alone versions of TFpredict and SABINE are freely available to academics at http://www.cogsys.cs.uni-tuebingen.de/software/TFpredict/ and http://www.cogsys.cs.uni-tuebingen.de/software/SABINE/.

  6. Protein Function Prediction Based on Sequence and Structure Information

    KAUST Repository

    Smaili, Fatima Z.

    2016-01-01

    operate. In this master thesis project, we worked on inferring protein functions based on the primary protein sequence. In the approach we follow, 3D models are first constructed using I-TASSER. Functions are then deduced by structurally matching

  7. An Artificial Neural Network Based Short-term Dynamic Prediction of Algae Bloom

    Directory of Open Access Journals (Sweden)

    Yao Junyang

    2014-06-01

    Full Text Available This paper proposes a method for short-term prediction of algae blooms based on an artificial neural network. First, principal component analysis is applied to the water environmental factors in algae bloom raceway ponds to obtain the main factors that influence the formation of algae blooms. Then, a short-term dynamic prediction model based on a neural network is built, with the current chlorophyll_a values as input and the chlorophyll_a values at the next moment as output. Simulation results show that the model can realize short-term prediction of algae blooms effectively.

  9. Analysis of direct contact membrane distillation based on a lumped-parameter dynamic predictive model

    KAUST Repository

    Karam, Ayman M.; Alsaadi, Ahmad Salem; Ghaffour, NorEddine; Laleg-Kirati, Taous-Meriem

    2016-01-01

    Membrane distillation (MD) is an emerging technology that has great potential for sustainable water desalination. In order to pave the way for successful commercialization of MD-based water desalination techniques, adequate and accurate dynamical models of the process are essential. This paper presents the predictive capabilities of a lumped-parameter dynamic model for direct contact membrane distillation (DCMD) and discusses the results under a wide range of steady-state and dynamic conditions. Unlike previous studies, the proposed model captures the time response of the spatial temperature distribution along the flow direction. It also directly solves for the local temperatures at the membrane interfaces, which allows local flux values to be accurately modelled and calculated along with other intrinsic variables of great influence on the process, such as the temperature polarization coefficient (TPC). The proposed model is based on energy and mass conservation principles and on the analogy between thermal and electrical systems. Experimental data were collected to validate the steady-state and dynamic responses of the model, and the obtained results show great agreement with the experimental data. The paper discusses the results of several simulations under various conditions to optimize the DCMD process efficiency and analyze its response, demonstrating some potential applications of the proposed model for scale-up and design studies. © 2016

  10. [Prediction of the total Japanese cedar pollen counts based on male flower-setting conditions of standard trees].

    Science.gov (United States)

    Yuta, Atsushi; Ukai, Kotaro; Sakakura, Yasuo; Tani, Hideshi; Matsuda, Fukiko; Yang, Tian-qun; Majima, Yuichi

    2002-07-01

    We made a prediction of Japanese cedar (Cryptomeria japonica) pollen counts at Tsu city based on the male flower-setting conditions of standard trees. Sixty-nine standard trees from 23 kinds of clones, planted at the Mie Prefecture Science and Technology Promotion Center (Hakusan, Mie) in 1964, were selected. Male flower-setting conditions for 276 faces (69 trees x 4 points of the compass) were scored from 0 to 3. The average scores and the total pollen counts from 1988 to 2000 were analyzed. The average scores from the standard trees and the total pollen counts, excluding the two mass pollen-scattering years of 1995 and 2000, showed a positive linear correlation (r = 0.914). In the mass pollen-scattering years, pollen counts were influenced by the previous year; therefore, the score of the present year minus that of the previous year was used for analysis. The average scores from the male flower-setting conditions and the pollen counts had a strong positive correlation (r = 0.994) when the scores adjusted for the previous year were analyzed. We conclude that prediction of pollen counts is possible based on the male flower-setting conditions of standard trees.

  11. Prediction of Protein-Protein Interactions Related to Protein Complexes Based on Protein Interaction Networks

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2015-01-01

    Full Text Available A method for predicting protein-protein interactions based on detected protein complexes is proposed to repair deficient interactions derived from high-throughput biological experiments. Protein complexes are pruned and decomposed into small parts based on the adaptive k-cores method to predict protein-protein interactions associated with the complexes. The proposed method is adaptive to protein complexes with different structure, number, and size of nodes in a protein-protein interaction network. Based on different complex sets detected by various algorithms, we can obtain different prediction sets of protein-protein interactions. The reliability of the predicted interaction sets is proved by using estimations with statistical tests and direct confirmation of the biological data. In comparison with the approaches which predict the interactions based on the cliques, the overlap of the predictions is small. Similarly, the overlaps among the predicted sets of interactions derived from various complex sets are also small. Thus, every predicted set of interactions may complement and improve the quality of the original network data. Meanwhile, the predictions from the proposed method replenish protein-protein interactions associated with protein complexes using only the network topology.
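
    A simplified sketch of the pruning step is shown below: a detected complex is reduced to its k-core with networkx, and the missing pairs among the retained proteins are proposed as candidate interactions; the example graph and the choice of k are invented, and the adaptive selection of k used by the authors is not reproduced.

```python
# Simplified sketch of one step of the approach: prune a detected protein complex to its
# k-core and then propose the missing pairs among the retained proteins as candidate
# interactions. The tiny example graph is made up.
import itertools
import networkx as nx

complex_graph = nx.Graph([
    ("P1", "P2"), ("P1", "P3"), ("P2", "P3"),      # a tightly connected core
    ("P2", "P4"), ("P3", "P4"),
    ("P4", "P5"),                                  # a loosely attached protein
])

core = nx.k_core(complex_graph, k=2)               # drop nodes with degree < 2 iteratively
candidates = [
    (u, v) for u, v in itertools.combinations(core.nodes, 2)
    if not core.has_edge(u, v)
]
print("core proteins:", sorted(core.nodes))
print("predicted (missing) interactions within the core:", candidates)
```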

  12. Statistical prediction of biomethane potentials based on the composition of lignocellulosic biomass

    DEFF Research Database (Denmark)

    Thomsen, Sune Tjalfe; Spliid, Henrik; Østergård, Hanne

    2014-01-01

    Mixture models are introduced as a new and stronger methodology for statistical prediction of biomethane potentials (BPM) from lignocellulosic biomass compared to the linear regression models previously used. A large dataset from literature combined with our own data were analysed using canonical...

  13. Developing an Ensemble Prediction System based on COSMO-DE

    Science.gov (United States)

    Theis, S.; Gebhardt, C.; Buchhold, M.; Ben Bouallègue, Z.; Ohl, R.; Paulat, M.; Peralta, C.

    2010-09-01

    The numerical weather prediction model COSMO-DE is a configuration of the COSMO model with a horizontal grid size of 2.8 km. It has been running operationally at DWD since 2007, it covers the area of Germany and produces forecasts with a lead time of 0-21 hours. The model COSMO-DE is convection-permitting, which means that it does without a parametrisation of deep convection and simulates deep convection explicitly. One aim is an improved forecast of convective heavy rain events. Convection-permitting models are in operational use at several weather services, but currently not in ensemble mode. It is expected that an ensemble system could reveal the advantages of a convection-permitting model even better. The probabilistic approach is necessary, because the explicit simulation of convective processes for more than a few hours cannot be viewed as a deterministic forecast anymore. This is due to the chaotic behaviour and short life cycle of the processes which are simulated explicitly now. In the framework of the project COSMO-DE-EPS, DWD is developing and implementing an ensemble prediction system (EPS) for the model COSMO-DE. The project COSMO-DE-EPS comprises the generation of ensemble members, as well as the verification and visualization of the ensemble forecasts and also statistical postprocessing. A pre-operational mode of the EPS with 20 ensemble members is foreseen to start in 2010. Operational use is envisaged to start in 2012, after an upgrade to 40 members and inclusion of statistical postprocessing. The presentation introduces the project COSMO-DE-EPS and describes the design of the ensemble as it is planned for the pre-operational mode. In particular, the currently implemented method for the generation of ensemble members will be explained and discussed. The method includes variations of initial conditions, lateral boundary conditions, and model physics. At present, pragmatic methods are applied which resemble the basic ideas of a multi-model approach

  14. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    Science.gov (United States)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  15. Predictive Software Measures based on Z Specifications - A Case Study

    Directory of Open Access Journals (Sweden)

    Andreas Bollin

    2012-07-01

    Full Text Available Estimating the effort and quality of a system is a critical step at the beginning of every software project. It is necessary to have reliable ways of calculating these measures, and it is even better when the calculation can be done as early as possible in the development life-cycle. With this in mind, metrics for formal specifications are examined with a view to their correlation with complexity- and quality-based code measures. A case study, based on a Z specification and its implementation in ADA, analyzes the practicability of these metrics as predictors.

  16. Cell Based GIS as Cellular Automata for Disaster Spreading Predictions and Required Data Systems

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-03-01

    Full Text Available A method for prediction and simulation based on a cell-based Geographic Information System (GIS) as Cellular Automata (CA) is proposed, together with the required data systems, in particular the use of metasearch engines in a unified way. It is confirmed that the proposed cell-based GIS as CA makes flexible use of the attribute information attached to each cell, in concert with location information, and works for disaster-spreading simulation and prediction.
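
    The sketch below gives a minimal cell-based CA in which each cell carries an attribute (a per-cell spread probability) alongside its grid location, in the spirit of attaching attribute information to GIS cells; the grid size, probabilities and spreading rule are invented for illustration.

```python
# Minimal sketch of a cell-based cellular automaton for disaster (e.g. fire) spreading,
# where each cell carries an attribute (here a spread probability) alongside its location,
# in the spirit of attaching attribute information to GIS cells. All values are invented.
import numpy as np

rng = np.random.default_rng(7)
size = 50
spread_prob = rng.uniform(0.1, 0.6, size=(size, size))   # per-cell attribute, e.g. flammability
state = np.zeros((size, size), dtype=int)                 # 0 = untouched, 1 = burning
state[size // 2, size // 2] = 1                           # ignition point

def step(state, spread_prob):
    """One CA update: an untouched cell ignites with its own probability if a neighbour burns."""
    burning_neighbour = np.zeros_like(state, dtype=bool)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        burning_neighbour |= np.roll(state, (dr, dc), axis=(0, 1)) == 1
    ignite = (state == 0) & burning_neighbour & (rng.random(state.shape) < spread_prob)
    new_state = state.copy()
    new_state[ignite] = 1
    return new_state

for _ in range(20):
    state = step(state, spread_prob)
print(f"cells affected after 20 steps: {int(state.sum())}")
```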

  17. Predictive Accuracy of a Cardiovascular Disease Risk Prediction Model in Rural South India – A Community Based Retrospective Cohort Study

    Directory of Open Access Journals (Sweden)

    Farah N Fathima

    2015-03-01

    Full Text Available Background: Identification of individuals at risk of developing cardiovascular diseases by risk stratification is the first step in primary prevention. Aims & Objectives: To assess the five year risk of developing a cardiovascular event from retrospective data and to assess the predictive accuracy of the non laboratory based National Health and Nutrition Examination Survey (NHANES) risk prediction model among individuals in a rural South Indian population. Materials & Methods: A community based retrospective cohort study was conducted in three villages where risk stratification was done for all eligible adults aged between 35-74 years at the time of initial assessment using the NHANES risk prediction charts. Household visits were made after a period of five years by trained doctors to determine cardiovascular outcomes. Results: 521 people fulfilled the eligibility criteria, of whom 486 (93.3%) could be traced after five years. 56.8% were in low risk, 36.6% were in moderate risk and 6.6% were in high risk categories. 29 persons (5.97%) had had cardiovascular events over the last five years, of which 24 events (82.7%) were nonfatal and five (17.25%) were fatal. The mean age of the people who developed cardiovascular events was 57.24 ± 9.09 years. The odds ratios for the three levels of risk showed a linear trend, with the odds ratios for the moderate risk and high risk categories being 1.35 and 1.94 respectively, with the low risk category as baseline. Conclusion: The non laboratory based NHANES charts did not accurately predict the occurrence of cardiovascular events in any of the risk categories.

  18. Image-Based Visual Servoing for Manipulation Via Predictive Control – A Survey of Some Results

    Directory of Open Access Journals (Sweden)

    Corneliu Lazăr

    2016-09-01

    Full Text Available In this paper, a review of predictive control algorithms developed by the authors for visual servoing of robots in manipulation applications is presented. Using these algorithms, a predictive control framework was created for image-based visual servoing (IBVS) systems. Firstly, considering point features, in 2008 we introduced an internal model predictor based on the interaction matrix. Secondly, distinct from the set-point trajectory, in 2011 we introduced the reference trajectory, using the concept from predictive control. Finally, by minimizing a sum of squares of predicted errors, the optimal input trajectory was obtained. The new concept of predictive control for IBVS systems was employed to develop a cascade structure for motion control of robot arms. Simulation results obtained with a simulator for predictive IBVS systems are also presented.

  19. Prediction degradation trend of nuclear equipment based on GM (1, 1)-Markov chain

    International Nuclear Information System (INIS)

    Zhang Liming; Zhao Xinwen; Cai Qi; Wu Guangjiang

    2010-01-01

    Degradation trend predictions are important references for nuclear equipment in-service inspection and maintenance planning. However, it is difficult to predict the degradation trend of nuclear equipment accurately with traditional statistical probability methods because of small samples, a lack of degradation data and the fluctuating degradation locus. Therefore, a method for equipment degradation trend prediction based on a GM (1, 1)-Markov chain is proposed in this paper. The method, which combines the advantages of the GM (1, 1) model and the Markov chain, can improve the precision of nuclear equipment degradation trend prediction. Degradation data were collected as samples and the degradation trend of a canned motor pump was accurately predicted. Compared with the prediction results of the GM (1, 1) method alone, the prediction based on the GM (1, 1)-Markov chain is more accurate. (authors)
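
    The grey-model half of the approach is sketched below: a GM (1, 1) model is fitted to a short, invented degradation series and extrapolated a few steps ahead; the Markov-chain correction of the residual states described in the paper is omitted.

```python
# Sketch of the GM(1,1) grey forecasting step used as the base of the GM(1,1)-Markov
# approach (the Markov-chain correction of the residual states is omitted here).
# The degradation series below is invented for illustration.
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Fit a GM(1,1) model to series x0 and return fitted values plus `horizon` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                  # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # develop coefficient a, grey input b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse AGO
    return x0_hat

degradation = [2.1, 2.4, 2.9, 3.5, 4.3, 5.2]            # hypothetical degradation indicator
print(gm11_forecast(degradation).round(2))
```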

  20. Predicting nature-based tourist roles: a life span perspective

    Science.gov (United States)

    James J. Murdy; Heather J. Gibson; Andrew Yiannakis

    2003-01-01

    The concept of stable, clearly identifiable patterns of tourist behavior, or roles, is a relatively recent development. Yiannakis and Gibson (1988, 1992) identified fifteen tourist roles based on leisure travelers' vacation behaviors. Building on this work, Gibson (1994) used discriminant analysis to determine the combination of needs and demographics are...

  1. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction helps immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment strategies, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that combining selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.

  2. Method of predicting the mean lung dose based on a patient's anatomy and dose-volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka, Anna, E-mail: a.zawadzka@zfm.coi.pl [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland); Nesteruk, Marta [Faculty of Physics, University of Warsaw, Warsaw (Poland); Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich (Switzerland); Brzozowska, Beata [Faculty of Physics, University of Warsaw, Warsaw (Poland); Kukołowicz, Paweł F. [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland)

    2017-04-01

    The aim of this study was to propose a method to predict the minimum achievable mean lung dose (MLD) and corresponding dosimetric parameters for organs-at-risk (OAR) based on individual patient anatomy. For each patient, the dose for 36 equidistant individual multileaf collimator shaped fields was calculated in the treatment planning system (TPS). Based on these dose matrices, the MLD for each patient was predicted by the in-house DosePredictor software, in which the solution of a system of linear equations is implemented. The software predictions were validated against 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans previously prepared for 16 patients with stage III non-small-cell lung cancer (NSCLC). For each patient, dosimetric parameters derived from the plans and the results calculated by DosePredictor were compared. The MLD, the maximum dose to the spinal cord (Dmax,cord) and the mean esophageal dose (MED) were analyzed. There was a strong correlation between the MLD calculated by DosePredictor and the values obtained in treatment plans, regardless of the technique used; the correlation coefficient was 0.96 for both the 3D-CRT and VMAT techniques. In a similar manner, MED correlations of 0.98 and 0.96 were obtained for 3D-CRT and VMAT plans, respectively. The maximum dose to the spinal cord was not predicted very well; the correlation coefficient was 0.30 and 0.61 for 3D-CRT and VMAT, respectively. The presented method allows us to predict the minimum MLD and corresponding dosimetric parameters to OARs without the necessity of plan preparation. The method can serve as a guide during the treatment planning process, for example, as initial constraints in VMAT optimization. It also allows the probability of lung pneumonitis to be predicted.

  3. Post-fire debris flow prediction in Western United States: Advancements based on a nonparametric statistical technique

    Science.gov (United States)

    Nikolopoulos, E. I.; Destro, E.; Bhuiyan, M. A. E.; Borga, M., Sr.; Anagnostou, E. N.

    2017-12-01

    Fire disasters affect modern societies at the global scale, inducing significant economic losses and human casualties. In addition to their direct impacts, they have various adverse effects on the hydrologic and geomorphologic processes of a region due to the tremendous alteration of landscape characteristics (vegetation, soil properties, etc.). As a consequence, wildfires often initiate a cascade of hazards, such as flash floods and debris flows, that magnify the overall impact on a region. Post-fire debris flows (PFDF) are one such hazard, frequently occurring in the Western United States, where wildfires are a common natural disaster. Prediction of PFDF is therefore of high importance in this region, and over the last years a number of efforts from the United States Geological Survey (USGS) and the National Weather Service (NWS) have focused on the development of early warning systems that will help mitigate PFDF risk. This work proposes a prediction framework based on a nonparametric statistical technique (random forests) that allows predicting the occurrence of PFDF at regional scale with a higher degree of accuracy than the commonly used approaches based on power-law thresholds and logistic regression procedures. The work presented is based on a recently released database from USGS that reports a total of 1500 storms that either triggered or did not trigger PFDF in a number of fire-affected catchments in the Western United States. The database includes information on storm characteristics (duration, accumulation, maximum intensity, etc.) and other auxiliary information on land surface properties (soil erodibility index, local slope, etc.). Results show that the proposed model is able to achieve a satisfactory prediction accuracy (threat score > 0.6), superior to previously published prediction frameworks, highlighting the potential of nonparametric statistical techniques for the development of PFDF prediction systems.
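
    A hedged sketch of this type of nonparametric classifier is given below: a random forest trained on storm and land-surface descriptors and scored with the threat score (critical success index); the features, triggering rule and data are synthetic, not the USGS database.

```python
# Hedged sketch of the nonparametric approach described: a random forest trained on storm
# and land-surface descriptors to classify whether a post-fire debris flow is triggered,
# evaluated with the threat score (critical success index). All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 1500
X = np.column_stack([
    rng.uniform(1, 24, n),        # storm duration (h)
    rng.uniform(2, 80, n),        # rainfall accumulation (mm)
    rng.uniform(1, 60, n),        # peak 15-min intensity (mm/h)
    rng.uniform(0, 1, n),         # soil erodibility index
    rng.uniform(5, 45, n),        # local slope (degrees)
])
# Synthetic triggering rule: intense rain on steep, erodible, burned slopes.
y = ((X[:, 2] * X[:, 3] * X[:, 4] + rng.normal(0, 40, n)) > 250).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=8, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=8).fit(X_tr, y_tr)
pred = clf.predict(X_te)

hits = np.sum((pred == 1) & (y_te == 1))
misses = np.sum((pred == 0) & (y_te == 1))
false_alarms = np.sum((pred == 1) & (y_te == 0))
threat_score = hits / (hits + misses + false_alarms)
print(f"threat score (CSI): {threat_score:.2f}")
```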

  4. Protein complex prediction based on k-connected subgraphs in protein interaction network

    OpenAIRE

    Habibi, Mahnaz; Eslahchi, Changiz; Wong, Limsoon

    2010-01-01

    Abstract Background Protein complexes play an important role in cellular mechanisms. Recently, several methods have been presented to predict protein complexes in a protein interaction network. In these methods, a protein complex is predicted as a dense subgraph of protein interactions. However, interactions data are incomplete and a protein complex does not have to be a complete or dense subgraph. Results We propose a more appropriate protein complex prediction method, CFA, that is based on ...

  5. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    To address the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. Firstly, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and residuals. Secondly, fuzzy c-means is used to cluster the decomposed components, and the deep belief network (DBN) is then used to predict them. Finally, the reconstructed ...

  6. Probability-based collaborative filtering model for predicting gene–disease associations

    OpenAIRE

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-01-01

    Background Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene–disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. Methods We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our mo...

  7. Developing a computational tool for predicting physical parameters of a typical VVER-1000 core based on artificial neural network

    International Nuclear Information System (INIS)

    Mirvakili, S.M.; Faghihi, F.; Khalafi, H.

    2012-01-01

    Highlights: ► Thermal–hydraulic parameters of a VVER-1000 core are computed based on an artificial neural network (ANN). ► Required data for ANN training are generated with a modified COBRA-EN code and linked together using MATLAB software. ► Based on the ANN method, the average and maximum temperatures of fuel and clad as well as the MDNBR of each FA are predicted. -- Abstract: The main goal of the present article is to design a computational tool to predict physical parameters of the VVER-1000 nuclear reactor core based on artificial neural network (ANN), taking into account a detailed physical model of the fuel rods and coolant channels in a fuel assembly. Predictions of thermal characteristics of fuel, clad and coolant are performed using cascade feed forward ANN based on linear fission power distribution and power peaking factors of FAs and hot channel factors (which were obtained from our previous neutronic calculations). A software package has been developed to prepare the required data for ANN training; it applies a modified COBRA-EN code for sub-channel analysis and links the codes using MATLAB. Based on the current estimation system, five main core TH parameters are predicted, which include the average and maximum temperatures of fuel and clad as well as the minimum departure from nucleate boiling ratio (MDNBR) for each FA. To obtain the best training conditions for the ANNs, a comprehensive sensitivity study was performed to examine the effects of varying the number of hidden neurons, hidden layers, transfer functions, and learning algorithms on the training and simulation results. Performance evaluation results show that the developed ANN can be trained to estimate the core TH parameters of a typical VVER-1000 reactor quickly without loss of accuracy.
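
    For orientation only, the following sketch shows how a feed-forward regressor could map assembly power peaking factors to the five thermal-hydraulic outputs listed above. It is not the authors' cascade network; the input/output relations are invented placeholders standing in for the modified COBRA-EN training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
# Inputs: e.g. FA power peaking factor and hot-channel factors (placeholder ranges).
X = rng.uniform(0.8, 1.6, size=(500, 3))
# Outputs: avg/max fuel temperature, avg/max clad temperature (in kK), MDNBR;
# the relations below are invented for the sketch, not reactor physics.
Y = np.column_stack([
    0.9 + 0.3 * X[:, 0],
    1.2 + 0.5 * X[:, 0],
    0.56 + 0.03 * X[:, 1],
    0.60 + 0.04 * X[:, 1],
    3.0 - 1.0 * X[:, 2],
]) + rng.normal(0, 0.01, size=(500, 5))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000).fit(X_tr, Y_tr)
print("test R^2 across the five outputs:", round(net.score(X_te, Y_te), 3))
```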

  8. Experimental method to predict avalanches based on neural networks

    Directory of Open Access Journals (Sweden)

    V. V. Zhdanov

    2016-01-01

    Full Text Available The article presents results of experimental use of currently available statistical methods to classify avalanche-dangerous precipitation and snowfalls in the Kishi Almaty river basin. The avalanche service of Kazakhstan uses graphical methods for the prediction of avalanches developed by I.V. Kondrashov and E.I. Kolesnikov. The main objective of this work was to develop a modern model that could be used directly at the avalanche stations. Classification of winter precipitation into dangerous and non-dangerous snowfalls was performed in two ways: with a linear discriminant function (canonical analysis) and with artificial neural networks. Observational data on weather and avalanches in the Kishi Almaty gorge were used as a training sample. Coefficients for the canonical variables were calculated with the «Statistica» software (Russian version 6.0), and the necessary formula was then constructed. The accuracy of this classification was 96%. The simulator by L.N. Yasnitsky and F.M. Cherepanov was used to train the neural networks. The trained neural network demonstrated 98% classification accuracy. The prepared statistical models are recommended for testing at the snow-avalanche stations. Results of the tests will be used to estimate model quality and readiness for operational work. In the future, we plan to apply these models to classify avalanche danger on the five-point international scale.
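
    A minimal sketch of the two classifier families mentioned in this record, a linear discriminant function and a small neural network, separating avalanche-dangerous from non-dangerous snowfalls. The meteorological features and labels are synthetic stand-ins for the Kishi Almaty observations.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Features per snowfall, e.g. new-snow depth, air temperature, wind speed, snowpack temperature.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)  # 1 = avalanche followed

for name, clf in [("discriminant function", LinearDiscriminantAnalysis()),
                  ("neural network", MLPClassifier(hidden_layer_sizes=(10,),
                                                   max_iter=2000, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
```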

  9. Multi-Sensor Based State Prediction for Personal Mobility Vehicles.

    Directory of Open Access Journals (Sweden)

    Jamilah Abdur-Rahim

    Full Text Available This paper presents a study on multi-modal human emotional state detection while riding a powered wheelchair (PMV; Personal Mobility Vehicle) in an indoor labyrinth-like environment. The study reports findings on the habituation of the human stress response during self-driving. In addition, the effects of "loss of controllability", the change in the role of the driver to a passenger, are investigated via an autonomous driving modality. The multi-modal emotional state detector sensing framework consists of four sensing devices: electroencephalograph (EEG), heart inter-beat interval (IBI), galvanic skin response (GSR) and a stressor level lever (in the case of autonomous riding). Physiological emotional state measurement characteristics are organized by time-scale, in terms of capturing slower changes (long-term) and quicker changes from moment-to-moment. Experimental results with fifteen participants regarding subjective emotional state reports and commercial software measurements validated the proposed emotional state detector. Short-term GSR and heart signal characterizations captured moment-to-moment emotional state during autonomous riding (Spearman correlation; ρ = 0.6, p < 0.001). Short-term GSR and EEG characterizations reliably captured moment-to-moment emotional state during self-driving (classification accuracy 69.7). Finally, long-term GSR and heart characterizations were confirmed to reliably capture slow changes during autonomous riding and also emotional state during the participants' resting state. The purpose of this study and the exploration of various algorithms and sensors in a structured framework is to provide a comprehensive background for multi-modal emotional state prediction experiments and/or applications. Additional discussion regarding the feasibility and utility of these concepts is given.

  10. Prediction of nucleosome positioning based on transcription factor binding sites.

    Directory of Open Access Journals (Sweden)

    Xianfu Yi

    Full Text Available BACKGROUND: The DNA of all eukaryotic organisms is packaged into nucleosomes, the basic repeating units of chromatin. The nucleosome consists of a histone octamer around which a DNA core is wrapped and the linker histone H1, which is associated with linker DNA. By altering the accessibility of DNA sequences, the nucleosome has profound effects on all DNA-dependent processes. Understanding the factors that influence nucleosome positioning is of great importance for the study of genomic control mechanisms. Transcription factors (TFs) have been suggested to play a role in nucleosome positioning in vivo. PRINCIPAL FINDINGS: Here, the minimum redundancy maximum relevance (mRMR) feature selection algorithm, the nearest neighbor algorithm (NNA), and the incremental feature selection (IFS) method were used to identify the most important TFs that either favor or inhibit nucleosome positioning by analyzing the numbers of transcription factor binding sites (TFBSs) in 53,021 nucleosomal DNA sequences and 50,299 linker DNA sequences. A total of nine important families of TFs were extracted from 35 families, and the overall prediction accuracy was 87.4% as evaluated by the jackknife cross-validation test. CONCLUSIONS: Our results are consistent with the notion that TFs are more likely to bind linker DNA sequences than the sequences in the nucleosomes. In addition, our results imply that there may be some TFs that are important for nucleosome positioning but that play an insignificant role in discriminating nucleosome-forming DNA sequences from nucleosome-inhibiting DNA sequences. The hypothesis that TFs play a role in nucleosome positioning is, thus, confirmed by the results of this study.
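
    The following sketch mimics the incremental feature selection (IFS) loop described above: features (TF-binding-site counts) are ranked, here with mutual information as a simple stand-in for mRMR, and a nearest-neighbour classifier is evaluated on successively larger top-ranked subsets. Data are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.poisson(2.0, size=(300, 35)).astype(float)   # TFBS counts for 35 TF families (synthetic)
y = (X[:, 0] - X[:, 1] + rng.normal(0, 1.0, 300) > 0).astype(int)  # 1 = nucleosomal, 0 = linker

# Rank features (mutual information used here instead of mRMR for brevity).
ranking = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

best_k, best_acc = 0, 0.0
for k in range(1, X.shape[1] + 1):                   # incremental feature selection
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                          X[:, ranking[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"best subset: top {best_k} features, accuracy {best_acc:.3f}")
```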

  11. Operational stability prediction in milling based on impact tests

    Science.gov (United States)

    Kiss, Adam K.; Hajdu, David; Bachrathy, Daniel; Stepan, Gabor

    2018-03-01

    Chatter detection is usually based on the analysis of measured signals captured during cutting processes. These techniques, however, often give ambiguous results close to the stability boundaries, which is a major limitation in industrial applications. In this paper, an experimental chatter detection method is proposed based on the system's response to perturbations during the machining process; no system parameter identification is required. The proposed method identifies the dominant characteristic multiplier of the periodic dynamical system that models the milling process. By monitoring the variation of the modulus of the largest characteristic multiplier, the stability boundary can be precisely extrapolated while the manufacturing parameters are still kept in the chatter-free region. The method is derived in detail and verified experimentally in a laboratory environment.

  12. Graph-based Geospatial Prediction and Clustering for Situation Recognition

    OpenAIRE

    Tang, Mengfan

    2017-01-01

    Big data continues to grow and diversify at an increasing pace. To understand constantly evolving situations, data is collected from various location-based sensors as well as people using effective participatory sensing. Static sensors are placed at particular locations, monitoring and measuring important variables from the environment. Additionally, people contribute data in the form of mobile streams through participatory sensing. To process such disparate data for situation recognition, we...

  13. Protein-Protein Interactions Prediction Based on Iterative Clique Extension with Gene Ontology Filtering

    Directory of Open Access Journals (Sweden)

    Lei Yang

    2014-01-01

    Full Text Available Cliques (maximal complete subgraphs) in a protein-protein interaction (PPI) network are an important resource used to analyze protein complexes and functional modules. Clique-based methods of predicting PPIs compensate for the incompleteness of data from biological experiments. However, clique-based prediction methods depend only on the topology of the network. The false-positive and false-negative interactions in a network usually interfere with prediction. Therefore, we propose a method combining clique-based prediction and gene ontology (GO) annotations to overcome this shortcoming and improve the accuracy of predictions. According to different GO correcting rules, we generate two predicted interaction sets which guarantee the quality and quantity of predicted protein interactions. The proposed method is applied to the PPI network from the Database of Interacting Proteins (DIP), and most of the predicted interactions are verified by another biological database, BioGRID. The predicted protein interactions are appended to the original protein network, which leads to clique extension and demonstrates their biological significance.
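
    A toy sketch of the general idea of clique-based interaction prediction with a GO filter (not the paper's exact correcting rules): a protein adjacent to all but one member of a maximal clique is proposed to interact with the missing member, provided the two proteins share at least one GO annotation. The graph and annotations are invented.

```python
import networkx as nx

# Toy PPI network and GO annotations (invented).
ppi = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("A", "D")])
go = {"A": {"GO:0006412"}, "B": {"GO:0006412"}, "C": {"GO:0006412"},
      "D": {"GO:0006412", "GO:0008150"}}

predicted = set()
for clique in nx.find_cliques(ppi):                 # maximal cliques
    members = set(clique)
    for node in set(ppi) - members:
        missing = [m for m in members if not ppi.has_edge(node, m)]
        # Node is adjacent to all clique members but one: propose the missing edge,
        # but only if the two proteins share at least one GO annotation.
        if len(missing) == 1 and go[node] & go[missing[0]]:
            predicted.add(frozenset((node, missing[0])))

print(predicted)   # e.g. {frozenset({'B', 'D'})}
```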

  14. Thematic and spatial resolutions affect model-based predictions of tree species distribution.

    Science.gov (United States)

    Liang, Yu; He, Hong S; Fraser, Jacob S; Wu, ZhiWei

    2013-01-01

    Subjective decisions of thematic and spatial resolutions in characterizing environmental heterogeneity may affect the characterizations of spatial pattern and the simulation of occurrence and rate of ecological processes, and in turn, model-based tree species distribution. Thus, this study quantified the importance of thematic and spatial resolutions, and their interaction in predictions of tree species distribution (quantified by species abundance). We investigated how model-predicted species abundances changed and whether tree species with different ecological traits (e.g., seed dispersal distance, competitive capacity) had different responses to varying thematic and spatial resolutions. We used the LANDIS forest landscape model to predict tree species distribution at the landscape scale and designed a series of scenarios with different thematic (different numbers of land types) and spatial resolutions combinations, and then statistically examined the differences of species abundance among these scenarios. Results showed that both thematic and spatial resolutions affected model-based predictions of species distribution, but thematic resolution had a greater effect. Species ecological traits affected the predictions. For species with moderate dispersal distance and relatively abundant seed sources, predicted abundance increased as thematic resolution increased. However, for species with long seeding distance or high shade tolerance, thematic resolution had an inverse effect on predicted abundance. When seed sources and dispersal distance were not limiting, the predicted species abundance increased with spatial resolution and vice versa. Results from this study may provide insights into the choice of thematic and spatial resolutions for model-based predictions of tree species distribution.

  15. Prediction of equibiaxial loading stress in collagen-based extracellular matrix using a three-dimensional unit cell model.

    Science.gov (United States)

    Susilo, Monica E; Bell, Brett J; Roeder, Blayne A; Voytik-Harbin, Sherry L; Kokini, Klod; Nauman, Eric A

    2013-03-01

    Mechanical signals are important factors in determining cell fate. Therefore, insights as to how mechanical signals are transferred between the cell and its surrounding three-dimensional collagen fibril network will provide a basis for designing the optimum extracellular matrix (ECM) microenvironment for tissue regeneration. Previously we described a cellular solid model to predict fibril microstructure-mechanical relationships of reconstituted collagen matrices due to unidirectional loads (Acta Biomater 2010;6:1471-86). The model consisted of representative volume elements made up of an interconnected network of flexible struts. The present study extends this work by adapting the model to account for microstructural anisotropy of the collagen fibrils and a biaxial loading environment. The model was calibrated based on uniaxial tensile data and used to predict the equibiaxial tensile stress-stretch relationship. Modifications to the model significantly improved its predictive capacity for equibiaxial loading data. With a comparable fibril length (model 5.9-8μm, measured 7.5μm) and appropriate fibril anisotropy the anisotropic model provides a better representation of the collagen fibril microstructure. Such models are important tools for tissue engineering because they facilitate prediction of microstructure-mechanical relationships for collagen matrices over a wide range of microstructures and provide a framework for predicting cell-ECM interactions. Copyright © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  16. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    Science.gov (United States)

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

    Conceptually and methodologically distinct models exist for assessing quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently-rated attachment narrative representations and peer nominations. Results indicated that Attachment theory-based and Social Learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  17. A voxel-based finite element model for the prediction of bladder deformation

    Energy Technology Data Exchange (ETDEWEB)

    Xiangfei, Chai; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan [Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands); Radiation Oncology Department, Netherlands Cancer Institute, 1066 CX Amsterdam (Netherlands); Radiation Oncology Department, Academic Medical Center, University of Amsterdam, 1105 AZ Amsterdam (Netherlands)

    2012-01-15

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  18. A voxel-based finite element model for the prediction of bladder deformation

    International Nuclear Information System (INIS)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-01

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to

  19. Development of a Late-Life Dementia Prediction Index with Supervised Machine Learning in the Population-Based CAIDE Study.

    Science.gov (United States)

    Pekkala, Timo; Hall, Anette; Lötjönen, Jyrki; Mattila, Jussi; Soininen, Hilkka; Ngandu, Tiia; Laatikainen, Tiina; Kivipelto, Miia; Solomon, Alina

    2017-01-01

    This study aimed to develop a late-life dementia prediction model using a novel validated supervised machine learning method, the Disease State Index (DSI), in the Finnish population-based CAIDE study. The CAIDE study was based on previous population-based midlife surveys. CAIDE participants were re-examined twice in late-life, and the first late-life re-examination was used as baseline for the present study. The main study population included 709 cognitively normal subjects at first re-examination who returned to the second re-examination up to 10 years later (incident dementia n = 39). An extended population (n = 1009, incident dementia 151) included non-participants/non-survivors (national registers data). DSI was used to develop a dementia index based on first re-examination assessments. Performance in predicting dementia was assessed as area under the ROC curve (AUC). AUCs for DSI were 0.79 and 0.75 for main and extended populations. Included predictors were cognition, vascular factors, age, subjective memory complaints, and APOE genotype. The supervised machine learning method performed well in identifying comprehensive profiles for predicting dementia development up to 10 years later. DSI could thus be useful for identifying individuals who are most at risk and may benefit from dementia prevention interventions.
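
    A generic sketch of the evaluation step reported above (not the Disease State Index itself): baseline predictors are combined into a risk score, here with a plain logistic-regression stand-in, and performance is summarised as the area under the ROC curve. All data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 709
X = np.column_stack([
    rng.normal(27, 3, n),            # cognitive test score
    rng.normal(140, 20, n),          # systolic blood pressure
    rng.normal(71, 4, n),            # age at baseline
    rng.integers(0, 2, n),           # subjective memory complaints
    rng.integers(0, 2, n),           # APOE e4 carrier
])
y = (rng.uniform(size=n) < 0.03 + 0.07 * X[:, 4]).astype(int)   # incident dementia (synthetic)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", round(roc_auc_score(y, proba), 2))
```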

  20. The Prediction of Fatigue Life Based on Four Point Bending Test

    NARCIS (Netherlands)

    Pramesti, F.P.; Molenaar, A.A.A.; Van de Ven, M.F.C.

    2013-01-01

    To be able to devise optimum strategies for maintenance and rehabilitation, it is essential to formulate an accurate prediction of pavement life and its maintenance needs. One of the pavement life prediction methods is based on the pavement's capability to sustain fatigue. If it were possible to

  1. Automatic stimulation of experiments and learning based on prediction failure recognition

    NARCIS (Netherlands)

    Juarez Cordova, A.G.; Kahl, B.; Henne, T.; Prassler, E.

    2009-01-01

    In this paper we focus on the task of automatically and autonomously initiating experimentation and learning based on the recognition of prediction failure. We present a mechanism that utilizes conceptual knowledge to predict the outcome of robot actions, observes their execution and indicates when

  2. The effect of using genealogy-based haplotypes for genomic prediction.

    Science.gov (United States)

    Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt

    2013-03-06

    Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method, and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
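
    As a rough sketch of the GBLUP-type model described above: assuming all haplotype-cluster covariates share one effect variance makes the BLUP of those effects equivalent to a ridge regression on the covariates, so the snippet below fits a ridge model to synthetic haplotype dosages and reports cross-validated accuracy. It is an illustration, not the study's software.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_animals, n_haplo = 500, 2000
H = rng.integers(0, 3, size=(n_animals, n_haplo)).astype(float)   # haplotype-cluster dosages
true_effects = rng.normal(0, 0.05, size=n_haplo)
y = H @ true_effects + rng.normal(0, 1.0, size=n_animals)          # phenotype = genetic value + noise

# alpha plays the role of the residual-to-effect variance ratio in GBLUP.
model = Ridge(alpha=n_haplo * 0.5)
r2 = cross_val_score(model, H, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 (proxy for prediction accuracy): {r2:.2f}")
```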

  3. An estimator-based distributed voltage-predictive control strategy for ac islanded microgrids

    DEFF Research Database (Denmark)

    Wang, Yanbo; Chen, Zhe; Wang, Xiongfei

    2015-01-01

    This paper presents an estimator-based voltage predictive control strategy for AC islanded microgrids, which is able to perform voltage control without any communication facilities. The proposed control strategy is composed of a network voltage estimator and a voltage predictive controller for each...... and has a good capability to reject uncertain perturbations of islanded microgrids....

  4. A rule-based backchannel prediction model using pitch and pause information

    NARCIS (Netherlands)

    Truong, Khiet Phuong; Poppe, Ronald Walter; Heylen, Dirk K.J.

    We manually designed rules for a backchannel (BC) prediction model based on pitch and pause information. In short, the model predicts a BC when there is a pause of a certain length that is preceded by a falling or rising pitch. This model was validated against the Dutch IFADV Corpus in a

  5. State of the art and challenges in sequence based T-cell epitope prediction

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Hoof, Ilka; Lund, Ole

    2010-01-01

    Sequence based T-cell epitope predictions have improved immensely in the last decade. From predictions of peptide binding to major histocompatibility complex molecules with moderate accuracy, limited allele coverage, and no good estimates of the other events in the antigen-processing pathway, the...

  6. Prediction of creamy mouthfeel based on texture attribute ratings of dairy desserts

    NARCIS (Netherlands)

    Weenen, H.; Jellema, R.H.; Wijk, de R.A.

    2006-01-01

    A quantitative predictive model for creamy mouthfeel in dairy desserts was developed, using PLS multivariate analysis of texture attributes. Based on 40 experimental custard desserts, a good correlation was obtained between measured and predicted creamy mouthfeel ratings. The model was validated by

  7. Predictive Validity of Curriculum-Based Measures for English Learners at Varying English Proficiency Levels

    Science.gov (United States)

    Kim, Jennifer Sun; Vanderwood, Michael L.; Lee, Catherine Y.

    2016-01-01

    This study examined the predictive validity of curriculum-based measures in reading for Spanish-speaking English learners (ELs) at various levels of English proficiency. Third-grade Spanish-speaking EL students were screened during the fall using DIBELS Oral Reading Fluency (DORF) and Daze. Predictive validity was examined in relation to spring…

  8. The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory

    Science.gov (United States)

    Anil, Duygu

    2008-01-01

    In this study, the predictive power of item characteristics based on experts' predictions, for conditions in which try-out practices cannot be applied, was examined against item characteristics computed according to classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…

  9. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  10. Scoring system based on electrocardiogram features to predict the type of heart failure in patients with chronic heart failure

    Directory of Open Access Journals (Sweden)

    Hendry Purnasidha Bagaswoto

    2016-12-01

    Full Text Available ABSTRACT Heart failure is divided into heart failure with reduced ejection fraction (HFrEF) and heart failure with preserved ejection fraction (HFpEF). Additional studies are required to distinguish between these two types of HF. A previous study showed that HFrEF is less likely when ECG findings are normal. This study aims to create a scoring system based on ECG findings that predicts the type of HF. We performed a cross-sectional study analyzing ECG and echocardiographic data from 110 subjects. HFrEF was defined as an ejection fraction ≤40%. Fifty people were diagnosed with HFpEF and 60 people suffered from HFrEF. Multiple logistic regression analysis revealed certain ECG variables that were independent predictors of HFrEF, i.e., LAH, QRS duration >100 ms, RBBB, ST-T segment changes and prolongation of the QT interval. Based on ROC curve analysis, we obtained a score for HFpEF of -1 to +3, while HFrEF had a score of +4 to +6, with 76% sensitivity, 96% specificity, a 95% positive predictive value, an 80% negative predictive value and an accuracy of 86%. The scoring system derived from this study, including the presence or absence of LAH, QRS duration >100 ms, RBBB, ST-T segment changes and prolongation of the QT interval, can be used to predict the type of HF with satisfactory sensitivity and specificity.
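
    A small illustration of how such an ECG score could be coded. The point values below are placeholders chosen only so that the total spans the reported range (-1 to +6); they are not the published weights, and the cut-off of ≥ +4 for HFrEF simply mirrors the score ranges quoted above.

```python
def hf_type_score(lah, qrs_ms, rbbb, st_t_changes, long_qt):
    """Toy ECG score; the individual point values are assumptions, not the published weights."""
    score = 0
    score += 2 if lah else 0               # LAH present (assumed weight)
    score += 1 if qrs_ms > 100 else 0      # QRS duration > 100 ms
    score += 1 if rbbb else 0              # right bundle branch block
    score += 1 if st_t_changes else 0      # ST-T segment changes
    score += 1 if long_qt else -1          # prolonged QT interval (assumed weight)
    return score                           # spans -1 to +6, as in the record

def predict_hf_type(score):
    return "HFrEF" if score >= 4 else "HFpEF"

s = hf_type_score(lah=True, qrs_ms=120, rbbb=False, st_t_changes=True, long_qt=True)
print(s, predict_hf_type(s))               # 5 HFrEF
```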

  11. A Hierarchical Method for Transient Stability Prediction of Power Systems Using the Confidence of a SVM-Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Yanzhen Zhou

    2016-09-01

    Full Text Available Machine learning techniques have been widely used in transient stability prediction of power systems. When using the post-fault dynamic responses, it is difficult to draw a definite conclusion about how long the duration of response data should be in order to balance accuracy and speed. In addition, previous studies did not take the confidence level into consideration. To solve these problems, a hierarchical method for transient stability prediction based on the confidence of an ensemble classifier using multiple support vector machines (SVMs) is proposed. Firstly, multiple datasets are generated by bootstrap sampling, and features are then randomly picked to compress the datasets. Secondly, the confidence indices are defined and multiple SVMs are built based on these generated datasets. By synthesizing the probabilistic outputs of the multiple SVMs, the prediction results and confidence of the ensemble classifier are obtained. Finally, ensemble classifiers with different response times are built to construct the layers of the proposed hierarchical scheme. The simulation results show that the proposed hierarchical method can balance the accuracy and rapidity of transient stability prediction. Moreover, the hierarchical method reduces misjudgments of unstable instances and can cooperate with time-domain simulation to ensure the security and stability of power systems.
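
    A condensed sketch of one layer of the hierarchical scheme described above: bootstrap-sampled, feature-subsampled SVMs whose averaged probabilities provide both a stability prediction and a confidence value, with low-confidence cases deferred to a later layer. Thresholds and data are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 20))                       # post-fault response features (synthetic)
y = (X[:, :3].sum(axis=1) > 0).astype(int)           # 1 = stable, 0 = unstable (synthetic label)

ensemble = []
for _ in range(15):
    Xb, yb = resample(X, y, random_state=int(rng.integers(1 << 30)))   # bootstrap sample
    cols = rng.choice(X.shape[1], size=10, replace=False)              # random feature subset
    ensemble.append((SVC(probability=True).fit(Xb[:, cols], yb), cols))

# Average the probabilistic outputs of the SVMs to get prediction and confidence.
proba = np.mean([clf.predict_proba(X[:, cols])[:, 1] for clf, cols in ensemble], axis=0)
confident = np.abs(proba - 0.5) > 0.4     # otherwise defer to a later layer with longer response
print(f"{confident.mean():.0%} of cases decided at this layer")
```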

  12. Structure-based function prediction of the expanding mollusk tyrosinase family

    Science.gov (United States)

    Huang, Ronglian; Li, Li; Zhang, Guofan

    2017-11-01

    Tyrosinase (Ty) is a common enzyme found in many different animal groups. In our previous study, genome sequencing revealed that the Ty family is expanded in the Pacific oyster (Crassostrea gigas). Here, we examine the larger number of Ty family members in the Pacific oyster by high-level structure prediction to obtain more information about their function and evolution, especially their unknown role in biomineralization. We verified 12 Ty gene sequences from the Crassostrea gigas genome and the Pinctada fucata martensii transcriptome. By phylogenetic analysis of these Tys with functionally known Tys from other molluscan species, eight subgroups were identified (CgTy_s1, CgTy_s2, MolTy_s1, MolTy-s2, MolTy-s3, PinTy-s1, PinTy-s2 and PviTy). Structural data and surface pockets of the dinuclear copper center in the eight subgroups of molluscan Ty were obtained using the latest versions of online prediction servers. Structural comparison with other Ty proteins from the protein databank revealed functionally important residues (HA1, HA2, HA3, HB1, HB2, HB3, Z1-Z9) and their location within these protein structures. The structural and chemical features of these pockets, which may be related to substrate binding, showed considerable variability among mollusks, which undoubtedly defines Ty substrate binding. Finally, we discuss the potential driving forces of Ty family evolution in mollusks. Based on these observations, we conclude that the Ty family has rapidly evolved as a consequence of substrate adaptation in mollusks.

  13. Evaluation and use of in-silico structure-based epitope prediction with foot-and-mouth disease virus.

    Directory of Open Access Journals (Sweden)

    Daryl W Borley

    Full Text Available Understanding virus antigenicity is of fundamental importance for the development of better, more cross-reactive vaccines. However, as far as we are aware, no systematic work has yet been conducted using the 3D structure of a virus to identify novel epitopes. Therefore we have extended several existing structural prediction algorithms to build a method for identifying epitopes on the appropriate outer surface of intact virus capsids (which are structurally different from globular proteins in both shape and the arrangement of multiple repeated elements) and applied it here as a proof-of-principle concept to the capsid of foot-and-mouth disease virus (FMDV). We have analysed how reliably several freely available structure-based B cell epitope prediction programs can identify already known viral epitopes of FMDV in the context of the viral capsid. To do this we constructed a simple objective metric to measure the sensitivity and discrimination of such algorithms. After optimising the parameters for five methods using an independent training set, we used this measure to evaluate the methods. Individually, any one algorithm performed rather poorly (three performing better than the other two), suggesting that there may be value in developing virus-specific software. Taking a very conservative approach that requires a consensus among the three top methods predicts a number of previously described antigenic residues as potential epitopes on more than one serotype of FMDV, consistent with experimental results. The consensus results also identified novel residues as potential epitopes on more than one serotype. These include residues 190-192 of VP2 (not previously determined to be antigenic), residues 69-71 and 193-197 of VP3 spanning the pentamer-pentamer interface, and another region incorporating residues 83, 84 and 169-174 of VP1 (all only previously experimentally defined on serotype A). The computer programs needed to create a semi-automated procedure for carrying out

  14. Cloud-based adaptive exon prediction for DNA analysis.

    Science.gov (United States)

    Putluri, Srinivasareddy; Zia Ur Rahman, Md; Fathima, Shaik Yasmeen

    2018-02-01

    Cloud computing offers significant research and economic benefits to healthcare organisations. Cloud services provide a safe place for storing and managing large amounts of sensitive data. Under the conventional flow of gene information, gene sequence laboratories send out raw and inferred information via the Internet to several sequence libraries. DNA sequencing storage costs can be minimised by the use of cloud services. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, where genomic sequence information is stored and accessed for processing. True identification of exon regions in a DNA sequence is a key task in bioinformatics, which helps in disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques. Adaptive signal processing techniques were found to be promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using the variable normalised least mean square algorithm and its maximum normalised variants to reduce computational complexity. Finally, performance evaluation of the various AEPs is done based on measures such as sensitivity, specificity and precision, using various standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database.
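
    The sketch below shows a normalised least-mean-square (NLMS) update of the kind the adaptive exon predictors above are built on: a short adaptive filter predicts each sample of a numeric DNA indicator sequence from the preceding taps, and the prediction error drops over stretches with period-3 structure. The sequence is synthetic and the scoring is simplified relative to a real AEP.

```python
import numpy as np

def nlms(x, d, n_taps=12, mu=0.5, eps=1e-6):
    """Normalised LMS: adapt w so that w @ x[k-n_taps:k] tracks d[k]."""
    w = np.zeros(n_taps)
    err = np.zeros(len(d))
    for k in range(n_taps, len(d)):
        u = x[k - n_taps:k]
        err[k] = d[k] - w @ u
        w += mu * err[k] * u / (eps + u @ u)   # normalised step size
    return w, err

rng = np.random.default_rng(4)
# Numeric indicator for one nucleotide; the middle stretch has exon-like period-3 structure.
x = (rng.random(900) < 0.25).astype(float)
x[300:600] = (np.arange(300) % 3 == 0).astype(float)

w, err = nlms(x, x)                      # one-step-ahead self-prediction
print("mean squared error, exon-like region:", np.mean(err[320:600] ** 2).round(3))
print("mean squared error, background      :", np.mean(err[620:] ** 2).round(3))
```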

  15. Can Insomnia in Pregnancy Predict Postpartum Depression? A Longitudinal, Population-Based Study

    Science.gov (United States)

    Dørheim, Signe K.; Bjorvatn, Bjørn; Eberhard-Gran, Malin

    2014-01-01

    Background Insomnia and depression are strongly interrelated. This study aimed to describe changes in sleep across childbirth, and to evaluate whether insomnia in pregnancy is a predictor of postpartum depression. Methods A longitudinal, population-based study was conducted among perinatal women giving birth at Akershus University Hospital, Norway. Women received questionnaires in weeks 17 and 32 of pregnancy and eight weeks postpartum. This paper presents data from 2,088 of 4,662 women with complete data for insomnia and depression in week 32 of pregnancy and eight weeks postpartum. Sleep times, wake-up times and average sleep durations were self-reported. The Bergen Insomnia Scale (BIS) was used to measure insomnia. The Edinburgh Postnatal Depression Scale (EPDS) was used to measure depressive symptoms. Results After delivery, sleep duration was reduced by 49 minutes (to 6.5 hours), and mean sleep efficiency was reduced from 84% to 75%. However, self-reported insomnia scores (BIS) improved from 17.2 to 15.4, and the reported prevalence of insomnia decreased from 61.6% to 53.8%. High EPDS scores and anxiety in pregnancy, fear of delivery, previous depression, primiparity, and higher educational level were risk factors for both postpartum insomnia and depression. Insomnia did not predict postpartum depression in women with no prior history of depression, whereas women who recovered from depression had residual insomnia. Limitations Depression and insomnia were not verified by clinical interviews. Women with depressive symptoms were less likely to remain in the study. Conclusions Although women slept fewer hours at night after delivery compared to during late pregnancy, and reported more nights with nighttime awakenings, their self-reported insomnia scores improved, and the prevalence of insomnia according to the DSM-IV criteria decreased. Insomnia in pregnancy may be a marker for postpartum recurrence of depression among women with previous depression. PMID

  16. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2013-01-01

    Full Text Available In this paper, we propose a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset, since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced; the fitness-scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm, fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on data from 1000 corporations collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as “net income to stock broker’s equality,” “quick ratio,” “retained earnings to total assets,” “stockholders’ equity to total assets,” and “financial expenses to sales.” The total misclassification error of the proposed FSCGACA was only 7.9%, outperforming the genetic algorithm (GA), ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.

  17. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three different fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR and (ii) the best alternative strategy to GoF-optimal was the one using a prediction base of 5 years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
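
    The snippet below illustrates the idea behind a GoF-optimal style selection (using AIC as a stand-in for the paper's goodness-of-fit measure): fit the log-linear Poisson trend to every candidate base of the last m years, keep the best-fitting base, and project it 5 years ahead. The counts are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
years = np.arange(1980, 2011)
cases = np.round(200 * np.exp(0.02 * (years - 1980)) + rng.normal(0, 10, len(years)))

best = None
for m in range(5, len(years) + 1):                       # candidate base lengths (last m years)
    yr, y = years[-m:], cases[-m:]
    fit = sm.GLM(y, sm.add_constant(yr - yr[0]),
                 family=sm.families.Poisson()).fit()     # log-linear Poisson trend
    if best is None or fit.aic < best[1]:
        best = (m, fit.aic, fit)

m, _, fit = best
future = sm.add_constant(np.arange(2011, 2016) - years[-m])
print(f"selected base: last {m} years; 5-year predictions:", fit.predict(future).round())
```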

  18. Prediction of flow boiling curves based on artificial neural network

    International Nuclear Information System (INIS)

    Wu Junmei; Xi'an Jiaotong Univ., Xi'an; Su Guanghui

    2007-01-01

    The effects of the main system parameters on flow boiling curves were analyzed using an artificial neural network (ANN) based on a database selected from the 1960s. The input parameters of the ANN are system pressure, mass flow rate, inlet subcooling, wall superheat and steady/transition boiling, and the output parameter is heat flux. The results obtained by the ANN show that the heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effect on nucleate boiling curves. The transition boiling and film boiling heat fluxes increase with an increase of mass flow rate. Pressure plays a predominant role and improves heat transfer in all boiling regions except film boiling. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate one. (authors)

  19. Evaluation of Artificial Intelligence Based Models for Chemical Biodegradability Prediction

    Directory of Open Access Journals (Sweden)

    Aleksandar Sabljic

    2004-12-01

    Full Text Available This study presents a review of biodegradability modeling efforts including a detailed assessment of two models developed using an artificial intelligence based methodology. Validation results for these models using an independent, quality reviewed database demonstrate that the models perform well when compared to another commonly used biodegradability model, against the same data. The ability of models induced by an artificial intelligence methodology to accommodate complex interactions in detailed systems, and the demonstrated reliability of the approach evaluated by this study, indicate that the methodology may have application in broadening the scope of biodegradability models. Given adequate data on the biodegradability of chemicals under environmental conditions, this may allow for the development of future models that include, for example, surface interface impacts on biodegradability.

  20. Accuracy of depolarization and delay spread predictions using advanced ray-based modeling in indoor scenarios

    Directory of Open Access Journals (Sweden)

    Mani Francesco

    2011-01-01

    Full Text Available Abstract This article investigates the prediction accuracy of an advanced deterministic propagation model in terms of channel depolarization and frequency selectivity for indoor wireless propagation. In addition to specular reflection and diffraction, the developed ray tracing tool considers penetration through dielectric blocks and/or diffuse scattering mechanisms. The sensitivity and prediction accuracy analysis is based on two measurement campaigns carried out in a warehouse and an office building. It is shown that the implementation of diffuse scattering into RT significantly increases the accuracy of the cross-polar discrimination prediction, whereas the delay-spread prediction is only marginally improved.

  1. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and...

  2. Fuzzy-logic based learning style prediction in e-learning using web ...

    Indian Academy of Sciences (India)

    tion, especially in web environments and proposes to use Fuzzy rules to handle the uncertainty in .... learning in safe and supportive environment ... working of the proposed Fuzzy-logic based learning style prediction in e-learning. Section 4.

  3. Toward Hypertension Prediction Based on PPG-Derived HRV Signals: a Feasibility Study.

    Science.gov (United States)

    Lan, Kun-Chan; Raknim, Paweeya; Kao, Wei-Fong; Huang, Jyh-How

    2018-04-21

    Heart rate variability (HRV) is often used to assess the risk of cardiovascular disease, and data on this can be obtained via electrocardiography (ECG). However, collecting heart rate data via photoplethysmography (PPG) is now a lot easier. We investigate the feasibility of using the PPG-based heart rate to estimate HRV and predict diseases. We obtain three months of PPG-based heart rate data from subjects with and without hypertension, and calculate the HRV based on various forms of time and frequency domain analysis. We then apply a data mining technique to this estimated HRV data, to see if it is possible to correctly identify patients with hypertension. We use six HRV parameters to predict hypertension, and find SDNN has the best predictive power. We show that early disease prediction is possible through collecting one's PPG-based heart rate information.
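
    For reference, the time-domain HRV quantities named in this record are straightforward to compute from PPG-derived inter-beat intervals; SDNN is simply the standard deviation of the intervals. The values below are made up.

```python
import numpy as np

ibi_ms = np.array([812, 795, 830, 805, 790, 845, 820, 800], dtype=float)  # inter-beat intervals

sdnn = ibi_ms.std(ddof=1)                          # SDNN: standard deviation of the intervals
rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))     # RMSSD: root mean square of successive differences
mean_hr = 60000.0 / ibi_ms.mean()                  # mean heart rate in beats per minute
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, mean HR = {mean_hr:.1f} bpm")
```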

  4. Physics-based Modeling Tools for Life Prediction and Durability Assessment of Advanced Materials, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The technical objectives of this program are: (1) to develop a set of physics-based modeling tools to predict the initiation of hot corrosion and to address pit and...

  5. A New Approach to Fatigue Life Prediction Based on Nucleation and Growth (Preprint)

    National Research Council Canada - National Science Library

    McClung, R. C; Francis, W. L; Hudak, S. J

    2006-01-01

    Prediction of total fatigue life in components is often performed by summing "initiation" and "propagation" life phases, where initiation life is based on stress-life or strain-life methods calibrated...

  6. Microcomputer-based tests for repeated-measures: Metric properties and predictive validities

    Science.gov (United States)

    Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann

    1989-01-01

    A menu of psychomotor and mental acuity tests was refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, the tests had never all been administered together as a battery, and they had never been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.

  7. Microscopic predictions of fission yields based on the time dependent GCM formalism

    International Nuclear Information System (INIS)

    Regnier, D.; Dubray, N.; Verriere, M.; Schunck, N.

    2016-01-01

    Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data is available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time-dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). Previous studies reported promising results by numerically solving the TDGCM+GOA equation with a finite difference technique. However, the computational cost of this method makes it difficult to properly control numerical errors. In addition, it prevents one from performing calculations with more than two collective variables. To overcome these limitations, we have developed the new code FELIX-1.0 that solves the TDGCM+GOA equation based on the Galerkin finite element method. In this article, we briefly illustrate the capabilities of the solver FELIX-1.0, in particular its validation for n + 239Pu low energy induced fission. FELIX-1.0 gives full control on the numerical precision of fission product yields in neutron-induced fission, and its scalability also enables series of dynamical calculations on several potential energy surfaces. Preliminary results suggest an important sensitivity of our two-dimensional approach to the input potential energy surface.

  8. Microscopic predictions of fission yields based on the time dependent GCM formalism

    Science.gov (United States)

    Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.

    2016-03-01

    Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data is available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time-dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). Previous studies reported promising results by numerically solving the TDGCM+GOA equation with a finite difference technique. However, the computational cost of this method makes it difficult to properly control numerical errors. In addition, it prevents one from performing calculations with more than two collective variables. To overcome these limitations, we developed the new code FELIX-1.0 that solves the TDGCM+GOA equation based on the Galerkin finite element method. In this article, we briefly illustrate the capabilities of the solver FELIX-1.0, in particular its validation for n+239Pu low energy induced fission. This work is the result of a collaboration between CEA,DAM,DIF and LLNL on nuclear fission theory.

  9. An ensemble based top performing approach for NCI-DREAM drug sensitivity prediction challenge.

    Directory of Open Access Journals (Sweden)

    Qian Wan

    Full Text Available We consider the problem of predicting sensitivity of cancer cell lines to new drugs based on supervised learning on genomic profiles. The genetic and epigenetic characterization of a cell line provides observations on various aspects of regulation including DNA copy number variations, gene expression, DNA methylation and protein abundance. To extract relevant information from the various data types, we applied a random forest based approach to generate sensitivity predictions from each type of data and combined the predictions in a linear regression model to generate the final drug sensitivity prediction. Our approach when applied to the NCI-DREAM drug sensitivity prediction challenge was a top performer among 47 teams and produced high accuracy predictions. Our results show that the incorporation of multiple genomic characterizations lowered the mean and variance of the estimated bootstrap prediction error. We also applied our approach to the Cancer Cell Line Encyclopedia database for sensitivity prediction and the ability to extract the top targets of an anti-cancer drug. The results illustrate the effectiveness of our approach in predicting drug sensitivity from heterogeneous genomic datasets.
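
    A sketch of the stacking scheme described above: one random forest per genomic data type produces out-of-fold sensitivity predictions, and a linear regression combines them into the final prediction. The cell-line data are synthetic placeholders, not the NCI-DREAM or CCLE datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
n_lines = 60                                             # cell lines (synthetic)
data_types = {
    "expression": rng.normal(size=(n_lines, 200)),
    "copy_number": rng.normal(size=(n_lines, 150)),
    "methylation": rng.normal(size=(n_lines, 100)),
}
sensitivity = rng.normal(size=n_lines)                   # drug response, e.g. -log(GI50)

# First layer: out-of-fold predictions from one random forest per data type.
layer1 = np.column_stack([
    cross_val_predict(RandomForestRegressor(n_estimators=200, random_state=0),
                      X, sensitivity, cv=5)
    for X in data_types.values()
])

# Second layer: linear combination of the per-data-type predictions.
combiner = LinearRegression().fit(layer1, sensitivity)
print("combination weights:", dict(zip(data_types, combiner.coef_.round(2))))
```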

  10. Early Prediction of Disease Progression in Small Cell Lung Cancer: Toward Model-Based Personalized Medicine in Oncology.

    Science.gov (United States)

    Buil-Bruna, Núria; Sahota, Tarjinder; López-Picazo, José-María; Moreno-Jiménez, Marta; Martín-Algarra, Salvador; Ribba, Benjamin; Trocóniz, Iñaki F

    2015-06-15

    Predictive biomarkers can play a key role in individualized disease monitoring. Unfortunately, the use of biomarkers in clinical settings has thus far been limited. We have previously shown that mechanism-based pharmacokinetic/pharmacodynamic modeling enables integration of nonvalidated biomarker data to provide predictive model-based biomarkers for response classification. The biomarker model we developed incorporates an underlying latent variable (disease) representing (unobserved) tumor size dynamics, which is assumed to drive biomarker production and to be influenced by exposure to treatment. Here, we show that by integrating CT scan data, the population model can be expanded to include patient outcome. Moreover, we show that in conjunction with routine medical monitoring data, the population model can support accurate individual predictions of outcome. Our combined model predicts that a change in disease of 29.2% (relative standard error 20%) between two consecutive CT scans (i.e., 6-8 weeks) gives a probability of disease progression of 50%. We apply this framework to an external dataset containing biomarker data from 22 small cell lung cancer patients (four patients progressing during follow-up). Using only data up until the end of treatment (a total of 137 lactate dehydrogenase and 77 neuron-specific enolase observations), the statistical framework prospectively identified 75% of the individuals as having a predictable outcome in follow-up visits. This included two of the four patients who eventually progressed. In all identified individuals, the model-predicted outcomes matched the observed outcomes. This framework allows at risk patients to be identified early and therapeutic intervention/monitoring to be adjusted individually, which may improve overall patient survival. ©2015 American Association for Cancer Research.

  11. Validation of prediction model for successful vaginal birth after Cesarean delivery based on sonographic assessment of hysterotomy scar.

    Science.gov (United States)

    Baranov, A; Salvesen, K Å; Vikhareva, O

    2018-02-01

    To validate a prediction model for successful vaginal birth after Cesarean delivery (VBAC) based on sonographic assessment of the hysterotomy scar, in a Swedish population. Data were collected from a prospective cohort study. We recruited non-pregnant women aged 18-35 years who had undergone one previous low-transverse Cesarean delivery at ≥ 37 gestational weeks and had had no other uterine surgery. Participants who subsequently became pregnant underwent transvaginal ultrasound examination of the Cesarean hysterotomy scar at 11 + 0 to 13 + 6 and at 19 + 0 to 21 + 6 gestational weeks. Thickness of the myometrium at the thinnest part of the scar area was measured. After delivery, information on pregnancy outcome was retrieved from hospital records. Individual probabilities of successful VBAC were calculated using a previously published model. Predicted individual probabilities were divided into deciles. For each decile, observed VBAC rates were calculated. To assess the accuracy of the prediction model, receiver operating characteristics (ROC) curves were constructed and the areas under the curves (AUC) were calculated. Complete sonographic data were available for 120 women. Eighty (67%) women underwent trial of labor after Cesarean delivery (TOLAC) with VBAC occurring in 70 (88%) cases. The scar was visible in all 80 women at the first-trimester scan and in 54 (68%) women at the second-trimester scan. AUC was 0.44 (95% CI, 0.28-0.60) among all women who underwent TOLAC and 0.51 (95% CI, 0.32-0.71) among those with the scar visible sonographically at both ultrasound examinations. The prediction model demonstrated poor accuracy for prediction of successful VBAC in our Swedish population. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd.
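
    The validation steps reported here (applying a previously published probability model, grouping predicted probabilities into deciles, and computing the AUC) can be sketched as below; the logistic coefficients, predictor, and outcomes are placeholders, not the published model or the study data.

```python
# Placeholder model and data: bin predicted probabilities into deciles,
# compute observed VBAC rates per decile, and the ROC AUC.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 80                                          # women undergoing TOLAC in this sketch
scar_thickness_mm = rng.uniform(1.0, 6.0, n)    # hypothetical predictor
vbac = rng.integers(0, 2, n)                    # observed outcome (1 = successful VBAC)

# Stand-in for the previously published probability model (coefficients invented).
prob_vbac = 1.0 / (1.0 + np.exp(-(-0.5 + 0.4 * scar_thickness_mm)))

df = pd.DataFrame({"prob": prob_vbac, "vbac": vbac})
df["decile"] = pd.qcut(df["prob"], 10, labels=False, duplicates="drop")
observed_rate_per_decile = df.groupby("decile")["vbac"].mean()

auc = roc_auc_score(df["vbac"], df["prob"])
print(observed_rate_per_decile)
print(f"AUC = {auc:.2f}")
```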

  12. Prediction of speech intelligibility based on a correlation metric in the envelope power spectrum domain

    DEFF Research Database (Denmark)

    Relano-Iborra, Helia; May, Tobias; Zaar, Johannes

    A powerful tool to investigate speech perception is the use of speech intelligibility prediction models. Recently, a model was presented, termed the correlation-based speech-based envelope power spectrum model (sEPSMcorr) [1], based on the auditory processing of the multi-resolution speech-based Envel...

  13. Comparison of short term rainfall forecasts for model based flow prediction in urban drainage systems

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Poulsen, Troels Sander; Bøvith, Thomas

    2012-01-01

    Forecast-based flow prediction in drainage systems can be used to implement real-time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation based nowcast model and a numerical weather prediction model. The models are applied...... performance of the system is found using the radar nowcast for short lead times and the weather model for longer lead times.

  14. A rank-based Prediction Algorithm of Learning User's Intention

    Science.gov (United States)

    Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing

    Internet search has become an important part of people's daily life. People can find many types of information to meet different needs through search engines on the Internet. Current search engines have two issues: first, users must decide in advance which type of information they want and then switch to the corresponding search engine interface. Second, although most search engines support multiple search functions, each function has its own separate interface, so users who need different types of information must switch between interfaces. In practice, most queries correspond to several types of information results that can be retrieved from different search engines; for example, the query "Palace" matches websites introducing the National Palace Museum, blogs, Wikipedia entries, pictures, and video. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources (the query words, the search results, and the search history logs) in order to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective. It meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines, and improves the user's search experience.
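
    A toy sketch of the general idea of scoring and re-ranking heterogeneous result types with learned weights is given below; the features, weights, and result objects are hypothetical and much simpler than the learning procedure described in the paper.

```python
# Toy re-ranking of heterogeneous search results; weights and features are
# hypothetical stand-ins for what the paper learns from queries, results,
# and search-history logs.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    kind: str            # e.g. "web", "wiki", "image", "video"
    relevance: float     # text-match score from the underlying engine
    history_clicks: int  # past clicks on this result type for similar queries

WEIGHTS = {"relevance": 0.6, "history": 0.3, "kind_prior": 0.1}   # would be learned
KIND_PRIOR = {"wiki": 1.0, "web": 0.8, "image": 0.6, "video": 0.5}

def score(r: Result) -> float:
    return (WEIGHTS["relevance"] * r.relevance
            + WEIGHTS["history"] * min(r.history_clicks, 10) / 10
            + WEIGHTS["kind_prior"] * KIND_PRIOR.get(r.kind, 0.5))

results = [
    Result("National Palace Museum - official site", "web", 0.9, 3),
    Result("Palace (Wikipedia)", "wiki", 0.8, 5),
    Result("Palace photo gallery", "image", 0.7, 1),
]
for r in sorted(results, key=score, reverse=True):   # aggregate into one ranked list
    print(f"{score(r):.2f}  {r.kind:<6} {r.title}")
```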

  15. Predicting Geotechnical Investigation Using the Knowledge Based System

    Directory of Open Access Journals (Sweden)

    Bojan Žlender

    2016-01-01

    Full Text Available The purpose of this paper is to evaluate the optimal number of investigation points and of each field test and laboratory test for a proper description of a building site. These optimal numbers are defined based on their minimum and maximum number and on the equivalent investigation ratio. The total increments of the minimum and maximum number of investigation points for different building site conditions were determined. To facilitate the decision-making process for the number of investigation points, an Adaptive Network Fuzzy Inference System (ANFIS) was proposed. The obtained fuzzy inference system considers the influence of several entry parameters and computes the equivalent investigation ratio. The developed model (ANFIS-SI) can be applied to characterize any building site. The ANFIS-SI model takes into account project factors which are evaluated with a rating from 1 to 10. The model ANFIS-SI, with integrated recommendations, can be used as a systematic decision support tool for engineers to evaluate the number of investigation points, field tests, and laboratory tests for a proper description of a building site. The determination of the optimal number of investigation points and of each field test and laboratory test is presented for a reference case.

  16. Screw Remaining Life Prediction Based on Quantum Genetic Algorithm and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Xiaochen Zhang

    2017-01-01

    Full Text Available To predict the remaining life of a ball screw, a screw remaining life prediction method based on a quantum genetic algorithm (QGA) and a support vector machine (SVM) is proposed. A screw accelerated test bench is introduced. Accelerometers are installed to monitor the performance degradation of the ball screw. Combined with wavelet packet decomposition and isometric mapping (Isomap), the sensitive feature vectors are obtained and stored in a database. Meanwhile, the sensitive feature vectors are randomly chosen from the database and constitute training samples and testing samples. Then the optimal kernel function parameter and penalty factor of the SVM are searched with the QGA. Finally, the training samples are used to train the optimized SVM, while the testing samples are adopted to test the prediction accuracy of the trained SVM, so that the screw remaining life prediction model can be obtained. The experiment results show that the screw remaining life prediction model can effectively predict the screw remaining life.
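
    The SVM stage of this method can be sketched as follows, with an ordinary grid search standing in for the quantum genetic algorithm that the paper uses to tune the penalty factor and kernel parameter; the feature vectors and life values are synthetic stand-ins for the wavelet-packet/Isomap features.

```python
# SVM regression for remaining life with a grid search standing in for the
# quantum genetic algorithm; features and life values are synthetic.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(2)
features = rng.normal(size=(200, 8))               # degradation-sensitive feature vectors
remaining_life = rng.uniform(100, 1000, size=200)  # remaining life in hours, synthetic

X_train, X_test, y_train, y_test = train_test_split(
    features, remaining_life, test_size=0.25, random_state=0)

# Search over the penalty factor C and RBF kernel parameter gamma
# (the quantities the paper optimizes with the QGA).
param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

predicted_life = search.best_estimator_.predict(X_test)
print("best parameters:", search.best_params_)
```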

  17. Predictor-Year Subspace Clustering Based Ensemble Prediction of Indian Summer Monsoon

    Directory of Open Access Journals (Sweden)

    Moumita Saha

    2016-01-01

    Full Text Available Forecasting the Indian summer monsoon is a challenging task due to its complex and nonlinear behavior. A large number of global climatic variables with varying interaction patterns over the years influence the monsoon. Various statistical and neural prediction models have been proposed for forecasting the monsoon, but many of them fail to capture variability over the years. The skill of the predictor variables of the monsoon also evolves over time. In this article, we propose a joint-clustering of monsoon years and predictors for understanding and predicting the monsoon. This is achieved by a subspace clustering algorithm. It groups the years based on the prevailing global climatic conditions using a statistical clustering technique and subsequently, for each such group, identifies the significant climatic predictor variables which assist in better prediction. A prediction model is built for each cluster using a random forest of regression trees. Prediction of the aggregate and regional monsoon is attempted. A mean absolute error of 5.2% is obtained for forecasting the aggregate Indian summer monsoon. Errors in predicting the regional monsoons are also acceptable in comparison to the high variation of regional precipitation. The proposed joint-clustering based ensemble model is observed to be superior to existing monsoon prediction models, and it also surpasses general non-clustering based prediction models.
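
    A much-simplified sketch of the joint idea, clustering years by climatic state (k-means here as a stand-in for the paper's subspace clustering) and fitting one random forest per cluster, is given below with synthetic predictors and rainfall values.

```python
# Cluster years by climatic state, then fit one regression forest per cluster;
# k-means stands in for subspace clustering, and all values are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n_years, n_predictors = 60, 15
climate_predictors = rng.normal(size=(n_years, n_predictors))   # e.g. SST and pressure indices
monsoon_rainfall = rng.normal(loc=890, scale=80, size=n_years)  # seasonal rainfall (mm), synthetic

# Step 1: group years with similar prevailing climatic conditions.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(climate_predictors)

# Step 2: one random-forest regression model per group of years.
models = {}
for c in np.unique(kmeans.labels_):
    idx = kmeans.labels_ == c
    models[c] = RandomForestRegressor(n_estimators=300, random_state=0).fit(
        climate_predictors[idx], monsoon_rainfall[idx])

# Forecast for a new year: assign it to a cluster, then use that cluster's model.
new_year = rng.normal(size=(1, n_predictors))
cluster = kmeans.predict(new_year)[0]
forecast = models[cluster].predict(new_year)[0]
print(f"cluster {cluster}, forecast {forecast:.0f} mm")
```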

  18. A class-based link prediction using Distance Dependent Chinese Restaurant Process

    Science.gov (United States)

    Andalib, Azam; Babamir, Seyed Morteza

    2016-08-01

    One of the important tasks in relational data analysis is link prediction which has been successfully applied on many applications such as bioinformatics, information retrieval, etc. The link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on Distance Dependent Chinese Restaurant Process (DDCRP) model which enables us to utilize the information of the topological structure of the network such as shortest path and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for link prediction problem.

  19. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
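
    The binarization and rule-creation steps can be sketched with the Apriori implementation in mlxtend (a stand-in for the paper's Lattice-model/Apriori code in R); the machine-log columns and thresholds below are hypothetical.

```python
# Binarization and rule creation with mlxtend's Apriori; columns are hypothetical
# machine-log items, already encoded as 0/1 and cast to bool.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

log = pd.DataFrame({
    "vibration_high": [1, 1, 0, 1, 0, 1],
    "temp_high":      [1, 1, 0, 0, 0, 1],
    "overload":       [0, 1, 0, 1, 0, 1],
    "failure_motor":  [1, 1, 0, 1, 0, 1],
}).astype(bool)

# Frequent itemsets, then IF-THEN association rules filtered by confidence.
frequent = apriori(log, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```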

  20. Does teacher evaluation based on student performance predict motivation, well-being, and ill-being?

    Science.gov (United States)

    Cuevas, Ricardo; Ntoumanis, Nikos; Fernandez-Bustos, Juan G; Bartholomew, Kimberley

    2018-06-01

    This study tests an explanatory model based on self-determination theory, which posits that pressure experienced by teachers when they are evaluated based on their students' academic performance will differentially predict teacher adaptive and maladaptive motivation, well-being, and ill-being. A total of 360 Spanish physical education teachers completed a multi-scale inventory. We found support for a structural equation model that showed that perceived pressure predicted teacher autonomous motivation negatively, predicted amotivation positively, and was unrelated to controlled motivation. In addition, autonomous motivation predicted vitality positively and exhaustion negatively, whereas controlled motivation and amotivation predicted vitality negatively and exhaustion positively. Amotivation significantly mediated the relation between pressure and vitality and between pressure and exhaustion. The results underline the potential negative impact of pressure felt by teachers due to this type of evaluation on teacher motivation and psychological health. Copyright © 2018 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  1. Scaling-based prediction of magnetic anisotropy in grain-oriented steels

    Directory of Open Access Journals (Sweden)

    Najgebauer Mariusz

    2017-06-01

    Full Text Available The paper presents the scaling-based approach to analysis and prediction of magnetic anisotropy in grain-oriented steels. Results of the anisotropy scaling indicate the existence of two universality classes. The hybrid approach to prediction of magnetic anisotropy, combining the scaling analysis with the ODFs method, is proposed. This approach is examined in prediction of angular dependencies of magnetic induction as well as magnetization curves for the 111-35S5 steel. It is shown that it is possible to predict anisotropy of magnetic properties based on measurements in three arbitrary directions for φ = 0°, 60° and 90°. The relatively small errors between predicted and measured values of magnetic induction are obtained.

  2. Personality disorders in previously detained adolescent females: a prospective study

    NARCIS (Netherlands)

    Krabbendam, A.; Colins, O.F.; Doreleijers, T.A.H.; van der Molen, E.; Beekman, A.T.F.; Vermeiren, R.R.J.M.

    2015-01-01

    This longitudinal study investigated the predictive value of trauma and mental health problems for the development of antisocial personality disorder (ASPD) and borderline personality disorder (BPD) in previously detained women. The participants were 229 detained adolescent females who were assessed

  3. A prediction model-based algorithm for computer-assisted database screening of adverse drug reactions in the Netherlands.

    Science.gov (United States)

    Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P

    2018-02-01

    The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
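
    A hedged sketch of such a screening model, a logistic regression on the five kinds of predictors with an apparent AUC and a simple bootstrap check, is shown below; the data are synthetic and the column names only mirror the predictors listed in this record.

```python
# Logistic-regression screening model on five synthetic predictors, with an
# apparent AUC and a simple bootstrap as an internal-validation check.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 2000
X = pd.DataFrame({
    "n_reports": rng.poisson(5, n),
    "disproportionality": rng.normal(1.0, 0.5, n),
    "reports_from_hcp": rng.uniform(0, 1, n),
    "reports_from_mah": rng.uniform(0, 1, n),
    "naranjo_score": rng.integers(0, 10, n),
})
in_smpc = rng.integers(0, 2, n)   # gold standard: ADR listed in the SmPC (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X, in_smpc)
apparent_auc = roc_auc_score(in_smpc, model.predict_proba(X)[:, 1])

boot_aucs = []
for _ in range(100):                       # refit on bootstrap resamples
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X.iloc[idx], in_smpc[idx])
    boot_aucs.append(roc_auc_score(in_smpc, m.predict_proba(X)[:, 1]))
print(f"apparent AUC = {apparent_auc:.3f}, bootstrap mean = {np.mean(boot_aucs):.3f}")
```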

  4. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator to measure economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm can solve two smaller-sized quadratic programming problems instead of solving a large one as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the wavelet kernel-based primal twin support vector machine algorithm. A comparison of the mean error of economic development prediction between the wavelet kernel-based primal twin support vector machine and traditional support vector machine models, trained on samples with 3–5 dimensional input vectors, respectively, is given in this paper. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.

  5. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  6. Non-Fourier based thermal-mechanical tissue damage prediction for thermal ablation.

    Science.gov (United States)

    Li, Xin; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-01-02

    Prediction of tissue damage under thermal loads plays an important role in thermal ablation planning. A new methodology is presented in this paper by combining non-Fourier bio-heat transfer, constitutive elastic mechanics, and non-rigid motion dynamics to predict and analyze the thermal distribution, thermal-induced mechanical deformation, and thermal-mechanical damage of soft tissues under thermal loads. Simulations and comparison analysis demonstrate that the proposed methodology based on non-Fourier bio-heat transfer can account for the thermal-induced mechanical behaviors of soft tissues and predict tissue thermal damage more accurately than the classical Fourier bio-heat transfer based model.

  7. A simple, fast, and accurate thermodynamic-based approach for transfer and prediction of gas chromatography retention times between columns and instruments Part III: Retention time prediction on target column.

    Science.gov (United States)

    Hou, Siyuan; Stevenson, Keisean A J M; Harynuk, James J

    2018-03-27

    This is the third part of a three-part series of papers. In Part I, we presented a method for determining the actual effective geometry of a reference column as well as the thermodynamic-based parameters of a set of probe compounds in an in-house mixture. Part II introduced an approach for estimating the actual effective geometry of a target column by collecting retention data of the same mixture of probe compounds on the target column and using their thermodynamic parameters, acquired on the reference column, as a bridge between both systems. Part III, presented here, demonstrates the retention time transfer and prediction from the reference column to the target column using experimental data for a separate mixture of compounds. To predict the retention time of a new compound, we first estimate its thermodynamic-based parameters on the reference column (using geometric parameters determined previously). The compound's retention time on a second column (of previously determined geometry) is then predicted. The models and the associated optimization algorithms were tested using simulated and experimental data. The accuracy of predicted retention times shows that the proposed approach is simple, fast, and accurate for retention time transfer and prediction between gas chromatography columns. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Settlement Prediction of Road Soft Foundation Using a Support Vector Machine (SVM) Based on Measured Data

    Directory of Open Access Journals (Sweden)

    Yu Huiling

    2016-01-01

    Full Text Available The support vector machine (SVM) is a relatively new artificial intelligence technique which is increasingly being applied to geotechnical problems and is yielding encouraging results. SVM is a new machine learning method based on statistical learning theory. A case study based on a road foundation engineering project shows that the forecast results are in good agreement with the measured data. The SVM model is also compared with a BP artificial neural network model and the traditional hyperbola method. The prediction results indicate that the SVM model has a better prediction ability than the BP neural network model and the hyperbola method. Therefore, settlement prediction based on the SVM model can reflect the actual settlement process more correctly. The results indicate that it is effective and feasible to use this method and that the nonlinear mapping relation between foundation settlement and its influencing factors can be expressed well. It will provide a new method to predict foundation settlement.

  9. A data base approach for prediction of deforestation-induced mass wasting events

    Science.gov (United States)

    Logan, T. L.

    1981-01-01

    A major topic of concern in timber management is determining the impact of clear-cutting on slope stability. Deforestation treatments on steep mountain slopes have often resulted in a high frequency of major mass wasting events. The Geographic Information System (GIS) is a potentially useful tool for predicting the location of mass wasting sites. With a raster-based GIS, digitally encoded maps of slide hazard parameters can be overlaid and modeled to produce new maps depicting high-probability slide areas. The objective of the present investigation is to examine the raster-based information system as a tool for predicting the location of the clear-cut mountain slopes which are most likely to experience shallow soil debris avalanches. A literature overview is conducted, taking into account vegetation, roads, precipitation, soil type, slope angle and aspect, and models predicting mass soil movements. Attention is given to a data base approach and aspects of slide prediction.

  10. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that the ANN's point predictions lack reliability since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with a large number of degrees of freedom, which makes the uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
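
    A simplified stand-in for this two-stage idea is sketched below: an ensemble of small ANN flood models is built by bootstrap resampling (instead of the GA-based search used in the study), and the resulting prediction interval is summarized by its coverage and average width; all data are synthetic.

```python
# Bootstrap ensemble of small ANN flood models as a stand-in for the GA-based
# two-stage optimization; the prediction interval is summarized by coverage
# and mean width. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 300
rainfall = rng.uniform(0, 50, size=(n, 3))               # rainfall at three lags (mm)
flow = 2.0 * rainfall.sum(axis=1) + rng.normal(0, 5, n)  # river flow (m3/s), synthetic

train, valid = np.arange(200), np.arange(200, n)
ensemble = []
for k in range(30):
    idx = rng.integers(0, len(train), len(train))        # bootstrap resample of the training set
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=k)
    ensemble.append(net.fit(rainfall[train][idx], flow[train][idx]))

preds = np.array([net.predict(rainfall[valid]) for net in ensemble])
lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)  # ensemble-based prediction interval
coverage = np.mean((flow[valid] >= lower) & (flow[valid] <= upper)) * 100
print(f"coverage = {coverage:.1f}%, mean interval width = {np.mean(upper - lower):.1f} m3/s")
```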

  11. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
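
    The unscented transform itself is compact enough to sketch: sigma points of the current state estimate are propagated through a nonlinear end-of-life simulation (a toy threshold-crossing function here, not the valve model) to approximate the mean and variance of the predicted EOL.

```python
# Unscented transform: propagate sigma points of the current state estimate
# through a nonlinear EOL simulation (a toy threshold-crossing function here)
# to approximate the mean and variance of the predicted end of life.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])  # 2n + 1 sigma points
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])       # each sigma point through the simulation
    y_mean = np.dot(w_m, y)
    y_var = np.dot(w_c, (y - y_mean) ** 2)
    return y_mean, y_var

def toy_eol(state, threshold=2.0):
    """Cycles until a linearly degrading parameter crosses the failure threshold."""
    value, rate = state
    return (threshold - value) / max(rate, 1e-6)

state_mean = np.array([1.0, 0.01])            # current estimate: [parameter, degradation rate]
state_cov = np.diag([0.01**2, 0.002**2])      # its covariance
eol_mean, eol_var = unscented_transform(state_mean, state_cov, toy_eol)
print(f"predicted EOL: mean = {eol_mean:.1f} cycles, std = {np.sqrt(eol_var):.1f}")
```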

  12. Development of a noise prediction model based on advanced fuzzy approaches in typical industrial workrooms.

    Science.gov (United States)

    Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir

    2014-01-01

    Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. Multiple regression technique was also employed and its results were compared with those of fuzzy approaches. Prediction errors of all prediction models based on fuzzy approaches were within the acceptable level (lower than one dB). However, Neuro-fuzzy model (RMSE=0.53dB and R2=0.88) could slightly improve the accuracy of noise prediction compared with generate fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did regression technique. The developed models based on fuzzy approaches as useful prediction tools give professionals the opportunity to have an optimum decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.

  13. SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal

    Science.gov (United States)

    Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.

    2017-01-01

    Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758

  14. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in the CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of the signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • Proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application of battery failure prognosis.
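
    A bare-bones sketch of the constrained-filtering idea is given below: a linear Kalman filter tracks a degradation signal as level plus drift, and after each update the state is projected onto a simple inequality constraint (non-negative drift, enforced by clipping, which is the projection for a box constraint); the signal, noise levels, and failure threshold are synthetic, not the paper's battery model.

```python
# Level-plus-drift Kalman filter on a noisy degradation signal, with the state
# projected onto a simple inequality constraint (drift >= 0) after each update;
# signal, noise levels, and failure threshold are synthetic.
import numpy as np

rng = np.random.default_rng(6)
T = 100
signal = 0.02 * np.arange(T) ** 1.1 + rng.normal(0, 0.3, T)  # noisy CM signal

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # level-and-drift transition model
H = np.array([[1.0, 0.0]])               # we only observe the level
Q = np.diag([1e-4, 1e-5])                # process noise
R = np.array([[0.09]])                   # measurement noise

x, P = np.zeros(2), np.eye(2)            # state [level, drift] and covariance
for z in signal:
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)                # update
    P = (np.eye(2) - K @ H) @ P
    x[1] = max(x[1], 0.0)                              # constrain: drift cannot be negative

failure_threshold = 5.0
rul = (failure_threshold - x[0]) / max(x[1], 1e-9)     # crude RUL extrapolation
print(f"estimated RUL ≈ {rul:.0f} cycles")
```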

  15. Occupant feedback based model predictive control for thermal comfort and energy optimization: A chamber experimental evaluation

    International Nuclear Information System (INIS)

    Chen, Xiao; Wang, Qian; Srebric, Jelena

    2016-01-01

    Highlights: • This study evaluates an occupant-feedback driven Model Predictive Controller (MPC). • The MPC adjusts indoor temperature based on a dynamic thermal sensation (DTS) model. • A chamber model for predicting chamber air temperature is developed and validated. • Experiments show that MPC using DTS performs better than using Predicted Mean Vote. - Abstract: In current centralized building climate control, occupants do not have much opportunity to intervene the automated control system. This study explores the benefit of using thermal comfort feedback from occupants in the model predictive control (MPC) design based on a novel dynamic thermal sensation (DTS) model. This DTS model based MPC was evaluated in chamber experiments. A hierarchical structure for thermal control was adopted in the chamber experiments. At the high level, an MPC controller calculates the optimal supply air temperature of the chamber heating, ventilation, and air conditioning (HVAC) system, using the feedback of occupants’ votes on thermal sensation. At the low level, the actual supply air temperature is controlled by the chiller/heater using a PI control to achieve the optimal set point. This DTS-based MPC was also compared to an MPC designed based on the Predicted Mean Vote (PMV) model for thermal sensation. The experiment results demonstrated that the DTS-based MPC using occupant feedback allows significant energy saving while maintaining occupant thermal comfort compared to the PMV-based MPC.

  16. Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.

    Science.gov (United States)

    Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias

    2015-06-25

    Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.

  17. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together

  18. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work

  19. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  20. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking the degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where enough run-to-failure condition monitoring data are available have been fully researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data covering the transition from normal operation to failure. Only a certain number of condition indicators over a certain period can be used to estimate the RUL. In addition, some existing prediction methods have problems which block RUL estimation due to poor extrapolability: the predicted value converges to a certain constant or fluctuates within a certain range. Moreover, fluctuating condition features also have adverse effects on prediction. In order to solve these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. This model mainly consists of three steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that the proposed approach does not need to assume that the degradation trajectory is subject to a certain distribution. The other is that it can adapt to variations of the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated by real field data and simulation data. PMID:25806873

  1. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    Science.gov (United States)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is the key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-time prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for the SCB short-time prediction experiment, and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.

  2. Predicting consumer liking and preference based on emotional responses and sensory perception: A study with basic taste solutions.

    Science.gov (United States)

    Samant, Shilpa S; Chapko, Matthew J; Seo, Han-Seok

    2017-10-01

    Traditional methods of sensory testing focus on capturing information about multisensory perceptions, but do not necessarily measure emotions elicited by these food and beverages. The objective of this study was to develop an optimum model of predicting overall liking (rating) and preference (choice) based on taste intensity and evoked emotions. One hundred and two participants (51 females) were asked to taste water, sucrose, citric acid, salt, and caffeine solutions. Their emotional responses toward each sample were measured by a combination of a self-reported emotion questionnaire (EsSense25), facial expressions, and autonomic nervous system (ANS) responses. In addition, their perceived intensity and overall liking were measured. After a break, participants re-tasted the samples and ranked them according to their preference. The results showed that emotional responses measured using self-reported emotion questionnaire and facial expression analysis along with perceived taste intensity performed best to predict overall liking as well as preference, while ANS measures showed limited contribution. Contrary to some previous research, this study demonstrated that not only negative emotions, but also positive ones could help predict consumer liking and preference. In addition, since there were subtle differences in the prediction models of overall liking and preference, both aspects should be taken into account to understand consumer behavior. In conclusion, combination of evoked emotions along with sensory perception could help better understand consumer acceptance as well as preference toward basic taste solutions. Published by Elsevier Ltd.

  3. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
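
    A condensed sketch of the Kalman → ARIMA → GARCH pipeline on a synthetic deformation series is shown below, using a local-level state-space model (a Kalman filter/smoother) for denoising, statsmodels for the ARIMA step, and the arch package for the GARCH step; the model orders and data are illustrative, not those of the paper.

```python
# Kalman denoising (local-level state-space model), then ARIMA, then GARCH on
# the ARIMA residuals; series and model orders are illustrative only.
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(7)
t = np.arange(500)
deformation = 0.01 * t + 2 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.8, 500)  # mm, synthetic

# 1) Kalman filtering/smoothing to suppress measurement noise.
kf = UnobservedComponents(deformation, level="local level").fit(disp=False)
denoised = kf.smoothed_state[0]

# 2) Linear recursive ARIMA model on the denoised series.
arima = ARIMA(denoised, order=(2, 1, 1)).fit()
deformation_forecast = arima.forecast(steps=5)

# 3) GARCH(1,1) on the ARIMA residuals to capture heteroscedasticity.
garch = arch_model(arima.resid, vol="GARCH", p=1, q=1, rescale=True).fit(disp="off")
variance_forecast = garch.forecast(horizon=5).variance.iloc[-1]

print(deformation_forecast)
print(variance_forecast)
```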

  4. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data

  5. Prediction of Lunar- and Martian-Based Intra- and Site-to-Site Task Performance.

    Science.gov (United States)

    Ade, Carl J; Broxterman, Ryan M; Craig, Jesse C; Schlup, Susanna J; Wilcox, Samuel L; Warren, Steve; Kuehl, Phillip; Gude, Dana; Jia, Chen; Barstow, Thomas J

    2016-04-01

    This study aimed to investigate the feasibility of determining the physiological parameters associated with the ability to complete simulated exploration type tasks at metabolic rates which might be expected for lunar and Martian ambulation. Running V̇O2max and gas exchange threshold (GET) were measured in 21 volunteers. Two simulated extravehicular activity field tests were completed in 1 G in regular athletic apparel at two intensities designed to elicit metabolic rates of ∼20.0 and ∼30.0 ml · kg(-1) · min(-1), which are similar to those previously reported for ambulation in simulated lunar- and Martian-based environments, respectively. All subjects were able to complete the field test at the lunar intensity, but 28% were unable to complete the field test at the Martian intensity (non-Finishers). During the Martian field test there were no differences in V̇O2 between Finishers and non-Finishers, but the non-Finishers achieved a greater %V̇O2max compared to Finishers (78.4 ± 4.6% vs. 64.9 ± 9.6%). Logistic regression analysis revealed fitness thresholds for a predicted probability of 0.5, at which Finishing and non-Finishing are equally likely, and 0.75, at which an individual has a 75% chance of Finishing, to be a V̇O2max of 38.4 ml · kg(-1) · min(-1) and 40.0 ml · kg(-1) · min(-1) or a GET of 20.1 ml · kg(-1) · min(-1) and 25.1 ml · kg(-1) · min(-1), respectively (χ(2) = 10.2). Logistic regression analysis also revealed that the expected %V̇O2max required to complete a field test could be used to successfully predict performance (χ(2) = 19.3). The results of the present investigation highlight the potential utility of V̇O2max, particularly as it relates to the metabolic demands of a surface ambulation, in defining successful completion of planetary-based exploration field tests.

  6. Predictive Methods for Dense Polymer Networks: Combating Bias with Bio-Based Structures

    Science.gov (United States)

    2016-03-16

    [Report documentation-page and briefing-slide residue; no abstract recovered. Recoverable details: author Andrew J. Guenthner; title "Predictive Methods for Dense Polymer Networks: Combating Bias with Bio-Based Structures"; topics include architectural bias, a comparison of petroleum-based and bio-based chemical architectures, and continuing research on structure-property relationships. PA Clearance 16152; distribution unlimited.]

  7. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2018-01-01

    Full Text Available The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies.

  8. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress

    Science.gov (United States)

    2018-01-01

    The issue of financial distress prediction plays an important and challenging research topic in the financial field. Currently, there have been many methods for predicting firm bankruptcy and financial crisis, including the artificial intelligence and the traditional statistical methods, and the past studies have shown that the prediction result of the artificial intelligence method is better than the traditional statistical method. Financial statements are quarterly reports; hence, the financial crisis of companies is seasonal time-series data, and the attribute data affecting the financial distress of companies is nonlinear and nonstationary time-series data with fluctuations. Therefore, this study employed the nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages including the following: (i) the proposed model is different from the previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress for providing references to the investors and decision makers. The result shows that the proposed method is better than the listing classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399

  9. A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress.

    Science.gov (United States)

    Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He

    2018-01-01

    The issue of financial distress prediction pl