WorldWideScience

Sample records for performance predictions based

  1. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a promising direction for developing performance prediction models is to combine the strengths of the different models to obtain better accuracy.
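Of the three model families compared above, the Markov chain approach is the simplest to sketch: pavement sections are assigned discrete condition states, and a transition matrix propagates the state distribution year by year. The states and transition probabilities below are illustrative, not taken from the paper.

```python
# Minimal sketch of a Markov-chain pavement performance model.
# States and transition probabilities are illustrative, not from the paper.

def predict_state_distribution(p0, transition, years):
    """Propagate a condition-state distribution `years` steps forward."""
    p = list(p0)
    for _ in range(years):
        p = [sum(p[i] * transition[i][j] for i in range(len(p)))
             for j in range(len(p))]
    return p

# Three condition states: good, fair, poor (deterioration only, so the
# matrix is upper triangular and "poor" is absorbing).
T = [[0.80, 0.15, 0.05],
     [0.00, 0.70, 0.30],
     [0.00, 0.00, 1.00]]

# Starting from all sections in "good", the 5-year outlook is:
p5 = predict_state_distribution([1.0, 0.0, 0.0], T, 5)
```

Calibrating such a matrix needs only repeated visual condition surveys, which is why the abstract notes the MC model works with limited data but is not tied to quantitative physical parameters.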

  2. Performance of local information-based link prediction: a sampling perspective

    Science.gov (United States)

    Zhao, Jichang; Feng, Xu; Dong, Li; Liang, Xiao; Xu, Ke

    2012-08-01

Link prediction is pervasively employed to uncover missing links in snapshots of real-world networks, which are usually obtained through different kinds of sampling methods. In the previous literature, in order to evaluate prediction performance, known edges in the sampled snapshot are divided into a training set and a probe set at random, without considering the underlying sampling approach. However, different sampling methods may lead to different missing links, especially for biased ones. For this reason, random partition-based performance evaluation is no longer convincing once the sampling method is taken into account. In this paper, we re-evaluate the performance of local information-based link prediction through a division of the training set and probe set that is governed by the sampling method. Interestingly, we find that each prediction approach performs unevenly across different sampling methods. Moreover, most of these predictors perform weakly when the sampling method is biased, which indicates that their performance may have been overestimated in prior work.
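The local-information predictors evaluated in work like this score unobserved node pairs from their neighborhoods alone; common neighbors (CN) and resource allocation (RA) are two standard examples. A minimal sketch on a toy graph (not the paper's data or its exact index set):

```python
# Local-information link prediction: score non-adjacent node pairs by
# common neighbors (CN) and resource allocation (RA). Toy graph only.
from itertools import combinations

edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)}

def neighbors(edges):
    nbr = {}
    for u, v in edges:
        nbr.setdefault(u, set()).add(v)
        nbr.setdefault(v, set()).add(u)
    return nbr

def cn_score(nbr, u, v):
    """Number of shared neighbors."""
    return len(nbr.get(u, set()) & nbr.get(v, set()))

def ra_score(nbr, u, v):
    """Shared neighbors weighted inversely by their degree."""
    return sum(1.0 / len(nbr[w]) for w in nbr.get(u, set()) & nbr.get(v, set()))

nbr = neighbors(edges)
nodes = sorted(nbr)
non_edges = [(u, v) for u, v in combinations(nodes, 2)
             if (u, v) not in edges and (v, u) not in edges]
# Top-scored non-edges are the predicted missing links.
ranked = sorted(non_edges, key=lambda p: cn_score(nbr, *p), reverse=True)
```

In an evaluation of the kind the paper criticizes, some known edges would be removed into a probe set before scoring; the paper's point is that this removal should mimic the sampling process that produced the snapshot, not a uniform random split.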

  3. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
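The AUC comparison at the heart of such a study can be computed with the rank-based (Mann-Whitney) formulation of the AUC. The outcome labels and risk scores below are synthetic placeholders, not NHATS data:

```python
# Comparing two fall-risk models by AUC (Mann-Whitney formulation).
# Labels and scores are synthetic stand-ins for the study's variables.

def auc(labels, scores):
    """P(score of a random positive > score of a random negative),
    counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Outcome: 1 = fell during follow-up. Two hypothetical risk scores.
y         = [1, 0, 1, 0, 1, 0, 0, 1]
simple    = [0.8, 0.3, 0.45, 0.4, 0.7, 0.2, 0.5, 0.9]   # self-report model
with_perf = [0.9, 0.2, 0.7, 0.3, 0.8, 0.1, 0.4, 0.85]   # + performance tests

auc_simple = auc(y, simple)
auc_perf = auc(y, with_perf)
```

With real data, the two AUCs would be compared on a held-out validation set, as the abstract describes.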

  4. Predicting Subcontractor Performance Using Web-Based Evolutionary Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Chien-Ho Ko

    2013-01-01

Full Text Available Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by the FL and NN components. EFNNs encode FL and NNs using floating-point numbers to shorten string length. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
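A minimal sketch of the multi-cut-point crossover idea on floating-point chromosomes; the chromosome contents, cut count, and seed are illustrative, not the study's actual EFNN encoding.

```python
# Multi-cut-point crossover on float-encoded chromosomes (illustrative).
import random

def multi_cut_crossover(parent_a, parent_b, cuts, rng=random):
    """Swap alternating segments between two float-encoded chromosomes."""
    assert len(parent_a) == len(parent_b)
    points = sorted(rng.sample(range(1, len(parent_a)), cuts))
    child_a, child_b = list(parent_a), list(parent_b)
    swap = False
    prev = 0
    for p in points + [len(parent_a)]:
        if swap:
            child_a[prev:p], child_b[prev:p] = child_b[prev:p], child_a[prev:p]
        swap = not swap
        prev = p
    return child_a, child_b

rng = random.Random(42)
a = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # hypothetical FL/NN parameter vector
b = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
ca, cb = multi_cut_crossover(a, b, cuts=3, rng=rng)
```

Because whole segments are exchanged, each gene stays at its own position, which is one simple way to preserve solution legality for a position-dependent parameter encoding.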

  5. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    Science.gov (United States)

    Ko, Chien-Ho

    2013-01-01

Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and to deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by the FL and NN components. EFNNs encode FL and NNs using floating-point numbers to shorten string length. A multi-cut-point crossover operator is used to explore the parameter space while retaining solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.

  6. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  7. Performance reliability prediction for thermal aging based on Kalman filtering

    International Nuclear Information System (INIS)

    Ren Shuhong; Wen Zhenhua; Xue Fei; Zhao Wensheng

    2015-01-01

The performance reliability of the nuclear power plant main pipeline that failed due to thermal aging was studied using performance degradation theory. Firstly, using data obtained from accelerated thermal aging experiments, the degradation of the impact strength and fracture toughness of the austenitic stainless steel material of the main pipeline was analyzed. A time-varying performance degradation model based on the state space method was built, and performance trends were predicted using Kalman filtering. Then, a multi-parameter, real-time performance reliability prediction model for main pipeline thermal aging was developed by considering the correlation between impact properties and fracture toughness and by using stochastic process theory. The thermal aging performance reliability and reliability life of the main pipeline were thus obtained, which provides a scientific basis for optimizing aging-maintenance decision making for nuclear power plant main pipelines. (authors)

  8. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    Science.gov (United States)

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

Predicting the future burden of cancer is a key issue for health services planning, where selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland, using simple linear and log-linear Poisson models, was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and predictions were then made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected by the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to GoF-optimal was using the last 5 years of data as the prediction base. The GoF-optimal approach can be used as a selection criterion to find an adequate prediction base. Copyright © 2015 Elsevier Ltd. All rights reserved.
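The core idea, choosing the prediction base by goodness of fit, can be sketched as follows. The log-linear fit matches one of the model forms mentioned above, but the scoring rule (mean squared log-residual) and the toy data are stand-ins for the paper's actual criterion:

```python
# Sketch: pick the base length k whose recent-history fit is best.
# Scoring rule and data are illustrative, not the paper's GoF criterion.
import math

def fit_loglinear(years, counts):
    """OLS fit of log(count) = a + b*year; returns (a, b)."""
    ly = [math.log(c) for c in counts]
    n = len(years)
    mx, my = sum(years) / n, sum(ly) / n
    b = (sum((x - mx) * (v - my) for x, v in zip(years, ly))
         / sum((x - mx) ** 2 for x in years))
    return my - b * mx, b

def best_base(years, counts, candidates):
    def score(k):  # mean squared log-residual over the last k years
        ys, cs = years[-k:], counts[-k:]
        a, b = fit_loglinear(ys, cs)
        return sum((math.log(c) - (a + b * x)) ** 2 for x, c in zip(ys, cs)) / k
    return min(candidates, key=score)

years = list(range(1990, 2007))
# Toy counts: a slow older trend followed by a steeper recent one.
counts = [100 * 1.01 ** (y - 1990) for y in years[:10]] + \
         [110 * 1.03 ** (y - 2000) for y in years[10:]]
k = best_base(years, counts, candidates=[5, 10, 17])
```

Here the short base wins because the longer bases straddle a change in trend, which is exactly the situation in which a fixed long base misleads a projection.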

  9. Performance predictions affect attentional processes of event-based prospective memory.

    Science.gov (United States)

    Rummel, Jan; Kuhlmann, Beatrice G; Touron, Dayna R

    2013-09-01

To investigate whether making performance predictions affects prospective memory (PM) processing, we asked one group of participants to predict their performance in a PM task embedded in an ongoing task and compared their performance with a control group that made no predictions. A third group gave not only PM predictions but also ongoing-task predictions. Exclusive PM predictions resulted in slower ongoing-task responding in both a nonfocal (Experiment 1) and a focal (Experiment 2) PM task. Only in the nonfocal task was the additional slowing accompanied by improved PM performance. Even in the nonfocal task, however, the correlation between ongoing-task speed and PM performance was reduced after predictions, suggesting that the slowing was not completely functional for PM. Prediction-induced changes could be avoided by asking participants to additionally predict their performance in the ongoing task. In sum, the present findings substantiate a role of metamemory in the attention-allocation strategies of PM. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Are performance-based measures predictive of work participation in patients with musculoskeletal disorders? A systematic review.

    Science.gov (United States)

    Kuijer, P P F M; Gouttebarge, V; Brouwer, S; Reneman, M F; Frings-Dresen, M H W

    2012-02-01

    Assessments of whether patients with musculoskeletal disorders (MSDs) can participate in work mainly consist of case history, physical examinations, and self-reports. Performance-based measures might add value in these assessments. This study answers the question: how well do performance-based measures predict work participation in patients with MSDs? A systematic literature search was performed to obtain longitudinal studies that used reliable performance-based measures to predict work participation in patients with MSDs. The following five sources of information were used to retrieve relevant studies: PubMed, Embase, AMA Guide to the Evaluation of Functional Ability, references of the included papers, and the expertise and personal file of the authors. A quality assessment specific for prognostic studies and an evidence synthesis were performed. Of the 1,230 retrieved studies, eighteen fulfilled the inclusion criteria. The studies included 4,113 patients, and the median follow-up period was 12 months. Twelve studies took possible confounders into account. Five studies were of good quality and thirteen of moderate quality. Two good-quality and all thirteen moderate-quality studies (83%) reported that performance-based measures were predictive of work participation. Two good-quality studies (11%) reported both an association and no association between performance-based measures and work participation. One good-quality study (6%) found no effect. A performance-based lifting test was used in fourteen studies and appeared to be predictive of work participation in thirteen studies. Strong evidence exists that a number of performance-based measures are predictive of work participation in patients with MSDs, especially lifting tests. Overall, the explained variance was modest.

  11. Enhancing performance of next generation FSO communication systems using soft computing-based predictions.

    Science.gov (United States)

    Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori

    2006-06-12

The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, thus degrading system performance. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communication systems. The SC-based tools are used to predict key parameters of an FSO communication system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams, particularly during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.

  12. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

Full Text Available The productivity of a gas well declines over its production life, eventually to the point where production can no longer cover economic constraints. To manage this, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, least square support vector machine (LSSVM) approach, adaptive neuro-fuzzy inference system (ANFIS), and decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time, as a function of the Arps decline-curve exponent and the ratio of initial gas flow rate to total gas flow rate. The results obtained from the models developed in the current study are in satisfactory agreement with actual gas well production data. Furthermore, the comparative study performed demonstrates that the LSSVM strategy is superior to the other models investigated for predicting both cumulative gas production and initial decline rate multiplied by time.
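For reference, the Arps relations that supply the decline-curve exponent and initial decline rate mentioned above can be written out directly; the parameter values here are illustrative, not from the study:

```python
# Arps decline-curve relations (hyperbolic form). Parameters illustrative.
import math

def arps_rate(qi, di, b, t):
    """Arps rate q(t); reduces to exponential decline as b -> 0."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def cumulative(qi, di, b, t):
    """Cumulative production Np(t) for hyperbolic decline (0 < b < 1)."""
    q = arps_rate(qi, di, b, t)
    return qi / ((1.0 - b) * di) * (1.0 - (q / qi) ** (1.0 - b))

qi, di, b = 1000.0, 0.10, 0.5   # initial rate, initial decline rate, exponent
q12 = arps_rate(qi, di, b, 12.0)  # rate after 12 time units
```

A data-driven model of the kind the study builds learns the mapping from (b, rate ratio) to cumulative production that these closed-form curves describe for ideal decline behaviour.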

  13. An ensemble based top performing approach for NCI-DREAM drug sensitivity prediction challenge.

    Directory of Open Access Journals (Sweden)

    Qian Wan

    Full Text Available We consider the problem of predicting sensitivity of cancer cell lines to new drugs based on supervised learning on genomic profiles. The genetic and epigenetic characterization of a cell line provides observations on various aspects of regulation including DNA copy number variations, gene expression, DNA methylation and protein abundance. To extract relevant information from the various data types, we applied a random forest based approach to generate sensitivity predictions from each type of data and combined the predictions in a linear regression model to generate the final drug sensitivity prediction. Our approach when applied to the NCI-DREAM drug sensitivity prediction challenge was a top performer among 47 teams and produced high accuracy predictions. Our results show that the incorporation of multiple genomic characterizations lowered the mean and variance of the estimated bootstrap prediction error. We also applied our approach to the Cancer Cell Line Encyclopedia database for sensitivity prediction and the ability to extract the top targets of an anti-cancer drug. The results illustrate the effectiveness of our approach in predicting drug sensitivity from heterogeneous genomic datasets.
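The combination step described above, base predictions from each data type merged by a linear regression, can be sketched with a closed-form two-predictor least-squares blend; the base predictions and drug sensitivities below are synthetic:

```python
# Blending per-data-type predictions with least-squares weights (sketch).
# Base predictions and sensitivities are synthetic, not challenge data.

def fit_weights(preds, y):
    """Closed-form least-squares weights for two base predictors."""
    p1, p2 = preds
    s11 = sum(a * a for a in p1)
    s12 = sum(a * b for a, b in zip(p1, p2))
    s22 = sum(b * b for b in p2)
    b1 = sum(a * t for a, t in zip(p1, y))
    b2 = sum(b * t for b, t in zip(p2, y))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

# Hypothetical base predictions (e.g., forests trained on expression and on
# methylation data) and the true drug sensitivities.
expr = [1.0, 2.0, 3.0, 4.0]
meth = [0.5, 0.4, 0.9, 1.1]
y    = [1.2, 2.1, 3.3, 4.4]

w1, w2 = fit_weights((expr, meth), y)
blend = [w1 * a + w2 * b for a, b in zip(expr, meth)]
```

Since either base predictor alone lies in the span of the blend, the blended squared error can never be worse than the better single predictor on the training data, which is the appeal of this simple stacking step.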

  14. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  15. Predicting Mental Imagery-Based BCI Performance from Personality, Cognitive Profile and Neurophysiological Patterns.

    Directory of Open Access Journals (Sweden)

    Camille Jeunet

Full Text Available Mental-imagery based brain-computer interfaces (MI-BCIs) allow their users to send commands to a computer using their brain activity alone (typically measured by electroencephalography, EEG), which is processed while they perform specific mental tasks. While very promising, MI-BCIs remain barely used outside laboratories because of the difficulty users encounter in controlling them. Indeed, although some users obtain good control performance after training, a substantial proportion remains unable to reliably control an MI-BCI. This large variability in user performance has led the community to look for predictors of MI-BCI control ability. However, these predictors have only been explored for motor-imagery based BCIs, and mostly for a single training session per subject. In this study, 18 participants were instructed to learn to control an EEG-based MI-BCI by performing 3 MI tasks, 2 of which were non-motor tasks, across 6 training sessions on 6 different days. Relationships between the participants' BCI control performance and their personality, cognitive profile, and neurophysiological markers were explored. While no relevant relationships with neurophysiological markers were found, strong correlations between MI-BCI performance and mental-rotation scores (reflecting spatial abilities) were revealed. A predictive model of MI-BCI performance based on psychometric questionnaire scores was also proposed. A leave-one-subject-out cross-validation process revealed the stability and reliability of this model: it predicted participants' performance with a mean error of less than 3 points. This study determined how users' profiles impact their MI-BCI control ability and thus paves the way for designing novel MI-BCI training protocols adapted to the profile of each user.

  16. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    Science.gov (United States)

    Tumac, Deniz

    2014-03-01

Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited, and relatively little research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and the deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, with cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and with the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for predicting the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for cutting the stone alone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, normal force has a weak to moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted from Shore hardness values.

  17. Robust and predictive fuzzy key performance indicators for condition-based treatment of squats in railway infrastructures

    NARCIS (Netherlands)

    Jamshidi, A.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Li, Z.

    2017-01-01

    This paper presents a condition-based treatment methodology for a type of rail surface defect called squat. The proposed methodology is based on a set of robust and predictive fuzzy key performance indicators. A fuzzy Takagi-Sugeno interval model is used to predict squat evolution for different

  18. The Chaotic Prediction for Aero-Engine Performance Parameters Based on Nonlinear PLS Regression

    Directory of Open Access Journals (Sweden)

    Chunxiao Zhang

    2012-01-01

Full Text Available The prediction of aero-engine performance parameters is very important for aero-engine condition monitoring and fault diagnosis. In this paper, the chaotic phase space of an engine exhaust temperature (EGT) time series derived from actual airborne ACARS data is reconstructed by selecting suitable nearby points. Partial least squares (PLS) regression, based on a cubic spline or kernel function transformation, is adopted to obtain a chaotic predictive function for the EGT series. The experimental results indicate that the proposed PLS chaotic prediction algorithm based on the biweight kernel function transformation has a significant advantage in overcoming multicollinearity of the independent variables and improves the stability of the regression model. Its predictive NMSE is 16.5 percent lower than that of the traditional linear least squares (OLS) method and 10.38 percent lower than that of the linear PLS approach. At the same time, bootstrap test screening shows that its forecast error is less than that of the nonlinear PLS algorithm.
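The phase-space reconstruction step mentioned above is a time-delay embedding of the scalar series; a minimal sketch (toy series, illustrative embedding dimension and delay):

```python
# Time-delay embedding for phase-space reconstruction (sketch).
# The series, embedding dimension, and delay are illustrative.

def delay_embed(series, dim, tau):
    """Return delay vectors [x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau}]."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

x = [i * 0.1 for i in range(20)]          # stand-in for an EGT series
vectors = delay_embed(x, dim=3, tau=2)
```

Prediction then proceeds by finding nearby delay vectors in this reconstructed space and regressing the next value on them, which is where the PLS step of the paper comes in.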

  19. Complexity factors and prediction of performance

    International Nuclear Information System (INIS)

    Braarud, Per Oeyvind

    1998-03-01

Understanding what makes a control room situation difficult to handle is important when studying operator performance, with respect to both predicting and improving human performance. A factor-analytic approach identified eight factors from operators' answers to a 39-item questionnaire about the complexity of the operator's task in the control room. A Complexity Profiling Questionnaire was developed, based on the factor-analytic results on the operators' conception of complexity. The validity of the identified complexity factors was studied by predicting crew performance and plant performance from ratings of the complexity of scenarios. The scenarios were rated by both process experts and the operators participating in the scenarios, using the Complexity Profiling Questionnaire. The process experts' complexity ratings predicted both crew performance and plant performance, while the operators' ratings predicted plant performance only. The results reported are from initial studies of complexity and imply a promising potential for further studies of the concept. The approach used in the study as well as the reported results are discussed. A chapter on the structure of the conception of complexity and a chapter on further research conclude the report. (author)

  20. Does teacher evaluation based on student performance predict motivation, well-being, and ill-being?

    Science.gov (United States)

    Cuevas, Ricardo; Ntoumanis, Nikos; Fernandez-Bustos, Juan G; Bartholomew, Kimberley

    2018-06-01

    This study tests an explanatory model based on self-determination theory, which posits that pressure experienced by teachers when they are evaluated based on their students' academic performance will differentially predict teacher adaptive and maladaptive motivation, well-being, and ill-being. A total of 360 Spanish physical education teachers completed a multi-scale inventory. We found support for a structural equation model that showed that perceived pressure predicted teacher autonomous motivation negatively, predicted amotivation positively, and was unrelated to controlled motivation. In addition, autonomous motivation predicted vitality positively and exhaustion negatively, whereas controlled motivation and amotivation predicted vitality negatively and exhaustion positively. Amotivation significantly mediated the relation between pressure and vitality and between pressure and exhaustion. The results underline the potential negative impact of pressure felt by teachers due to this type of evaluation on teacher motivation and psychological health. Copyright © 2018 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  1. Genomic Prediction of Barley Hybrid Performance

    Directory of Open Access Journals (Sweden)

    Norman Philipp

    2016-07-01

Full Text Available Hybrid breeding in barley (Hordeum vulgare L.) offers great opportunities to accelerate the rate of genetic improvement and to boost yield stability. A crucial requirement is the efficient selection of superior hybrid combinations. We used comprehensive phenotypic and genomic data from a commercial breeding program with the goal of examining the potential to predict hybrid performance. The phenotypic data comprised replicated grain yield trials for 385 two-way and 408 three-way hybrids evaluated in up to 47 environments. The parental lines were genotyped using a 3k single nucleotide polymorphism (SNP) array based on an Illumina Infinium assay. We implemented ridge regression best linear unbiased prediction modeling for additive and dominance effects and evaluated the prediction ability using five-fold cross-validation. The prediction ability of hybrid performance based on general combining ability (GCA) effects was moderate, amounting to 0.56 and 0.48 for two- and three-way hybrids, respectively. The potential of GCA-based hybrid prediction requires that both parental components have been evaluated in a hybrid background. This is not necessary for genomic prediction, for which we also observed moderate cross-validated prediction abilities of 0.51 and 0.58 for two- and three-way hybrids, respectively. This exemplifies the potential of genomic prediction in hybrid barley. Interestingly, prediction ability using the two-way hybrids as the training population and the three-way hybrids as the test population, or vice versa, was low, presumably because of the different genetic makeup of the parental source populations. Consequently, further research is needed to optimize genomic prediction approaches that combine different source populations in barley.
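For marker effects, ridge regression BLUP reduces to solving a ridge system (Z'Z + λI)u = Z'y, where Z holds marker genotypes. A small self-contained sketch with toy genotypes (not the study's 3k SNP data, and with an arbitrary λ rather than one derived from variance components):

```python
# Ridge-regression marker-effect estimation, RR-BLUP flavour (sketch).
# Solves (Z'Z + lam*I) u = Z'y by Gaussian elimination. Toy data only.

def ridge_effects(Z, y, lam):
    n, m = len(Z), len(Z[0])
    A = [[sum(Z[k][i] * Z[k][j] for k in range(n)) + (lam if i == j else 0.0)
          for j in range(m)] for i in range(m)]
    b = [sum(Z[k][i] * y[k] for k in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):
        u[i] = (b[i] - sum(A[i][j] * u[j] for j in range(i + 1, m))) / A[i][i]
    return u

# Hybrid genotypes coded -1/0/1 at 3 markers; phenotypes are toy yields.
Z = [[1, -1, 0], [0, 1, 1], [-1, 0, 1], [1, 1, -1]]
y = [2.0, 1.5, 0.5, 2.5]
u = ridge_effects(Z, y, lam=1.0)
pred = [sum(zi * ui for zi, ui in zip(row, u)) for row in Z]
```

In a cross-validation scheme like the study's, `u` would be estimated on training hybrids and `pred` computed for held-out hybrids from their marker genotypes alone.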

  2. Simulation-Based Performance Evaluation of Predictive-Hashing Based Multicast Authentication Protocol

    Directory of Open Access Journals (Sweden)

    Seonho Choi

    2012-12-01

Full Text Available A predictive-hashing based Denial-of-Service (DoS) resistant multicast authentication protocol was proposed based upon predictive hashing, one-way key chains, erasure codes, and distillation codes [4, 5]. It was claimed that this scheme should be more resistant to various types of DoS attacks, and its worst-case resource requirements were derived in terms of coarse-level system parameters, including CPU time for signature verification and erasure/distillation decoding operations, attack levels, etc. To show the effectiveness of our approach and to analyze exact resource requirements in various attack scenarios with different parameter settings, we designed and implemented a platform-independent attack simulator. Various attack scenarios may be created with different attack types and parameters against a receiver equipped with the predictive-hashing based protocol. The design of the simulator is explained, and the simulation results are presented with detailed resource usage statistics. In addition, the resistance level to various types of DoS attacks is formulated with a newly defined resistance metric. By comparing these results to those from another approach, PRABS [8], we show that the resistance level of our protocol is greatly enhanced even in the presence of many attack streams.
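One building block named above, the one-way key chain, is easy to sketch: keys are generated by repeated hashing and released in reverse order of generation, so any released key can be verified against the initial commitment. SHA-256 is an illustrative hash choice, not necessarily the protocol's:

```python
# One-way key chain (sketch): commit to K_0, release later keys in order,
# verify each released key by hashing it back to the commitment.
import hashlib

def make_key_chain(seed: bytes, length: int):
    """Build K_n = seed, K_{i} = H(K_{i+1}); chain[0] is the commitment K_0."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()
    return chain

def verify_key(commitment: bytes, key: bytes, index: int) -> bool:
    """A key claimed for interval `index` must hash back to the commitment."""
    h = key
    for _ in range(index):
        h = hashlib.sha256(h).digest()
    return h == commitment

chain = make_key_chain(b"secret-seed", 5)
```

Because hashing is one-way, an attacker who sees the keys released so far cannot compute future keys, while any receiver holding only the commitment can cheaply authenticate each release.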

  3. Reliable predictions of waste performance in a geologic repository

    International Nuclear Information System (INIS)

    Pigford, T.H.; Chambre, P.L.

    1985-08-01

Establishing reliable estimates of the long-term performance of a waste repository requires emphasis on valid theories to predict performance. Predicting the rates at which radionuclides are released from waste packages cannot rest upon empirical extrapolations of laboratory leach data. Reliable predictions can be based on simple bounding theoretical models, such as solubility-limited bulk flow, if the assumed parameters are reliably known or defensibly conservative. Wherever possible, performance analysis should proceed beyond simple bounding calculations to obtain more realistic, and usually more favorable, estimates of expected performance. The desire for greater realism must be balanced against increasing uncertainties in prediction and loss of reliability. Theoretical predictions of release rate based on mass-transfer analysis are bounding, and the theory can be verified. Postulated repository analogues intended to simulate laboratory leach experiments introduce arbitrary and fictitious repository parameters and are shown not to agree with well-established theory. 34 refs., 3 figs., 2 tabs

  4. Predicting Academic Performance Based on Students' Blog and Microblog Posts

    NARCIS (Netherlands)

    Dascalu, Mihai; Popescu, Elvira; Becheru, Alexandru; Crossley, Scott; Trausan-Matu, Stefan

    2016-01-01

    This study investigates the degree to which textual complexity indices applied on students’ online contributions, corroborated with a longitudinal analysis performed on their weekly posts, predict academic performance. The source of student writing consists of blog and microblog posts, created in

  5. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
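The statistic at the heart of this record, the mean magnitude of the AR poles, can be sketched in pure Python. This is a hedged illustration rather than the study's code: it fits a 2nd-order AR model via the Yule-Walker equations (the study used a 5th-order model fitted to SEMG data) so that the poles can be found with the quadratic formula:

```python
import cmath
import math

def autocorr(x, lag):
    """Normalized sample autocorrelation at the given lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)
    c = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return c / c0

def ar2_mean_pole_magnitude(x):
    """Fit an AR(2) model by Yule-Walker and return the mean pole magnitude."""
    r1, r2 = autocorr(x, 1), autocorr(x, 2)
    # Solve [[1, r1], [r1, 1]] @ [a1, a2] = [r1, r2]
    det = 1 - r1 * r1
    a1 = (r1 - r1 * r2) / det
    a2 = (r2 - r1 * r1) / det
    # The poles are the roots of z^2 - a1*z - a2 = 0
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    poles = [(a1 + disc) / 2, (a1 - disc) / 2]
    return sum(abs(p) for p in poles) / 2

# A sustained oscillation has poles near the unit circle, so the mean
# pole magnitude comes out close to 1.
signal = [math.sin(0.5 * t) for t in range(200)]
m = ar2_mean_pole_magnitude(signal)
```

As the SEMG spectrum changes with fatigue, a pole-based statistic like this shifts, which is the kind of behavior the study's regression against Rmax exploits.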

  6. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
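Two of the three performance measures used in this comparison have compact definitions. A self-contained sketch (the data are invented for illustration):

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and outcomes."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def concordance_index(y_true, y_prob):
    """Fraction of (event, non-event) pairs ranked correctly; ties count 0.5."""
    pairs = concordant = 0.0
    for i in range(len(y_true)):
        for j in range(len(y_true)):
            if y_true[i] == 1 and y_true[j] == 0:
                pairs += 1
                if y_prob[i] > y_prob[j]:
                    concordant += 1
                elif y_prob[i] == y_prob[j]:
                    concordant += 0.5
    return concordant / pairs

y = [1, 1, 0, 0, 1]
p = [0.9, 0.7, 0.3, 0.4, 0.6]
print(brier_score(y, p))        # ≈ 0.102 (lower is better)
print(concordance_index(y, p))  # 1.0 (every event ranked above every non-event)
```

The study's point can be restated in these terms: adding propensity scores to the covariate set did not lower the Brier score or raise the concordance index beyond what full covariate adjustment already achieved.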

  7. In-depth performance evaluation of PFP and ESG sequence-based function prediction methods in CAFA 2011 experiment

    Directory of Open Access Journals (Sweden)

    Chitale Meghana

    2013-02-01

Abstract Background Many Automatic Function Prediction (AFP) methods were developed to cope with an increasing growth of the number of gene sequences that are available from high throughput sequencing experiments. To support the development of AFP methods, it is essential to have community-wide experiments for evaluating the performance of existing AFP methods. Critical Assessment of Function Annotation (CAFA) is one such community experiment. The meeting of CAFA was held as a Special Interest Group (SIG) meeting at the Intelligent Systems in Molecular Biology (ISMB) conference in 2011. Here, we perform a detailed analysis of two sequence-based function prediction methods, PFP and ESG, which were developed in our lab, using the predictions submitted to CAFA. Results We evaluate PFP and ESG using four different measures in comparison with BLAST, Prior, and GOtcha. In addition to the predictions submitted to CAFA, we further investigate the performance of a different scoring function to rank-order predictions by PFP, as well as PFP/ESG predictions enriched with Priors, which simply add frequently occurring Gene Ontology terms as a part of the predictions. Prediction accuracies of each method were also evaluated separately for different functional categories. Successful and unsuccessful predictions by PFP and ESG are also discussed in comparison with BLAST. Conclusion The in-depth analysis discussed here will complement the overall assessment by the CAFA organizers. Since PFP and ESG are based on sequence database search results, our analyses are not only useful for PFP and ESG users but will also shed light on the relationship between the sequence similarity space and the functions that can be inferred from the sequences.

  8. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which mitigates the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
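The pairing of a genetic algorithm with SVR training parameters can be illustrated schematically. The sketch below is a stand-in under stated assumptions: a toy real-coded genetic algorithm minimizes a placeholder objective where the real model would plug in cross-validated SVR error over parameters such as (C, epsilon):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def cv_error(params):
    # Placeholder for the cross-validated SVR error surface; the real
    # objective would retrain SVR with these parameters. Minimum at (2, 0.5).
    c, eps = params
    return (c - 2.0) ** 2 + (eps - 0.5) ** 2

def genetic_search(fitness, bounds, pop_size=30, generations=40):
    """Minimal real-coded GA: truncation selection, blend crossover, mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            w = random.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if random.random() < 0.2:                            # mutation
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Search C in [0.1, 10] and epsilon in [0.01, 1]; converges near (2, 0.5).
best = genetic_search(cv_error, bounds=[(0.1, 10.0), (0.01, 1.0)])
```

Elitism (keeping the best half of each generation) guarantees the best candidate never degrades, which matches the paper's goal of stable parameter tuning across changing data.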

  9. Reliability residual-life prediction method for thermal aging based on performance degradation

    International Nuclear Information System (INIS)

    Ren Shuhong; Xue Fei; Yu Weiwei; Ti Wenxin; Liu Xiaotian

    2013-01-01

This paper studies the main pipeline of a nuclear power plant. The residual life of a main pipeline failing due to thermal aging has been studied using performance degradation theory and Bayesian updating methods. First, the thermal-aging degradation of the impact properties of the main pipeline's austenitic stainless steel was analyzed using accelerated thermal aging test data. Then, a thermal aging residual-life prediction model based on the impact property degradation data was built with Bayesian updating methods. Finally, these models were applied to practical situations. It is shown that the proposed methods are feasible and that the prediction accuracy meets the needs of the project. The work also provides a foundation for the scientific aging management of the main pipeline. (authors)

  10. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. 
The models performed similarly in capturing dependencies on dose and
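The accuracy metric used throughout this record, the absolute average fold error (AAFE), is 10 raised to the mean absolute log10 ratio of predicted to observed values. A minimal sketch:

```python
import math

def aafe(predicted, observed):
    """Absolute average fold error: 10 ** mean(|log10(pred / obs)|).

    An AAFE of 1.0 means perfect agreement; 2.0 means predictions are
    off by two-fold on average, in either direction.
    """
    logs = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10 ** (sum(logs) / len(logs))

print(aafe([10, 10, 10], [10, 10, 10]))  # 1.0
print(aafe([5, 20], [10, 10]))           # ≈ 2.0 (under- and over-prediction both count)
```

Because the absolute value is taken before averaging, under- and over-predictions cannot cancel, which is why AAFE is paired with a separate check for directional bias (the under-prediction trends noted above).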

  11. Proactive Supply Chain Performance Management with Predictive Analytics

    Directory of Open Access Journals (Sweden)

    Nenad Stefanovic

    2014-01-01

Today’s business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.

  12. Proactive supply chain performance management with predictive analytics.

    Science.gov (United States)

    Stefanovic, Nenad

    2014-01-01

Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.

  13. Proactive Supply Chain Performance Management with Predictive Analytics

    Science.gov (United States)

    Stefanovic, Nenad

    2014-01-01

Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration and at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605

  14. Mining Behavior Based Safety Data to Predict Safety Performance

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe

    2010-06-01

    The Idaho National Laboratory (INL) operates a behavior based safety program called Safety Observations Achieve Results (SOAR). This peer-to-peer observation program encourages employees to perform in-field observations of each other's work practices and habits (i.e., behaviors). The underlying premise of conducting these observations is that more serious accidents are prevented from occurring because lower level “at risk” behaviors are identified and corrected before they can propagate into culturally accepted “unsafe” behaviors that result in injuries or fatalities. Although the approach increases employee involvement in safety, the premise of the program has not been subject to sufficient empirical evaluation. The INL now has a significant amount of SOAR data on these lower level “at risk” behaviors. This paper describes the use of data mining techniques to analyze these data to determine whether they can predict if and when a more serious accident will occur.

  15. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision… the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually… and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  16. Performance Prediction of a MongoDB-Based Traceability System in Smart Factory Supply Chains.

    Science.gov (United States)

    Kang, Yong-Shin; Park, Il-Ha; Youm, Sekyoung

    2016-12-14

In the future, with the advent of the smart factory era, manufacturing and logistics processes will become more complex, and the complexity and criticality of traceability will further increase. This research aims at developing a performance assessment method to verify scalability when implementing traceability systems based on key technologies for smart factories, such as Internet of Things (IoT) and BigData. To this end, based on existing research, we analyzed traceability requirements and an event schema for storing traceability data in MongoDB, a document-based Not Only SQL (NoSQL) database. Next, we analyzed the algorithm of the most representative traceability query and defined a query-level performance model, which is composed of response times for the components of the traceability query algorithm. Next, this performance model was solidified as a linear regression model because a benchmark test showed that the response times increase linearly. Finally, for a case analysis, we applied the performance model to a virtual automobile parts logistics system. As a result of the case study, we verified the scalability of a MongoDB-based traceability system and predicted the point when data node servers should be expanded in this case. The traceability system performance assessment method proposed in this research can be used as a decision-making tool for hardware capacity planning during the initial stage of construction of traceability systems and during their operational phase.
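The capacity-planning use of a linear performance model can be sketched as follows. The benchmark numbers are invented for illustration; the idea is to fit query response time against load by ordinary least squares and invert the fit at a latency threshold to find the expansion point:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical benchmark: stored events (millions) vs. query response time (ms)
events = [1, 2, 4, 8, 16]
latency = [120, 180, 310, 570, 1090]

a, b = fit_line(events, latency)

# Predict the load at which the traceability query exceeds a 2-second
# target, i.e. the point at which data node servers should be expanded.
sla_ms = 2000
threshold = (sla_ms - a) / b
print(round(threshold, 1))  # ≈ 30.0 million events for this invented data
```

This mirrors the paper's workflow: benchmark, confirm linearity, fit a regression, then read off the scale at which hardware must grow.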

  17. Predicting Performance in Higher Education Using Proximal Predictors

    Science.gov (United States)

    Niessen, A. Susan M.; Meijer, Rob R.; Tendeiro, Jorge N.

    2016-01-01

    We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach, and specific skills tests in English and math. Test scores were used to predict academic achievement and progress after the first year, achievement in specific course types, enrollment, and dropout after the first year. All tests showed positive significant correlations with the criteria. The trial-studying test was consistently the best predictor in the admission procedure. We found no significant differences between the predictive validity of the trial-studying test and prior educational performance, and substantial shared explained variance between the two predictors. Only applicants with lower trial-studying scores were significantly less likely to enroll in the program. In conclusion, the trial-studying test yielded predictive validities similar to that of prior educational performance and possibly enabled self-selection. In admissions aimed at student-program fit, or in admissions in which past educational performance is difficult to use, a trial-studying test is a good instrument to predict academic performance. PMID:27073859

  18. The steady performance prediction of propeller-rudder-bulb system based on potential iterative method

    International Nuclear Information System (INIS)

    Liu, Y B; Su, Y M; Ju, L; Huang, S L

    2012-01-01

A new numerical method was developed for predicting the steady hydrodynamic performance of a propeller-rudder-bulb system. In the calculation, the rudder and bulb were treated as a whole, and the potential-based surface panel method was applied to both the propeller and the rudder-bulb system. The interaction between the propeller and the rudder-bulb was taken into account by velocity potential iteration, in which the influence of propeller rotation was considered through the average influence coefficient. In the influence coefficient computation, the singular value should be found and deleted. Numerical results showed that the presented method is effective for predicting the steady hydrodynamic performance of propeller-rudder and propeller-rudder-bulb systems. Compared with the induced-velocity iterative method, the presented method saves programming and calculation time. By varying the dimensions, the principal parameter affecting the energy-saving effect, the bulb size, was studied; the results show that the bulb on the rudder has an optimal size at the design advance coefficient.

  19. Performance Prediction of a MongoDB-Based Traceability System in Smart Factory Supply Chains

    Science.gov (United States)

    Kang, Yong-Shin; Park, Il-Ha; Youm, Sekyoung

    2016-01-01

In the future, with the advent of the smart factory era, manufacturing and logistics processes will become more complex, and the complexity and criticality of traceability will further increase. This research aims at developing a performance assessment method to verify scalability when implementing traceability systems based on key technologies for smart factories, such as Internet of Things (IoT) and BigData. To this end, based on existing research, we analyzed traceability requirements and an event schema for storing traceability data in MongoDB, a document-based Not Only SQL (NoSQL) database. Next, we analyzed the algorithm of the most representative traceability query and defined a query-level performance model, which is composed of response times for the components of the traceability query algorithm. Next, this performance model was solidified as a linear regression model because a benchmark test showed that the response times increase linearly. Finally, for a case analysis, we applied the performance model to a virtual automobile parts logistics system. As a result of the case study, we verified the scalability of a MongoDB-based traceability system and predicted the point when data node servers should be expanded in this case. The traceability system performance assessment method proposed in this research can be used as a decision-making tool for hardware capacity planning during the initial stage of construction of traceability systems and during their operational phase. PMID:27983654

  20. Performance Prediction of a MongoDB-Based Traceability System in Smart Factory Supply Chains

    Directory of Open Access Journals (Sweden)

    Yong-Shin Kang

    2016-12-01

In the future, with the advent of the smart factory era, manufacturing and logistics processes will become more complex, and the complexity and criticality of traceability will further increase. This research aims at developing a performance assessment method to verify scalability when implementing traceability systems based on key technologies for smart factories, such as Internet of Things (IoT) and BigData. To this end, based on existing research, we analyzed traceability requirements and an event schema for storing traceability data in MongoDB, a document-based Not Only SQL (NoSQL) database. Next, we analyzed the algorithm of the most representative traceability query and defined a query-level performance model, which is composed of response times for the components of the traceability query algorithm. Next, this performance model was solidified as a linear regression model because a benchmark test showed that the response times increase linearly. Finally, for a case analysis, we applied the performance model to a virtual automobile parts logistics system. As a result of the case study, we verified the scalability of a MongoDB-based traceability system and predicted the point when data node servers should be expanded in this case. The traceability system performance assessment method proposed in this research can be used as a decision-making tool for hardware capacity planning during the initial stage of construction of traceability systems and during their operational phase.

  1. Plant corrosion: prediction of materials performance

    International Nuclear Information System (INIS)

    Strutt, J.E.; Nicholls, J.R.

    1987-01-01

    Seventeen papers have been compiled forming a book on computer-based approaches to corrosion prediction in a wide range of industrial sectors, including the chemical, petrochemical and power generation industries. Two papers have been selected and indexed separately. The first describes a system operating within BNFL's Reprocessing Division to predict materials performance in corrosive conditions to aid future plant design. The second describes the truncation of the distribution function of pit depths during high temperature oxidation of a 20Cr austenitic steel in the fuel cladding in AGR systems. (U.K.)

  2. PV (photovoltaics) performance evaluation and simulation-based energy yield prediction for tropical buildings

    International Nuclear Information System (INIS)

    Saber, Esmail M.; Lee, Siew Eang; Manthapuri, Sumanth; Yi, Wang; Deb, Chirag

    2014-01-01

Air pollution and climate change have increased the importance of renewable energy resources like solar energy in the last decades. Rack-mounted PhotoVoltaics (PV) and Building Integrated PhotoVoltaics (BIPV) are the most common photovoltaic systems, which convert the solar radiation incident on a façade or the surrounding area into electricity. In this paper the performance of different solar cell types is evaluated for the tropical weather of Singapore. As a case study, on-site measured data from PV systems implemented in a zero energy building in Singapore are analyzed. Different types of PV systems (silicon wafer and thin film) have been installed on the rooftop, façade, car park shelter, railings, etc. The impact of different solar cell generations, array environmental conditions (no shading, dappled shading, full shading), orientation (south, north, east, or west facing), and inclination (between PV module and the horizontal direction) on module performance is investigated. In the second stage of the research, all the PV systems in the case study are simulated in the EnergyPlus energy simulation software with several PV performance models, including Simple, Equivalent one-diode, and Sandia. The results predicted by the different models are compared with measured data, and the validated model is used to provide simulation-based energy yield predictions for a wide range of scenarios. It is concluded that the orientation of low-slope rooftop PV has a negligible impact on annual energy yield, but for PV external sunshades, the east façade and a panel slope of 30–40° are the most suitable location and inclination. - Highlights: • Characteristics of PV systems in tropics are analyzed in depth. • The ambiguity toward amorphous panel energy yield in tropics is discussed. • Equivalent-one diode and Sandia models can fairly predict the energy yield. • A general guideline is provided to estimate the energy yield of PV systems in tropics

  3. Predicting energy performance of a net-zero energy building: A statistical approach

    International Nuclear Information System (INIS)

    Kneifel, Joshua; Webb, David

    2016-01-01

    Highlights: • A regression model is applied to actual energy data from a net-zero energy building. • The model is validated through a rigorous statistical analysis. • Comparisons are made between model predictions and those of a physics-based model. • The model is a viable baseline for evaluating future models from the energy data. - Abstract: Performance-based building requirements have become more prevalent because it gives freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid Climate Zone, and compares these

  4. Speech Intelligibility Prediction Based on Mutual Information

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, Cees H.

    2014-01-01

This paper deals with the problem of predicting the average intelligibility of noisy and potentially processed speech signals, as observed by a group of normal hearing listeners. We propose a model which performs this prediction based on the hypothesis that intelligibility is monotonically related to the mutual information between critical-band amplitude envelopes of the clean signal and the corresponding noisy/processed signal. The resulting intelligibility predictor turns out to be a simple function of the mean-square error (mse) that arises when estimating a clean critical-band amplitude using a minimum mean-square error (mmse) estimator based on the noisy/processed amplitude. The proposed model predicts that speech intelligibility cannot be improved by any processing of noisy critical-band amplitudes. Furthermore, the proposed intelligibility predictor performs well (ρ > 0.95) in predicting…
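The mse-to-information link mentioned in this record has a simple closed form under a Gaussian assumption: for jointly Gaussian, unit-variance amplitudes with correlation ρ, the mutual information is -(1/2)·log(1 - ρ²), and the mmse of estimating one from the other is exactly 1 - ρ², so I = -(1/2)·log(mse). The sketch below illustrates that relation only; the per-band summation and the data are assumptions, not the paper's implementation:

```python
import math

def band_information(mse):
    """Mutual information (bits) between clean and estimated envelopes,
    under a Gaussian assumption with unit-variance amplitudes, where
    `mse` is the error of the MMSE estimate (0 < mse <= 1)."""
    return -0.5 * math.log2(mse)

def information_score(band_mses):
    # Total information across critical bands; the paper only claims a
    # monotonic relation between such a quantity and intelligibility.
    return sum(band_information(m) for m in band_mses)

clean_ish = information_score([0.1] * 15)  # low per-band estimation error
noisy_ish = information_score([0.8] * 15)  # high per-band estimation error
assert clean_ish > noisy_ish
```

Note that any post-processing of the noisy amplitudes can only increase (or leave unchanged) the mmse of estimating the clean amplitude, which is the data-processing argument behind the paper's claim that such processing cannot improve predicted intelligibility.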

  5. Predicting High-Power Performance in Professional Cyclists.

    Science.gov (United States)

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
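The power-duration relationship in this record has a simple closed form: all-out power decays exponentially from sprint peak power toward maximal aerobic power (MAP). A minimal sketch, where the decay constant k = 0.026 s⁻¹ is an illustrative value from the APR literature rather than a figure reported in the abstract:

```python
import math

def predict_power(duration_s, map_w, sprint_peak_w, k=0.026):
    """Anaerobic power reserve (APR) model: predicted all-out power for a
    given duration decays exponentially from sprint peak power toward
    maximal aerobic power (MAP). k (1/s) is the fitted decay constant;
    0.026 is an illustrative placeholder, fitted per athlete in practice."""
    apr = sprint_peak_w - map_w          # anaerobic power reserve
    return map_w + apr * math.exp(-k * duration_s)
```

For example, with MAP = 400 W and a sprint peak of 1200 W, the model predicts roughly 1100 W for a 5 s effort and essentially MAP beyond 300 s, matching the 5 to ~300 s range the study evaluated.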

  6. [Development and Application of a Performance Prediction Model for Home Care Nursing Based on a Balanced Scorecard using the Bayesian Belief Network].

    Science.gov (United States)

    Noh, Wonjung; Seomun, Gyeongae

    2015-06-01

    This study was conducted to develop key performance indicators (KPIs) for home care nursing (HCN) based on a balanced scorecard, and to construct a performance prediction model of strategic objectives using the Bayesian Belief Network (BBN). This methodological study included four steps: establishment of KPIs, performance prediction modeling, development of a performance prediction model using BBN, and simulation of a suggested nursing management strategy. An HCN expert group and a staff group participated. The content validity index was analyzed using STATA 13.0, and BBN was analyzed using HUGIN 8.0. We generated a list of KPIs composed of 4 perspectives, 10 strategic objectives, and 31 KPIs. In the validity test of the performance prediction model, the factor with the greatest variance for increasing profit was maximum cost reduction of HCN services. The factor with the smallest variance for increasing profit was a minimum image improvement for HCN. During sensitivity analysis, the probability of the expert group did not affect the sensitivity. Furthermore, simulation of a 10% image improvement predicted the most effective way to increase profit. KPIs of HCN can estimate financial and non-financial performance. The performance prediction model for HCN will be useful to improve performance.

  7. Predicting emergency diesel starting performance

    International Nuclear Information System (INIS)

    DeBey, T.M.

    1989-01-01

    The US Department of Energy effort to extend the operational lives of commercial nuclear power plants has examined methods for predicting the performance of specific equipment. This effort focuses on performance prediction as a means for reducing equipment surveillance, maintenance, and outages. Realizing these goals will result in nuclear plants that are more reliable, have lower maintenance costs, and have longer lives. This paper describes a monitoring system that has been developed to predict starting performance in emergency diesels. A prototype system has been built and tested on an engine at Sandia National Laboratories. 2 refs

  8. The electronic residency application service application can predict accreditation council for graduate medical education competency-based surgical resident performance.

    Science.gov (United States)

    Tolan, Amy M; Kaji, Amy H; Quach, Chi; Hines, O Joe; de Virgilio, Christian

    2010-01-01

    Program directors often struggle to determine which factors in the Electronic Residency Application Service (ERAS) application are important in the residency selection process. With the establishment of the Accreditation Council for Graduate Medical Education (ACGME) competencies, it would be important to know whether information available in the ERAS application can predict subsequent competency-based performance of general surgery residents. This study is a retrospective correlation of data points found in the ERAS application with core competency-based clinical rotation evaluations. ACGME competency-based evaluations as well as technical skills assessment from all rotations during residency were collected. The overall competency score was defined as an average of all 6 competencies and technical skills. A total of 77 residents from two (one university and one community-based university-affiliated) general surgery residency programs were included in the analysis. Receiving honors for many of the third-year clerkships and AOA membership were associated with a number of the individual competencies. USMLE scores were predictive only of Medical Knowledge (p = 0.004). Factors associated with higher overall competency were female gender (p = 0.02), AOA (p = 0.06), overall number of honors received (p = 0.04), and honors in Ob/Gyn (p = 0.03) and Pediatrics (p = 0.05). Multivariable analysis showed honors in Ob/Gyn, female gender, older age, and total number of honors to be predictive of a number of individual core competencies. USMLE scores were only predictive of Medical Knowledge. The ERAS application is useful for predicting subsequent competency-based performance in surgical residents. Receiving honors in the surgery clerkship, which has traditionally carried weight when evaluating a potential surgery resident, may not be as strong a predictor of future success. Copyright © 2010 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. EPRI MOV performance prediction program

    International Nuclear Information System (INIS)

    Hosler, J.F.; Damerell, P.S.; Eidson, M.G.; Estep, N.E.

    1994-01-01

    An overview of the EPRI Motor-Operated Valve (MOV) Performance Prediction Program is presented. The objectives of this Program are to better understand the factors affecting the performance of MOVs and to develop and validate methodologies to predict MOV performance. The Program involves valve analytical modeling, separate-effects testing to refine the models, and flow-loop and in-plant MOV testing to provide a basis for model validation. The ultimate product of the Program is an MOV Performance Prediction Methodology applicable to common gate, globe, and butterfly valves. The methodology predicts thrust and torque requirements at design-basis flow and differential pressure conditions, assesses the potential for gate valve internal damage, and provides test methods to quantify potential variations in actuator output thrust with loading condition. Key findings and their potential impact on MOV design and engineering application are summarized

  10. Determining Mean Predicted Performance for Army Job Families

    National Research Council Canada - National Science Library

    Zeidner, Joseph

    2003-01-01

    The present study is designed to obtain mean predicted performance (MPP) values for the 9- and 17-job families, using composites based on 7 ASVAB tests and a triple cross-validation design permitting completely unbiased estimates of MPP...

  11. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
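The modelling approach in this record is ordinary logistic regression on firm-level and macroeconomic covariates. A self-contained sketch on synthetic data; the covariates, coefficients, and sample size here are invented placeholders, not the Taiwanese data set:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the study's data: one firm-level covariate
# (asset growth) and two macro covariates (GDP growth, stock index
# return). Names, coefficients and sample size are illustrative.
n = 2000
X = rng.standard_normal((n, 3))
true_beta = np.array([-1.2, -0.8, -0.5])
logits = -1.0 + X @ true_beta
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)  # 1 = default

def fit_logistic(X, y, lr=0.1, iters=3000):
    """Maximum-likelihood logistic regression via batch gradient ascent."""
    Xb = np.hstack([np.ones((len(X), 1)), X])        # prepend intercept
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (y - p) / len(y)         # average log-lik gradient
    return beta

beta_hat = fit_logistic(X, y)   # [intercept, asset growth, GDP, stock index]
```

Predictive power can then be compared across nested models (e.g. with and without the macro covariates) using goodness-of-fit tests and ROC curves, as the study does.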

  12. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
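The model form implied by this record is the cumulative Weibull distribution. A minimal sketch; the parameterization shown, with λ as the characteristic time and n as the shape parameter, follows the abstract, while the specific numbers are illustrative:

```python
import math

def weibull_saccharification(t, y_max, lam, n):
    """Weibull-statistics hydrolysis model: cumulative sugar yield at time t.
    lam ('characteristic time') sets the time scale, n the shape; y_max is
    the final attainable yield. The exact parameterization is an assumption
    consistent with the abstract, not copied from the paper."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))
```

At t = λ the predicted yield is always 1 − e⁻¹ ≈ 63.2% of y_max, independent of n, which is why λ alone can summarize overall saccharification performance.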

  13. Optimizing the stimulus presentation paradigm design for the P300-based brain-computer interface using performance prediction.

    Science.gov (United States)

    Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S

    2017-08-01

    The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for
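The "noisy communication system" view in this record can be made concrete with the simplest possible channel model: if each stimulus event is classified as target/non-target with crossover probability p, the per-stimulus channel is binary symmetric with capacity 1 − H(p). This is a toy stand-in for the paper's richer ERP elicitation model:

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel with
    crossover probability p -- a simplified stand-in for the per-stimulus
    ERP detection channel in the BCI model."""
    return 1.0 - binary_entropy(p)
```

Capacity falls from 1 bit per stimulus at p = 0 to 0 at p = 0.5, illustrating the information-rate versus reliability trade-off that the performance-based paradigm is tuned around.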

  14. Performance predictions for solar-chemical converters based on photoelectrochemical I-V curves

    Energy Technology Data Exchange (ETDEWEB)

    Luttmer, J.D.; Trachtenberg, I.

    1985-06-01

    Texas Instruments' solar energy system contains a solar-chemical converter (SCC) which converts solar energy into chemical energy via the electrolysis of hydrobromic acid (HBr) into hydrogen (H2) and bromine (Br2). Previous predictions of SCC performance have employed electrical dry-probe data and a computer simulation model to predict the H2 generation rates. The method of prediction described here makes use of the photoelectrochemical I-V curves to determine the ''wet''-probe parameters V_oc, J_sc, FF, and efficiency for anodes and cathodes. The advantages of this technique over the dry-probe/computer simulation method are discussed. A comparison of predicted and measured H2 generation rates is presented. Solar-to-chemical efficiencies of 8.6% have been both predicted and measured for the electrolysis of 48% HBr to hydrogen and bromine by a full anode/cathode array. Individual cathode solar-to-hydrogen efficiencies of 9.5% have been obtained.
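The wet-probe parameters named in this record combine into the standard photovoltaic figure of merit. A sketch, where the AM1.5 input power density of 100 mW/cm² is an assumed default; converting this electrical efficiency into the quoted solar-to-chemical efficiency additionally requires the HBr electrolysis operating point:

```python
def photoelectrode_efficiency(v_oc, j_sc, ff, p_in=100.0):
    """Efficiency (in percent) implied by wet-probe I-V parameters:
    open-circuit voltage V_oc (V), short-circuit current density J_sc
    (mA/cm^2), fill factor FF, and incident solar power density p_in
    (mW/cm^2; 100 is an assumed AM1.5 default). This is the conventional
    photovoltaic figure of merit, used here as an illustrative sketch."""
    return 100.0 * (v_oc * j_sc * ff) / p_in
```

For example, an electrode with V_oc = 0.6 V, J_sc = 20 mA/cm² and FF = 0.7 would come out at 8.4%, in the same range as the 8.6% solar-to-chemical efficiency quoted above.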

  15. Predictive testing of performance of metals in HTR service environments

    International Nuclear Information System (INIS)

    Kondo, T.; Shindo, M.; Tamura, M.; Tsuji, H.; Kurata, Y.; Tsukada, T.

    1982-01-01

    The status of materials testing in simulated HTGR environments is reviewed, with special attention to the methodology for predicting long-term performance. The importance of controlling the effective chemical-potential relations at the material-environment interface is stressed, in view of the complex, interdependent kinetic relation between oxidation and carbon transport. Based on recent experimental observations, proposals are made to establish procedures for conservative prediction of metal performance

  16. Decline Curve Based Models for Predicting Natural Gas Well Performance

    OpenAIRE

    Kamari, Arash; Mohammadi, Amir H.; Lee, Moonyong; Mahmood, Tariq; Bahadori, Alireza

    2016-01-01

    The productivity of a gas well declines over its production life, eventually to the point where it cannot cover economic requirements. To overcome such problems, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques viz. the artificial neural network (ANN) modelling strategy, least square support vector machine (LSSVM) approach, adaptive neuro-fuzzy ...

  17. Genomic Prediction of Testcross Performance in Canola (Brassica napus)

    Science.gov (United States)

    Jan, Habib U.; Abbadi, Amine; Lücke, Sophie; Nichols, Richard A.; Snowdon, Rod J.

    2016-01-01

    Genomic selection (GS) is a modern breeding approach where genome-wide single-nucleotide polymorphism (SNP) marker profiles are simultaneously used to estimate performance of untested genotypes. In this study, the potential of genomic selection methods to predict testcross performance for hybrid canola breeding was applied for various agronomic traits based on genome-wide marker profiles. A total of 475 genetically diverse spring-type canola pollinator lines were genotyped at 24,403 single-copy, genome-wide SNP loci. In parallel, the 950 F1 testcross combinations between the pollinators and two representative testers were evaluated for a number of important agronomic traits including seedling emergence, days to flowering, lodging, oil yield and seed yield along with essential seed quality characters including seed oil content and seed glucosinolate content. A ridge-regression best linear unbiased prediction (RR-BLUP) model was applied in combination with 500 cross-validations for each trait to predict testcross performance, both across the whole population as well as within individual subpopulations or clusters, based solely on SNP profiles. Subpopulations were determined using multidimensional scaling and K-means clustering. Genomic prediction accuracy across the whole population was highest for seed oil content (0.81) followed by oil yield (0.75) and lowest for seedling emergence (0.29). For seed yield, seed glucosinolate, lodging resistance and days to onset of flowering (DTF), prediction accuracies were 0.45, 0.61, 0.39 and 0.56, respectively. Prediction accuracies could be increased for some traits by treating subpopulations separately; a strategy which only led to moderate improvements for some traits with low heritability, like seedling emergence. No useful or consistent increase in accuracy was obtained by inclusion of a population substructure covariate in the model. Testcross performance prediction using genome-wide SNP markers shows considerable
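The RR-BLUP step in this record amounts to ridge regression on the SNP design matrix. A toy sketch on synthetic data; the dimensions, heritability, and shrinkage parameter lam are invented placeholders (the study used 24,403 SNPs, 950 testcrosses and 500 cross-validation replicates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the canola data: 200 lines x 500 SNPs coded -1/0/1,
# additive marker effects. All sizes and variances are illustrative.
n_lines, n_snps = 200, 500
Z = rng.integers(0, 3, size=(n_lines, n_snps)) - 1
u = rng.normal(0, 0.05, n_snps)                 # true marker effects
y = Z @ u + rng.normal(0, 1.0, n_lines)         # testcross phenotypes

def rrblup_predict(Z_train, y_train, Z_test, lam=100.0):
    """Ridge-regression BLUP: shrinkage estimate of marker effects,
    u_hat = Z' (Z Z' + lam I)^{-1} (y - mean), then GEBV = Z_test u_hat."""
    mu = y_train.mean()
    G = Z_train @ Z_train.T + lam * np.eye(len(y_train))
    u_hat = Z_train.T @ np.linalg.solve(G, y_train - mu)
    return mu + Z_test @ u_hat

# One train/test split (the study averaged over 500 replicates)
train, test = slice(0, 150), slice(150, 200)
pred = rrblup_predict(Z[train], y[train], Z[test])
acc = np.corrcoef(pred, y[test])[0, 1]          # prediction accuracy
```

Prediction accuracy is the correlation between predicted and observed testcross values in the held-out set, the quantity reported per trait in the abstract.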

  18. Performance Prediction of Centrifugal Compressor for Drop-In Testing Using Low Global Warming Potential Alternative Refrigerants and Performance Test Codes

    Directory of Open Access Journals (Sweden)

    Joo Hoon Park

    2017-12-01

    Full Text Available As environmental regulations to stall global warming are strengthened around the world, studies using newly developed low global warming potential (GWP) alternative refrigerants are increasing. In this study, substitute refrigerants, R-1234ze(E) and R-1233zd(E), were used in the centrifugal compressor of an R-134a 2-stage centrifugal chiller with a fixed rotational speed. Performance predictions and thermodynamic analyses of the centrifugal compressor for drop-in testing were performed. A performance prediction method based on the existing ASME PTC-10 performance test code was proposed. The proposed method yielded the expected operating area and operating point of the centrifugal compressor with alternative refrigerants. The thermodynamic performance of the first and second stages of the centrifugal compressor was calculated as the polytropic state. To verify the suitability of the proposed method, the drop-in test results of the two alternative refrigerants were compared. The predicted operating range based on the permissible deviation of ASME PTC-10 confirmed that the temperature difference was very small at the same efficiency. Because the drop-in test of R-1234ze(E) was performed within the expected operating range, the centrifugal compressor using R-1234ze(E) is considered well predicted. However, the predictions of the operating point and operating range of R-1233zd(E) were lower than those of the drop-in test. The proposed performance prediction method will assist in understanding thermodynamic performance at the expected operating point and operating area of a centrifugal compressor using alternative gases based on limited design and structure information.

  19. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology to maintain real time power generation and load balance, and to ensure the quality of power supply. Power grids require each power generation unit to have satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of power generation unit. However, the commonly-used method to calculate these indices is based on particular data samples from AGC responses and will lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between performance indices and load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.

  20. Chromosomal regions involved in hybrid performance and heterosis : their AFLP-based identification and practical use in prediction models

    NARCIS (Netherlands)

    Vuylsteke, M.; Kuiper, M.; Stam, P.

    2000-01-01

    In this paper, a novel approach towards the prediction of hybrid performance and heterosis is presented. Here, we describe an approach based on: (i) the assessment of associations between AFLP® (AFLP® is a registered trademark of Keygene N.V.; the methylation AFLP® method is subject to a patent

  1. Predicting Performance in Higher Education Using Proximal Predictors

    NARCIS (Netherlands)

    Niessen, A Susan M; Meijer, Rob R; Tendeiro, Jorge N

    2016-01-01

    We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach,

  2. On the development and performance evaluation of a multiobjective GA-based RBF adaptive model for the prediction of stock indices

    Directory of Open Access Journals (Sweden)

    Babita Majhi

    2014-09-01

    Full Text Available This paper develops and assesses the performance of a hybrid prediction model using a radial basis function neural network and non-dominated sorting multiobjective genetic algorithm-II (NSGA-II) for various stock market forecasts. The proposed technique simultaneously optimizes two mutually conflicting objectives: the structure (the number of centers in the hidden layer) and the output mean square error (MSE) of the model. The best compromised non-dominated solution-based model was determined from the optimal Pareto front using fuzzy set theory. The performances of this model were evaluated in terms of four different measures using Standard and Poor 500 (S&P500) and Dow Jones Industrial Average (DJIA) stock data. The results of the simulation of the new model demonstrate a prediction performance superior to that of the conventional radial basis function (RBF)-based forecasting model in terms of the mean average percentage error (MAPE), directional accuracy (DA), Theil's U and average relative variance (ARV) values.

  3. Comparison of ITER performance predicted by semi-empirical and theory-based transport models

    International Nuclear Information System (INIS)

    Mukhovatov, V.; Shimomura, Y.; Polevoi, A.

    2003-01-01

    The values of Q=(fusion power)/(auxiliary heating power) predicted for ITER by three different methods, i.e., transport model based on empirical confinement scaling, dimensionless scaling technique, and theory-based transport models are compared. The energy confinement time given by the ITERH-98(y,2) scaling for an inductive scenario with plasma current of 15 MA and plasma density 15% below the Greenwald value is 3.6 s with one technical standard deviation of ±14%. These data are translated into a Q interval of [7-13] at the auxiliary heating power P_aux = 40 MW and [7-28] at the minimum heating power satisfying a good confinement ELMy H-mode. Predictions of dimensionless scalings and theory-based transport models such as Weiland, MMM and IFS/PPPL overlap with the empirical scaling predictions within the margins of uncertainty. (author)

  4. Towards cycle-accurate performance predictions for real-time embedded systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.; Arabnia, H.R.; Deligiannidis, L.; Jandieri, G.

    2013-01-01

    In this paper we present a model-based performance analysis method for component-based real-time systems, featuring cycle-accurate predictions of latencies and enhanced system robustness. The method incorporates the following phases: (a) instruction-level profiling of SW components, (b) modeling the

  5. Enhancing pavement performance prediction models for the Illinois Tollway System

    Directory of Open Access Journals (Sweden)

    Laxmikanth Premkumar

    2016-01-01

    Full Text Available Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs, and predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by the Tollway to predict the future condition of its network. The model projects future CRS ratings based on pavement type, thickness, traffic, pavement age and current CRS rating. However, with time and inclusion of newer pavement types there was a need to calibrate the existing pavement performance models, as well as develop models for newer pavement types. This study presents the results of calibrating the existing models, and developing new models for the various pavement types in the Illinois Tollway network. The predicted future condition of the pavements is used in estimating its remaining service life to failure, which is of immediate use in recommending future maintenance and rehabilitation requirements for the network. Keywords: Pavement performance models, Remaining life, Pavement management
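The projection logic described in this record can be sketched as an age-based deterioration curve shifted through the current rating. CRS is on IDOT's 1-9 scale; the functional form and the coefficients a and b below are made-up placeholders, since the actual Tollway models are fitted per pavement type, thickness and traffic:

```python
def project_crs(current_crs, current_age, future_age, a=0.06, b=1.3):
    """Illustrative CRS deterioration curve of the form CRS = 9 - a*age^b
    (a plausible general shape, not the calibrated IDOT/Tollway model).
    The curve is shifted vertically to pass through the current rating,
    then evaluated at the future age."""
    shift = current_crs - (9.0 - a * current_age ** b)
    return 9.0 - a * future_age ** b + shift
```

Remaining service life then follows by solving for the age at which the projected CRS crosses the agency's failure threshold.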

  6. Predicting performance of a ground-source heat pump system using fuzzy weighted pre-processing-based ANFIS

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

    2008-12-15

    The goal of this work is to predict the daily performance (COP) of a ground-source heat pump (GSHP) system with the minimum data set based on an adaptive neuro-fuzzy inference system (ANFIS) with a fuzzy weighted pre-processing (FWP) method. To evaluate the effectiveness of our proposal (FWP-ANFIS), a computer simulation is developed in the MATLAB environment. The comparison of the proposed hybridized system's results with the standard ANFIS results is carried out and the results are given in the tables. The efficiency of the proposed method was demonstrated by using the 3-fold cross-validation test. The statistical methods, such as the root-mean-square (RMS) error, the coefficient of multiple determination (R²) and the coefficient of variation (cov), are given to compare the predicted and actual values for model validation. The average R² value is 0.9998, the average RMS value is 0.0272 and the average cov value is 0.7733, which can be considered as very promising. The available data set for the COP of the GSHP system included 38 data patterns. The simulation results show that the FWP-based ANFIS can be used in an alternative way in these systems. The prediction results of the proposed structure were much better than the standard ANFIS results. Therefore, instead of limited experimental data found in the literature, faster and simpler solutions are obtained using hybridized structures such as FWP-based ANFIS. (author)
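The three validation statistics quoted in this record can be computed directly from paired actual/predicted COP values. A sketch; the exact normalization the paper uses for cov may differ:

```python
import numpy as np

def validation_stats(actual, predicted):
    """Model-validation statistics used in the study: root-mean-square
    error (RMS), coefficient of multiple determination (R^2), and a
    coefficient of variation of the error (cov, percent of the mean
    actual value; the paper's normalization is assumed here)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    err = predicted - actual
    rms = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((actual - actual.mean()) ** 2)
    cov = 100.0 * rms / np.mean(actual)
    return rms, r2, cov
```

A perfect predictor gives RMS = 0, R² = 1 and cov = 0, so values like R² = 0.9998 indicate a very close fit on the 38 patterns.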

  7. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

    Objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into the high and low risk case group of having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove useless area of mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to frequency characteristics of ROIs were initially computed from the discrete cosine transform and spatial domain of the images. Third, a support vector machine model based machine learning classifier was used to optimally classify the selected optimal image features to build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out based cross-validation method. Applying this improved CAD scheme to the testing dataset, an area under ROC curve, AUC = 0.70+/-0.04, which was significantly higher than using the extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63+/-0.04). This study demonstrated that the proposed approach could improve accuracy on predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer paradigm.
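The frequency-feature step in this record can be illustrated with a small self-contained version: a 2D DCT of the (square) ROI followed by band-energy pooling. The band layout and count below are invented for illustration; the study computed a specific set of 43 features:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II for a square block, implemented with numpy
    to avoid a scipy dependency."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ block @ C.T

def frequency_features(roi, n_bands=4):
    """Toy version of the paper's feature step: energies of the squared
    DCT coefficients grouped into diagonal frequency bands. Band count
    and layout are illustrative, not the paper's 43-feature set."""
    coeffs = dct2(roi.astype(float)) ** 2
    n = roi.shape[0]
    band = (np.add.outer(np.arange(n), np.arange(n)) * n_bands) // (2 * n - 1)
    return np.array([coeffs[band == b].sum() for b in range(n_bands)])
```

The resulting per-band energies would then be candidate inputs to the SVM classifier described above, alongside spatial-domain features.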

  8. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective...... evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...... from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project) where the protocol has been used to evaluate more than 10 prediction systems....
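A core element of such an evaluation protocol is comparison against trivial reference models. A sketch of the persistence reference and a normalized error measure; the toy series, capacity, and horizon are invented placeholders:

```python
import numpy as np

def persistence_forecast(measured, horizon):
    """Reference model: the forecast for time t + horizon is simply the
    last measured value at time t (forecast[i] predicts measured[i+horizon])."""
    return measured[:-horizon]

def nmae(forecast, actual, capacity):
    """Normalized mean absolute error, in percent of installed capacity,
    a standard error measure for wind power prediction evaluation."""
    return 100.0 * np.mean(np.abs(forecast - actual)) / capacity

# A candidate model is only useful if it beats persistence at the same horizon
measured = np.sin(np.linspace(0, 20, 200)) * 5 + 5   # toy power series (MW)
h = 3                                                # look-ahead steps
ref_err = nmae(persistence_forecast(measured, h), measured[h:], capacity=10.0)
```

Reporting a candidate model's improvement over persistence (and climatology) per horizon, per farm, is what makes results comparable across prediction systems.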

  9. Optimizing the stimulus presentation paradigm design for the P300-based brain-computer interface using performance prediction

    Science.gov (United States)

    Mainsah, B. O.; Reeves, G.; Collins, L. M.; Throckmorton, C. S.

    2017-08-01

    Objective. The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem of communicating uncertain information over a noisy channel. A BCI can be modeled as a noisy communication system, and an information-theoretic approach can be exploited to design a stimulus presentation paradigm that maximizes the information content presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements because they underestimated the effects of psycho-physiological factors on the P300 ERP elicitation process and had a limited ability to predict online performance. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. Approach. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic performance prediction method as an evaluation criterion to select a final configuration of the PBP. Main results. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional
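The refractory-effect constraint above, spacing out repeated presentations of the same stimulus, can be illustrated with a greedy scheduler (purely illustrative; this is not the paper's actual PBP design procedure, and `min_gap` is a made-up parameter):

```python
def spaced_schedule(items, reps, min_gap):
    """Greedily build a stimulus presentation sequence in which two
    presentations of the same item are at least min_gap slots apart,
    reducing refractory effects."""
    seq = []
    last = {i: -min_gap for i in items}   # treat every item as long overdue
    remaining = {i: reps for i in items}
    t = 0
    while any(remaining.values()):
        eligible = [i for i in items
                    if remaining[i] > 0 and t - last[i] >= min_gap]
        if eligible:
            pick = max(eligible, key=lambda i: remaining[i])  # most reps left
            seq.append(pick)
            last[pick] = t
            remaining[pick] -= 1
        else:
            seq.append(None)  # idle slot: no stimulus may be shown yet
        t += 1
    return seq

print(spaced_schedule(["a", "b", "c"], 2, 2))  # ['a', 'b', 'c', 'a', 'b', 'c']
```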

  10. MHA admission criteria and program performance: do they predict career performance?

    Science.gov (United States)

    Porter, J; Galfano, V J

    1987-01-01

    The purpose of this study was to determine to what extent admission criteria predict graduate school and career performance. The study also analyzed which objective and subjective criteria served as the best predictors. MHA graduates of the University of Minnesota from 1974 to 1977 were surveyed to assess career performance. Student files served as the database on admission criteria and program performance. Career performance was measured by four variables: total compensation, satisfaction, fiscal responsibility, and level of authority. High levels of MHA program performance were associated with women who had high undergraduate GPAs from highly selective undergraduate colleges, were undergraduate business majors, and participated in extracurricular activities. High levels of compensation were associated with relatively low undergraduate GPAs, high levels of participation in undergraduate extracurricular activities, and being single at admission to graduate school. Admission to MHA programs should be based upon both objective and subjective criteria. Emphasis should be placed upon the selection process for MHA students since admission criteria are shown to explain 30 percent of the variability in graduate program performance, and as much as 65 percent of the variance in level of position authority.

  11. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC....
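The prediction side of an FIR model can be sketched in a few lines (an illustrative convolution of the impulse response coefficients with the input sequence; the paper's regularized l2 formulation, constraints, and disturbance filter are not shown):

```python
def fir_predict(impulse, inputs):
    """Predict the output of a finite impulse response model:
    y[k] = sum_i h[i] * u[k-i], with h the impulse response coefficients."""
    n = len(impulse)
    y = []
    for k in range(len(inputs)):
        y.append(sum(impulse[i] * inputs[k - i]
                     for i in range(n) if k - i >= 0))
    return y

# A unit pulse through h = [1.0, 0.5] simply replays the impulse response.
print(fir_predict([1.0, 0.5], [1.0, 0.0, 0.0]))  # [1.0, 0.5, 0.0]
```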

  12. A human capital predictive model for agent performance in contact centres

    Directory of Open Access Journals (Sweden)

    Chris Jacobs

    2011-10-01

    Research purpose: The primary focus of this article was to develop a theoretically derived human capital predictive model for agent performance in contact centres and Business Process Outsourcing (BPO based on a review of current empirical research literature. Motivation for the study: The study was motivated by the need for a human capital predictive model that can predict agent and overall business performance. Research design: A nonempirical (theoretical research paradigm was adopted for this study and more specifically a theory or model-building approach was followed. A systematic review of published empirical research articles (for the period 2000–2009 in scholarly search portals was performed. Main findings: Eight building blocks of the human capital predictive model for agent performance in contact centres were identified. Forty-two of the human capital contact centre related articles are detailed in this study. Key empirical findings suggest that person– environment fit, job demands-resources, human resources management practices, engagement, agent well-being, agent competence; turnover intention; and agent performance are related to contact centre performance. Practical/managerial implications: The human capital predictive model serves as an operational management model that has performance implications for agents and ultimately influences the contact centre’s overall business performance. Contribution/value-add: This research can contribute to the fields of human resource management (HRM, human capital and performance management within the contact centre and BPO environment.

  13. Drug-target interaction prediction from PSSM based evolutionary information.

    Science.gov (United States)

    Mousavian, Zaynab; Khakabimamaghani, Sahand; Kavousi, Kaveh; Masoudi-Nejad, Ali

    2016-01-01

    The labor-intensive and expensive experimental process of drug-target interaction determination has motivated many researchers to focus on in silico prediction, which yields helpful information to support experimental interaction data. They have therefore proposed several computational approaches for discovering new drug-target interactions. Several learning-based methods have been developed, which can be categorized into two main groups: similarity-based and feature-based. In this paper, we first use the bi-gram features extracted from the Position Specific Scoring Matrix (PSSM) of proteins in predicting drug-target interactions. Our results demonstrate the high-confidence prediction ability of the Bigram-PSSM model in terms of several performance indicators, specifically for enzymes and ion channels. Moreover, we investigate the impact of the negative selection strategy on the performance of the prediction, which is not widely taken into account in other relevant studies. This is important, as the number of non-interacting drug-target pairs is usually extremely large in comparison with the number of interacting ones in existing drug-target interaction data. An interesting observation is that different levels of performance reduction were attained for four datasets when we changed the sampling method from random sampling to balanced sampling. Copyright © 2015 Elsevier Inc. All rights reserved.
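One common definition of bigram PSSM features, which may differ from the paper's exact normalization, multiplies the scores of adjacent sequence positions and sums over the sequence, yielding a fixed-length K*K vector regardless of protein length:

```python
def bigram_features(pssm):
    """Bigram features from an L x K PSSM:
    B[a][b] = sum_t pssm[t][a] * pssm[t+1][b], flattened to a K*K vector."""
    L, K = len(pssm), len(pssm[0])
    feats = [0.0] * (K * K)
    for t in range(L - 1):
        for a in range(K):
            for b in range(K):
                feats[a * K + b] += pssm[t][a] * pssm[t + 1][b]
    return feats

# Toy PSSM with L=3 positions and K=2 "residue" columns.
toy = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
print(bigram_features(toy))  # [0.0, 1.0, 1.0, 0.0]
```

For real proteins K = 20, so each protein maps to a 400-dimensional feature vector.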

  14. Prediction based on mean subset

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Brown, P. J.; Madsen, Henrik

    2002-01-01

    Shrinkage methods have traditionally been applied in prediction problems. In this article we develop a shrinkage method (mean subset) that forms an average of regression coefficients from individual subsets of the explanatory variables. A Bayesian approach is taken to derive an expression of how the coefficient vectors from each subset should be weighted. It is not computationally feasible to calculate the mean subset coefficient vector for larger problems, and thus we suggest an algorithm to find an approximation to the mean subset coefficient vector. In a comprehensive Monte Carlo simulation study, it is found that the proposed mean subset method has better prediction performance than the best subset method, and in some settings also better than the ridge regression and lasso methods. The conclusions drawn from the Monte Carlo study are corroborated in an example in which prediction......
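For a small number of predictors the mean subset idea can be brute-forced directly (a sketch with equal subset weights; the article derives Bayesian weights instead):

```python
import itertools

def ols(X, y):
    """Ordinary least squares via the normal equations, solved by Gaussian
    elimination (adequate for the tiny subset sizes used here)."""
    p, n = len(X[0]), len(X)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(p)]
         + [sum(X[t][i] * y[t] for t in range(n))] for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(p):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][p] / A[i][i] for i in range(p)]

def mean_subset(X, y):
    """Average full-length coefficient vectors fitted on every non-empty
    subset of the predictors."""
    p = len(X[0])
    subsets = [s for r in range(1, p + 1)
               for s in itertools.combinations(range(p), r)]
    avg = [0.0] * p
    for s in subsets:
        Xs = [[row[j] for j in s] for row in X]
        b = ols(Xs, y)
        for k, j in enumerate(s):
            avg[j] += b[k] / len(subsets)
    return avg

coef = mean_subset([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]],
                   [2.0, 0.0, 2.0, 0.0])
print(coef)  # approximately [1.333, 0.0]
```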

  15. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between
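A much-simplified stand-in for combining secondary structure predictions keeps the base pairs proposed by a sufficient fraction of the component predictors (AveRNA's actual weighted combination is more sophisticated; the threshold here is a made-up parameter):

```python
from collections import Counter

def ensemble_pairs(predictions, threshold=0.5):
    """Combine base-pair sets from multiple predictors: keep a pair if at
    least `threshold` of the predictors propose it (majority consensus)."""
    counts = Counter(p for pred in predictions for p in pred)
    need = threshold * len(predictions)
    return {pair for pair, c in counts.items() if c >= need}

# Each set holds (i, j) base pairs proposed by one hypothetical predictor.
preds = [{(1, 10), (2, 9)}, {(1, 10), (3, 8)}, {(1, 10), (2, 9)}]
print(sorted(ensemble_pairs(preds)))  # [(1, 10), (2, 9)]
```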

  16. When bad stress goes good: increased threat reactivity predicts improved category learning performance.

    Science.gov (United States)

    Ell, Shawn W; Cosley, Brandon; McCoy, Shannon K

    2011-02-01

    The way in which we respond to everyday stressors can have a profound impact on cognitive functioning. Maladaptive stress responses in particular are generally associated with impaired cognitive performance. We argue, however, that the cognitive system mediating task performance is also a critical determinant of the stress-cognition relationship. Consistent with this prediction, we observed that stress reactivity consistent with a maladaptive, threat response differentially predicted performance on two categorization tasks. Increased threat reactivity predicted enhanced performance on an information-integration task (i.e., learning is thought to depend upon a procedural-based memory system), and a (nonsignificant) trend for impaired performance on a rule-based task (i.e., learning is thought to depend upon a hypothesis-testing system). These data suggest that it is critical to consider both variability in the stress response and variability in the cognitive system mediating task performance in order to fully understand the stress-cognition relationship.

  17. Preoperative prediction of inpatient recovery of function after total hip arthroplasty using performance-based tests: a prospective cohort study.

    Science.gov (United States)

    Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U

    2016-01-01

    The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study was performed in a regional hospital in the Netherlands with 315 patients attending for THA in 2012. The dependent variable, recovery of function, was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests have significant added value over a more conventional screening with age and comorbidities to predict recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, with a TUG score >10.5 s and a walking speed <1.0 m/s are at risk of delayed recovery of functioning. Those high-risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
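The four retained predictors and their reported cut-offs can be expressed as simple screening flags (a sketch only; the study combines them in a regression model rather than by counting risk factors):

```python
def delayed_recovery_flags(age, charnley, tug_s, walk10m_s):
    """Count how many of the four reported risk factors are present:
    age > 70 years, Charnley class C, TUG > 10.5 s, 10 m walk > 10.0 s
    (i.e. walking speed below 1.0 m/s)."""
    flags = [age > 70,
             charnley == "C",
             tug_s > 10.5,
             walk10m_s > 10.0]
    return sum(flags)

print(delayed_recovery_flags(74, "C", 12.0, 11.5))  # 4
print(delayed_recovery_flags(65, "A", 8.0, 9.0))    # 0
```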

  18. The Real World Significance of Performance Prediction

    Science.gov (United States)

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  19. Predicting BCI subject performance using probabilistic spatio-temporal filters.

    Directory of Open Access Journals (Sweden)

    Heung-Il Suk

    Full Text Available Recently, spatio-temporal filtering to enhance decoding for Brain-Computer-Interfacing (BCI) has become increasingly popular. In this work, we discuss a novel, fully Bayesian (and thereby probabilistic) framework, called Bayesian Spatio-Spectral Filter Optimization (BSSFO), and apply it to a large data set of 80 non-invasive EEG-based BCI experiments. Across the full frequency range, the BSSFO framework allows one to analyze which spatio-spectral parameters are common and which ones differ across the subject population. As expected, large variability of brain rhythms is observed between subjects. We have clustered subjects according to similarities in their corresponding spectral characteristics from the BSSFO model, which is found to reflect their BCI performances well. In BCI, a considerable percentage of subjects is unable to use a BCI for communication, owing to their inability to modulate their brain rhythms, a phenomenon sometimes denoted as BCI-illiteracy or BCI-inability. Predicting individual subjects' performance before the actual, time-consuming BCI experiment enhances the usage of BCIs, e.g., by detecting users with BCI inability. This work additionally contributes by using the novel BSSFO method to predict BCI performance using only 2 minutes and 3 channels of resting-state EEG data recorded before the actual BCI experiment. Specifically, by grouping the individual frequency characteristics we have classified subjects into 'prototypes' (like μ- or β-rhythm-type subjects, or users without the ability to communicate with a BCI), and then, by further building a linear regression model based on this grouping, we could predict subjects' performance with a maximum correlation coefficient of 0.581 with the performance later seen in the actual BCI session.

  20. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems, and various hybrid models based on SI algorithms and statistical models have been developed to further improve predictive abilities. In this paper, hybrid intelligent forecasting models based on the cuckoo search (CS) as well as the singular spectrum analysis (SSA), time series, and machine learning methods are proposed to conduct short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of the SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models to improve load forecasting, which indicates that both the SSA-based data denoising and the SI-based intelligent optimization strategy can effectively improve the model's predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacity for short-term power load prediction 24 hours in advance.
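The rolling multistep strategy mentioned above can be sketched generically: predict one step ahead, append the prediction to the series, and slide forward (the function passed in is a toy stand-in for the tuned SARIMA/SVR model):

```python
def rolling_forecast(history, one_step_predict, horizon):
    """Rolling multistep forecasting: repeatedly apply a one-step-ahead
    predictor, feeding each prediction back into the series."""
    series = list(history)
    out = []
    for _ in range(horizon):
        nxt = one_step_predict(series)
        out.append(nxt)
        series.append(nxt)
    return out

# Toy one-step predictor: mean of the last two observations.
pred = rolling_forecast([1.0, 3.0], lambda s: sum(s[-2:]) / 2, 3)
print(pred)  # [2.0, 2.5, 2.25]
```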

  1. On the increase of predictive performance with high-level data fusion

    International Nuclear Information System (INIS)

    Doeswijk, T.G.; Smilde, A.K.; Hageman, J.A.; Westerhuis, J.A.; Eeuwijk, F.A. van

    2011-01-01

    The combination of different data sources for classification purposes, also called data fusion, can be done at different levels: low-level, i.e. concatenating data matrices; medium-level, i.e. concatenating data matrices after feature selection; and high-level, i.e. combining model outputs. In this paper the predictive performance of high-level data fusion is investigated. Partial least squares is used on each of the data sets and dummy variables representing the classes are used as response variables. Based on the estimated response ŷ_j for data set j and class k, a Gaussian distribution p(g_k | ŷ_j) is fitted. A simulation study is performed that shows the theoretical performance of high-level data fusion for two classes and two data sets. Within-group correlations of the predicted responses of the two models and differences between the predictive ability of each of the separate models and the fused models are studied. Results show that the error rate is always less than or equal to that of the best performing subset and can theoretically approach zero. Negative within-group correlations always improve the predictive performance. However, if the data sets have a joint basis, as with metabolomics data, this is not likely to happen. For equally performing individual classifiers the best results are expected for small within-group correlations. Fusion of a non-predictive classifier with a classifier that exhibits discriminative ability leads to increased predictive performance if the within-group correlations are strong. An example with real-life data shows the applicability of the simulation results.
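A minimal sketch of the high-level fusion rule, assuming a per-class Gaussian has been fitted to each data set's predicted response and that the data sets can be treated as independent (the fitted means and standard deviations below are hypothetical):

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fuse(yhats, class_params):
    """High-level fusion: for each class, multiply the Gaussian likelihoods
    of each data set's predicted response and return the most probable class.
    class_params maps class -> [(mean, std) for each data set]."""
    scores = {k: math.prod(gaussian_pdf(y, mu, s)
                           for y, (mu, s) in zip(yhats, per_set))
              for k, per_set in class_params.items()}
    return max(scores, key=scores.get)

# Hypothetical fitted parameters for two classes and two data sets.
params = {"A": [(1.0, 0.5), (1.0, 0.5)],
          "B": [(0.0, 0.5), (0.0, 0.5)]}
print(fuse([0.9, 0.8], params))  # A
```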

  2. Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.

    Science.gov (United States)

    Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias

    2015-06-25

    Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.

  3. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    Science.gov (United States)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
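The scalar case of the unscented transform can be sketched directly; the nonlinear map `f` below stands in for the EOL simulation, and `kappa` is the usual scaling parameter:

```python
import math

def unscented_transform(mean, var, f, kappa=1.0):
    """Scalar (n=1) unscented transform: propagate 2n+1 sigma points
    through f and recombine to approximate the mean and variance of f(x)."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(p) for p in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# For a linear map the transform is exact: y = 2x + 1 over N(0, 1).
m, v = unscented_transform(0.0, 1.0, lambda x: 2 * x + 1)
print(round(m, 6), round(v, 6))  # 1.0 4.0
```

Only three simulations are needed here, which is the computational saving the paper exploits relative to sampling-based prediction.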

  4. Deep Recurrent Model for Server Load and Performance Prediction in Data Center

    Directory of Open Access Journals (Sweden)

    Zheng Huang

    2017-01-01

    Full Text Available Recurrent neural networks (RNNs) have been widely applied to many sequential tagging tasks such as natural language processing (NLP) and time series analysis, and they have proved effective in those areas. In this paper, we propose using an RNN with long short-term memory (LSTM) units for server load and performance prediction. Classical methods for performance prediction focus on building a relation between performance and the time domain, which rests on many unrealistic hypotheses. Our model is built based on events (user requests), which are the root cause of server performance. We predict the performance of the servers using RNN-LSTM by analyzing the logs of servers in a data center, which contain users' access sequences. Previous work on workload prediction could not generate a detailed simulated workload, which is useful in testing the working condition of servers. Our method provides a new way to reproduce user request sequences to solve this problem by using RNN-LSTM. Experimental results show that our models perform well at generating load and predicting performance on a data set logged from an online service. We experimented with an nginx web server and a MySQL database server, and our methods can be easily applied to other servers in the data center.
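As a minimal stand-in for the recurrent model (a plain scalar tanh RNN cell rather than the paper's LSTM; the weights are arbitrary values chosen for illustration):

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    """One step of a scalar tanh RNN cell: h' = tanh(w_x*x + w_h*h + b)."""
    return math.tanh(w_x * x + w_h * h + b)

def encode_requests(xs):
    """Fold a request sequence into a final hidden state, the quantity a
    recurrent load model would feed into its prediction head."""
    h = 0.0
    for x in xs:
        h = rnn_step(x, h)
    return h

h = encode_requests([1.0, 0.0, 1.0])
print(h)  # a value in (-1, 1) summarizing the sequence
```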

  5. Predicting Performance on MOOC Assessments using Multi-Regression Models

    OpenAIRE

    Ren, Zhiyun; Rangwala, Huzefa; Johri, Aditya

    2016-01-01

    The past few years have seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs). The objectives of this study are to develop approaches to predict the scores a student may achieve on a given grade-related assessment based on information considered as prior performance or prior activity in the course. We develop a personalized linear multiple regression (PLMR) model to predict the grade for a student, prior to attempt...

  6. FERAL : Network-based classifier with application to breast cancer outcome prediction

    NARCIS (Netherlands)

    Allahyar, A.; De Ridder, J.

    2015-01-01

    Motivation: Breast cancer outcome prediction based on gene expression profiles is an important strategy for personalized patient care. To improve the performance and the consistency of markers discovered by the initial molecular classifiers, network-based outcome prediction methods (NOPs) have been proposed.

  7. Multi-Objective Predictive Balancing Control of Battery Packs Based on Predictive Current

    Directory of Open Access Journals (Sweden)

    Wenbiao Li

    2016-04-01

    Full Text Available Various balancing topologies and control methods have been proposed for the inconsistency problem of battery packs. However, these strategies focus only on a single objective, ignore the mutual interaction among various factors, and address only external manifestations of battery pack inconsistency, such as voltage balancing and state of charge (SOC) balancing. To solve these problems, multi-objective predictive balancing control (MOPBC) based on predictive current is proposed in this paper: during the driving of an electric vehicle, predictive control is used to predict the battery pack output current at the next time step. From this information, the impact of the output current on the battery pack temperature can be obtained. This influence is then added to the battery pack balancing control, so that the present degradation, temperature, and SOC imbalances are balanced automatically as the output current changes at the next moment. Based on MOPBC, a simulation model of the balancing circuit is built with four cells in Matlab/Simulink. The simulation results show that MOPBC is not only better than the other traditional balancing control strategies but also reduces the energy loss in the balancing process.

  8. Data-Based Predictive Control with Multirate Prediction Step

    Science.gov (United States)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that its computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
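The multirate prediction step can be illustrated by the prediction-time grid it induces: fine sampling near the present and coarse sampling further out, so a long horizon needs fewer prediction points (the step sizes below are made-up parameter values):

```python
def multirate_horizon(dt_fine, n_fine, dt_coarse, n_coarse):
    """Build a multirate prediction grid: n_fine steps of dt_fine near the
    present, then n_coarse steps of dt_coarse further out."""
    times, t = [], 0.0
    for _ in range(n_fine):
        t += dt_fine
        times.append(round(t, 10))
    for _ in range(n_coarse):
        t += dt_coarse
        times.append(round(t, 10))
    return times

# 5 prediction points cover a 1.3 s horizon instead of 13 uniform 0.1 s steps.
print(multirate_horizon(0.1, 3, 0.5, 2))  # [0.1, 0.2, 0.3, 0.8, 1.3]
```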

  9. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, offshore tsunami observation stations based on cabled ocean-bottom pressure gauges have been actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study by Tsushima et al. (2009) proposed a method that provides instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green's functions of the linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become executable through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
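A one-dimensional, linearized sketch of a staggered-grid shallow water update (a forward-backward stand-in for the production scheme, which solves the non-linear equations with leap-frog stepping on nested 2-D grids):

```python
def shallow_water_step(eta, q, depth, dx, dt, g=9.81):
    """One update of the 1-D linear shallow water equations on a staggered
    grid with closed ends:
        d(eta)/dt = -dq/dx             (continuity, at cell centers)
        dq/dt     = -g*depth*d(eta)/dx (momentum, at cell faces)
    eta has n cell values; q has n+1 face values with q[0] = q[n] = 0."""
    n = len(eta)
    eta2 = [eta[i] - dt * (q[i + 1] - q[i]) / dx for i in range(n)]
    q2 = list(q)
    for i in range(1, n):  # boundary faces keep zero flux
        q2[i] = q[i] - dt * g * depth * (eta2[i] - eta2[i - 1]) / dx
    return eta2, q2

# An initial hump of water spreads while total volume is conserved.
eta, q = [1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]
for _ in range(3):
    eta, q = shallow_water_step(eta, q, depth=1.0, dx=1.0, dt=0.1)
print(round(sum(eta), 12))  # 1.0
```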

  10. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
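
    The measurement-based aggregation described above is essentially a population-weighted mean. A minimal sketch, with made-up municipality values rather than the Swiss survey data:

```python
# Population-weighted mean radon exposure across municipalities.
# Values are illustrative, not the actual Swiss survey data.

def weighted_mean_exposure(mean_levels, populations):
    """Weight each municipality's mean radon level by its population."""
    total_pop = sum(populations)
    return sum(level * pop for level, pop in zip(mean_levels, populations)) / total_pop

levels = [95.0, 60.0, 120.0]      # Bq/m3, per municipality
pops = [50_000, 200_000, 10_000]  # inhabitants

print(round(weighted_mean_exposure(levels, pops), 1))
```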

  11. Driving and Low Vision: Validity of Assessments for Predicting Performance of Drivers

    Science.gov (United States)

    Strong, J. Graham; Jutai, Jeffrey W.; Russell-Minda, Elizabeth; Evans, Mal

    2008-01-01

    The authors conducted a systematic review to examine whether vision-related assessments can predict the driving performance of individuals who have low vision. The results indicate that measures of visual field, contrast sensitivity, cognitive and attention-based tests, and driver screening tools have variable utility for predicting real-world…

  12. A fuzzy expert system for predicting the performance of switched reluctance motor

    International Nuclear Information System (INIS)

    Mirzaeian, B.; Moallem, M.; Lucas, Caro

    2001-01-01

    In this paper a fuzzy expert system for predicting the performance of a switched reluctance motor is developed. The design vector consists of design parameters, and the output performance variables are efficiency and torque ripple. An accurate analysis program based on the Improved Magnetic Equivalent Circuit method is used to generate the input-output data, from which the initial fuzzy rules for predicting the performance of the switched reluctance motor are produced. The initial set of fuzzy rules, with triangular membership functions, is devised using a table look-up scheme and then optimized into a set of fuzzy rules with Gaussian membership functions using a gradient descent training scheme. The performance prediction results for a 6/8, 4 kW switched reluctance motor show good agreement with the results obtained from the Improved Magnetic Equivalent Circuit method or finite element analysis. The developed fuzzy expert system can be used for fast prediction of motor performance in the optimal design process or in on-line control schemes for switched reluctance motors.
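
    The optimized rule base described above maps inputs to performance outputs through Gaussian membership functions. A minimal one-input sketch of such rule evaluation; the rule parameters below are invented for illustration, not those of the 6/8 motor:

```python
import math

# Sketch of fuzzy rule evaluation with Gaussian membership functions,
# as obtained after the gradient-descent optimization step.
# Rule parameters are illustrative, not from the SRM design.

def gaussian_mf(x, center, sigma):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def predict(x, rules):
    """Weighted-average (Sugeno-style) defuzzification over the rule base."""
    weights = [gaussian_mf(x, c, s) for c, s, _ in rules]
    outputs = [y for _, _, y in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)

# Each rule: (center, sigma, consequent output), e.g. efficiency in %.
rules = [(0.2, 0.1, 80.0), (0.5, 0.1, 90.0), (0.8, 0.1, 85.0)]
print(round(predict(0.5, rules), 2))
```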

  13. Cognitive load predicts point-of-care ultrasound simulator performance.

    Science.gov (United States)

    Aldekhyl, Sara; Cavalcanti, Rodrigo B; Naismith, Laura M

    2018-02-01

    The ability to maintain good performance with low cognitive load is an important marker of expertise. Incorporating cognitive load measurements in the context of simulation training may help to inform judgements of competence. This exploratory study investigated relationships between demographic markers of expertise, cognitive load measures, and simulator performance in the context of point-of-care ultrasonography. Twenty-nine medical trainees and clinicians at the University of Toronto with a range of clinical ultrasound experience were recruited. Participants answered a demographic questionnaire then used an ultrasound simulator to perform targeted scanning tasks based on clinical vignettes. Participants were scored on their ability to both acquire and interpret ultrasound images. Cognitive load measures included participant self-report, eye-based physiological indices, and behavioural measures. Data were analyzed using a multilevel linear modelling approach, wherein observations were clustered by participants. Experienced participants outperformed novice participants on ultrasound image acquisition. Ultrasound image interpretation was comparable between the two groups. Ultrasound image acquisition performance was predicted by level of training, prior ultrasound training, and cognitive load. There was significant convergence between cognitive load measurement techniques. A marginal model of ultrasound image acquisition performance including prior ultrasound training and cognitive load as fixed effects provided the best overall fit for the observed data. In this proof-of-principle study, the combination of demographic and cognitive load measures provided more sensitive metrics to predict ultrasound simulator performance. Performance assessments which include cognitive load can help differentiate between levels of expertise in simulation environments, and may serve as better predictors of skill transfer to clinical practice.

  14. Does resident ranking during recruitment accurately predict subsequent performance as a surgical resident?

    Science.gov (United States)

    Fryer, Jonathan P; Corcoran, Noreen; George, Brian; Wang, Ed; Darosa, Debra

    2012-01-01

    While the primary goal of ranking applicants for surgical residency training positions is to identify the candidates who will subsequently perform best as surgical residents, the effectiveness of the ranking process has not been adequately studied. We evaluated our general surgery resident recruitment process between 2001 and 2011 inclusive, to determine whether our recruitment ranking parameters effectively predicted subsequent resident performance. We identified 3 candidate ranking parameters (United States Medical Licensing Examination [USMLE] Step 1 score, unadjusted ranking score [URS], and final adjusted ranking [FAR]) and 4 resident performance parameters (American Board of Surgery In-Training Examination [ABSITE] score, PGY1 resident evaluation grade [REG], overall REG, and independent faculty rating ranking [IFRR]), and assessed whether the former were predictive of the latter. Analyses utilized the Spearman correlation coefficient. We found that the URS, which is based on objective and criterion-based parameters, was a better predictor of subsequent performance than the FAR, which is a modification of the URS based on subsequent determinations of the resident selection committee. The USMLE score was a reliable predictor of ABSITE scores only. However, when we compared our worst resident performances with those of the other residents in this evaluation, the data did not produce convincing evidence that poor resident performances could be reliably predicted by any of the recruitment ranking parameters. Finally, stratifying candidates based on their rank range did not effectively define a ranking cut-off beyond which resident performance would drop off. Based on these findings, we suggest that surgery programs may be better served by a more structured resident ranking process and that subsequent adjustments to the rank list generated by this process should be undertaken with caution. Copyright © 2012 Association of Program Directors in Surgery.
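
    The Spearman coefficient used in this analysis can be computed directly from average ranks. A self-contained sketch on hypothetical candidate data (the scores below are not from the study):

```python
# Spearman rank correlation, the statistic used to compare ranking
# parameters with performance parameters. Pure-Python sketch; the data
# values are made up for illustration.

def rank(values):
    """Average ranks, with ties sharing the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

usmle = [220, 245, 230, 250, 238]   # hypothetical ranking parameter
absite = [40, 75, 55, 62, 80]       # hypothetical performance percentile
print(round(spearman(usmle, absite), 3))
```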

  15. A core competency-based objective structured clinical examination (OSCE) can predict future resident performance.

    Science.gov (United States)

    Wallenstein, Joshua; Heron, Sheryl; Santen, Sally; Shayne, Philip; Ander, Douglas

    2010-10-01

    This study evaluated the ability of an objective structured clinical examination (OSCE) administered in the first month of residency to predict future resident performance in the Accreditation Council for Graduate Medical Education (ACGME) core competencies. Eighteen Postgraduate Year 1 (PGY-1) residents completed a five-station OSCE in the first month of postgraduate training. Performance was graded in each of the ACGME core competencies. At the end of 18 months of training, faculty evaluations of resident performance in the emergency department (ED) were used to calculate a cumulative clinical evaluation score for each core competency. The correlations between OSCE scores and clinical evaluation scores at 18 months were assessed on an overall level and in each core competency. There was a statistically significant correlation between overall OSCE scores and overall clinical evaluation scores (R = 0.48), as well as in the competency of patient care (R = 0.49). An early-residency OSCE has the ability to predict future postgraduate performance on a global level and in specific core competencies. Used appropriately, such information can be a valuable tool for program directors in monitoring residents' progress and providing more tailored guidance. © 2010 by the Society for Academic Emergency Medicine.

  16. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum-phase systems, but extensions for the control of non-linear systems have also been proposed. GPC is model-based, and in this paper we propose the use of a neural network for modeling the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system.

  17. Map-Based Power-Split Strategy Design with Predictive Performance Optimization for Parallel Hybrid Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jixiang Fan

    2015-09-01

    In this paper, a map-based optimal energy management strategy is proposed to improve the consumption economy of a plug-in parallel hybrid electric vehicle. In the design of the maps, which provide both the torque split between engine and motor and the gear shift, not only the current vehicle speed and power demand but also the optimality based on the predicted trajectory of the vehicle dynamics are considered. To seek this optimality, the equivalent consumption, which trades off the fuel and electricity usage, is chosen as the cost function. Moreover, in order to reduce model errors in the optimization conducted in the discrete time domain, a variational integrator is employed to calculate the evolution of the vehicle dynamics. To evaluate the proposed energy management strategy, simulation results obtained on a professional GT-Suit simulator are presented, and a comparison with a real-time optimization method is given to show the advantage of the proposed off-line optimization approach.

  18. Artificial neural network analysis based on genetic algorithm to predict the performance characteristics of a cross flow cooling tower

    Science.gov (United States)

    Wu, Jiasheng; Cao, Lin; Zhang, Guoqiang

    2018-02-01

    Cooling towers are widely used as cooling equipment in air conditioning, and there would be broad application prospects if they could be used in reverse as a heat source under heat-pump heating conditions. In view of the complex non-linear relationships among the parameters of the heat and mass transfer process inside the tower, a BP neural network model optimized by a genetic algorithm (GABP neural network model) is established in this paper for the reverse use of a cross-flow cooling tower. The model adopts a structure of 6 inputs, 13 hidden nodes and 8 outputs. With this model, a total of eight main performance parameters, including the outlet air dry-bulb temperature, wet-bulb temperature, water temperature, heat, sensible heat ratio, heat-absorbing efficiency and Lewis number, were predicted. Furthermore, the established network model is used to predict the water temperature and heat absorption of the tower at different inlet temperatures. The mean relative errors (MRE) between the BP-predicted values and the experimental values are 4.47%, 3.63%, 2.38%, 3.71%, 6.35%, 3.14%, 13.95% and 6.80%, respectively; the MREs between the GABP-predicted values and the experimental values are 2.66%, 3.04%, 2.27%, 3.02%, 6.89%, 3.17%, 11.50% and 6.57%, respectively. The results show that the predictions of the GABP network model are better than those of the BP network model and are basically consistent with the actual situation. The GABP network model can thus predict the heat and mass transfer performance of the cross-flow cooling tower well.
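
    The mean relative error (MRE) reported above is straightforward to compute. A short sketch with illustrative values (not the paper's measurements):

```python
# Mean relative error (MRE), the metric used to compare BP and GABP
# predictions against experiment. Data values are illustrative.

def mre(predicted, measured):
    """Mean of |prediction - measurement| / |measurement|, in percent."""
    errors = [abs(p - m) / abs(m) for p, m in zip(predicted, measured)]
    return 100.0 * sum(errors) / len(errors)

measured = [30.0, 28.5, 26.0, 24.2]    # e.g. outlet water temperature, deg C
predicted = [30.6, 27.9, 26.5, 24.0]
print(round(mre(predicted, measured), 2))
```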

  19. Evaluation of the performance and limitations of empirical partition-relations and process based multisurface models to predict trace element solubility in soils

    Energy Technology Data Exchange (ETDEWEB)

    Groenenberg, J.E.; Bonten, L.T.C. [Alterra, Wageningen UR, P.O. Box 47, 6700 AA Wageningen (Netherlands); Dijkstra, J.J. [Energy research Centre of the Netherlands ECN, P.O. Box 1, 1755 ZG Petten (Netherlands); De Vries, W. [Department of Environmental Systems Analysis, Wageningen University, Wageningen UR, P.O. Box 47, 6700 AA Wageningen (Netherlands); Comans, R.N.J. [Department of Soil Quality, Wageningen University, Wageningen UR, P.O. Box 47, 6700 AA Wageningen (Netherlands)

    2012-07-15

    Here we evaluate the performance and limitations of two frequently used model types for predicting trace element solubility in soils: regression-based 'partition-relations' and thermodynamically based 'multisurface models', for a large set of elements. For this purpose, partition-relations were derived for As, Ba, Cd, Co, Cr, Cu, Mo, Ni, Pb, Sb, Se, V and Zn. The multisurface model included aqueous speciation, mineral equilibria, and sorption to organic matter, Fe/Al-(hydr)oxides and clay. Both approaches were evaluated by applying them to independent data covering a wide variety of conditions. We conclude that Freundlich-based partition-relations are robust predictors for most cations and can be used for independent soils, but only within the environmental conditions of the data used for their derivation. The multisurface model is shown to successfully predict solution concentrations over a wide range of conditions. Predicted trends for oxy-anions agree well for both approaches, but with larger (random) deviations than for cations.
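
    A Freundlich-type partition relation has the form Q = Kf · c^n, which is linear in log-log space and can be fitted by ordinary least squares. A sketch on synthetic data (the coefficients are invented, not soil-derived):

```python
import math

# Fit a Freundlich-type partition relation,
#   log10(Q) = log10(Kf) + n * log10(c),
# by ordinary least squares in log-log space. Synthetic data only.

def fit_freundlich(conc, sorbed):
    """Return (Kf, n) from paired solution/solid concentrations."""
    x = [math.log10(c) for c in conc]
    y = [math.log10(q) for q in sorbed]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    n = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    log_kf = my - n * mx
    return 10 ** log_kf, n

# Synthetic data generated from Kf = 50, n = 0.8 (no noise).
conc = [0.01, 0.1, 1.0, 10.0]
sorbed = [50 * c ** 0.8 for c in conc]
kf, n = fit_freundlich(conc, sorbed)
print(round(kf, 1), round(n, 3))
```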

  20. On line performance monitoring for predictive maintenance [Paper No.: VIA - 2

    International Nuclear Information System (INIS)

    Gupta, R.K.; Chandra, Rajesh

    1981-01-01

    There will always be progressive deterioration in the performance of dynamic equipment due to normal wear, malfunctions, failures and other causes. In most cases it is possible to monitor some parameters of a system that are progressively affected as the health of the system deteriorates. On-line monitoring of such predetermined parameters, compared against preset base data generated earlier for a healthy system, proves very helpful in avoiding breakdowns and in the proper planning of preventive and predictive maintenance. With the increasing use of on-line computerised controls, the generation of design base data and a built-in self-checking feature for monitoring equipment health can be achieved by incorporating suitable software. This type of system is helpful in: (a) predicting the life of a component, (b) pre-warning the operator about impending malfunctions, (c) establishing a maintenance schedule and spares inventory, and (d) analysing failures. This type of centralised predictive maintenance becomes increasingly important where: (a) the number of equipment items is large, (b) the operation of the equipment is critical from a safety standpoint, and (c) a minimum safety margin in the performance of the component is to be maintained. Keeping this in mind, the fuel handling system of the Narora Atomic Power Project and future power plants with computerised controls will have a facility for on-line performance monitoring for predictive maintenance. The paper also describes the methodology of the technique in detail, with a few representative cases. (author)

  1. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Peng Lu

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions based on shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined automatically, and unsupervised training and supervised optimization are combined, ensuring the accuracy of model predictions while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets from the UCI repository. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, with variances of 5.78 and 4.46.

  2. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    Science.gov (United States)

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

    Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from the massive set of terms defined by GO is a difficult challenge. To address this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy-preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions that encode the massive set of GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic-similarity-based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST-based gene function prediction, where it again significantly improves prediction performance. The code for HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Empirical component model to predict the overall performance of heating coils: Calibrations and tests based on manufacturer catalogue data

    International Nuclear Information System (INIS)

    Ruivo, Celestino R.; Angrisani, Giovanni

    2015-01-01

    Highlights: • An empirical model for predicting the performance of heating coils is presented. • Low and high heating capacity cases are used for calibration. • Versions based on several effectiveness correlations are tested. • Catalogue data are considered in approach testing. • The approach is a suitable component model to be used in dynamic simulation tools. - Abstract: A simplified methodology for predicting the overall behaviour of heating coils is presented in this paper. The coil performance is predicted by the ε-NTU method. Usually manufacturers do not provide information about the overall thermal resistance or the geometric details that are required either for the device selection or to apply known empirical correlations for the estimation of the involved thermal resistances. In the present work, heating capacity tables from the manufacturer catalogue are used to calibrate simplified approaches based on the classical theory of heat exchangers, namely the effectiveness method. Only two reference operating cases are required to calibrate each approach. The validity of the simplified approaches is investigated for a relatively high number of operating cases, listed in the technical catalogue of a manufacturer. Four types of coils of three sizes of air handling units are considered. A comparison is conducted between the heating coil capacities provided by the methodology and the values given by the manufacturer catalogue. The results show that several of the proposed approaches are suitable component models to be integrated in dynamic simulation tools of air conditioning systems such as TRNSYS or EnergyPlus
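
    The ε-NTU method at the core of the component model can be sketched as follows. The crossflow (both fluids unmixed) effectiveness correlation is the standard textbook approximation, and the UA and capacity-rate values are illustrative, not taken from a manufacturer catalogue:

```python
import math

# epsilon-NTU calculation for a heating coil (crossflow, both fluids
# unmixed). Inputs are illustrative, not catalogue data.

def effectiveness_crossflow(ntu, cr):
    """Crossflow, both fluids unmixed (standard approximation)."""
    return 1 - math.exp((ntu ** 0.22 / cr) * (math.exp(-cr * ntu ** 0.78) - 1))

def coil_capacity(ua, c_air, c_water, t_water_in, t_air_in):
    """Heating capacity in W from UA and the two capacity rates (W/K)."""
    c_min, c_max = min(c_air, c_water), max(c_air, c_water)
    ntu = ua / c_min
    eps = effectiveness_crossflow(ntu, c_min / c_max)
    return eps * c_min * (t_water_in - t_air_in)

q = coil_capacity(ua=2000.0, c_air=1200.0, c_water=4200.0,
                  t_water_in=70.0, t_air_in=20.0)
print(round(q))  # heating capacity in W
```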

  4. Improved Fuzzy Modelling to Predict the Academic Performance of Distance Education Students

    Directory of Open Access Journals (Sweden)

    Osman Yildiz

    2013-12-01

    It is essential to predict distance education students' year-end academic performance early in the semester and to take precautions based on such predictions. This will, in particular, help enhance their academic performance and therefore improve the overall educational quality. The present study developed a mathematical model intended to predict distance education students' year-end academic performance using the first eight weeks of data from the learning management system. First, two fuzzy models were constructed, namely the classical fuzzy model and the expert fuzzy model, the latter being based on expert opinion. Afterwards, a genetic-fuzzy model was developed by optimizing the membership functions through a genetic algorithm. The data on distance education were collected through Moodle, an open-source learning management system, for a total of 218 students who enrolled in Basic Computer Sciences in 2012. The input data consisted of the following variables: when a student last logged on to the system after the content of a lesson was uploaded, how often he/she logged on to the system, how long he/she stayed online in the last session, the score he/she got on the quiz taken in Week 4, and the score he/she got on the midterm exam taken in Week 8. A comparison was made among the predictions of the three models concerning the students' year-end academic performance.

  5. Multivariate performance reliability prediction in real-time

    International Nuclear Information System (INIS)

    Lu, S.; Lu, H.; Kolarik, W.J.

    2001-01-01

    This paper presents a technique for predicting system performance reliability in real time, considering multiple failure modes. The technique includes on-line multivariate monitoring and forecasting of selected performance measures, together with conditional performance reliability estimates. The performance measures across time are treated as a multivariate time series. A state-space approach is used to model the multivariate time series, and recursive forecasting is performed using Kalman filtering. The predicted mean vectors and covariance matrices of the performance measures are used to assess system survival/reliability with respect to the conditional performance reliability. The technique and modeling protocol discussed in this paper provide a means to forecast and evaluate the performance of an individual system in a dynamic environment in real time. The paper also presents an example to demonstrate the technique.
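
    The recursive forecasting step can be illustrated with a scalar Kalman filter; the actual technique is multivariate, with state vectors and covariance matrices in place of the scalars below (all numbers are illustrative):

```python
# One-dimensional sketch of recursive Kalman filtering for tracking a
# performance measure in real time. A full implementation would be
# multivariate; the noise parameters and data here are illustrative.

def kalman_step(x_est, p_est, z, a=1.0, q=0.01, h=1.0, r=0.25):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Update with measurement z
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.1, 0.9, 1.05, 0.98]:   # noisy observations of a level near 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2), round(p, 3))    # estimate converges toward 1.0
```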

  6. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud; Sørensen, John Dalsgaard

    2018-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault prediction and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution...
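
    With the exponential degradation model mentioned above, reliability decays as R(t) = exp(-λt), which also gives the time at which reliability falls to a target level. A minimal sketch with an illustrative failure rate (not a calibrated offshore-wind value):

```python
import math

# Exponential failure model: constant failure rate lam gives
# R(t) = exp(-lam * t) and mean time to failure 1 / lam.
# The rate below is illustrative only.

def reliability(t, lam):
    return math.exp(-lam * t)

def time_to_reliability(r_target, lam):
    """Operating time after which reliability drops to r_target."""
    return -math.log(r_target) / lam

lam = 0.1  # failures per year (illustrative)
print(round(reliability(5.0, lam), 3))          # survival prob. at 5 years
print(round(time_to_reliability(0.9, lam), 2))  # time until R(t) = 0.9
```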

  8. Predicting work Performance through selection interview ratings and Psychological assessment

    Directory of Open Access Journals (Sweden)

    Liziwe Nzama

    2008-11-01

    The aim of the study was to establish whether selection interviews used in conjunction with psychological assessments of personality traits and cognitive functioning contribute to predicting work performance. The sample consisted of 102 managers who had recently been appointed in a retail organisation. The independent variables were selection interview ratings obtained by interviewing panels on the basis of structured competency-based interview schedules, five broad dimensions of personality defined by the Five Factor Model as measured by the 15 Factor Questionnaire (15FQ+), and cognitive processing variables (current level of work, potential level of work, and 12 processing competencies measured by the Cognitive Process Profile (CPP)). Work performance was measured through annual performance ratings that focused on measurable outputs of performance objectives. Only two predictor variables correlated statistically significantly with the criterion variable, namely interview ratings (r = 0.31) and CPP Verbal Abstraction (r = 0.34). Following multiple regression, only these variables contributed significantly to predicting work performance, but only 17.8% of the variance of the criterion was accounted for.

  9. Prediction of Human Performance Using Electroencephalography under Different Indoor Room Temperatures

    Science.gov (United States)

    Zhang, Tinghe; Mao, Zijing; Xu, Xiaojing; Zhang, Lin; Pack, Daniel J.; Dong, Bing; Huang, Yufei

    2018-01-01

    Varying indoor environmental conditions are known to affect office workers' performance; past research has reported the effects of unfavourable indoor temperature and air quality causing sick building syndrome (SBS) among office workers. Thus, investigating factors that can predict performance in changing indoor environments has become a highly important research topic with significant societal impact. While past research studies have attempted to determine predictors of performance, they do not provide satisfactory prediction ability. Therefore, in this preliminary study, we attempt to predict performance during office-work tasks under different indoor room temperatures (22.2 °C and 30 °C) from human brain signals recorded using electroencephalography (EEG). Seven participants were recruited, from whom EEG, skin temperature, heart rate and thermal survey questionnaires were collected. Regression analyses were carried out to investigate the effectiveness of using EEG power spectral densities (PSD) as predictors of performance. Our results indicate that EEG PSDs as predictors provide the highest R2 (> 0.70), which is 17 times higher than that obtained using other physiological signals as predictors, and are more robust. Finally, the paper provides insight into the selected predictors based on brain activity patterns for low- and high-performance levels under different indoor temperatures. PMID:29690601

  10. Neither here, nor there: impression management does not predict expatriate adjustment and job performance

    OpenAIRE

    HANNAH JACKSON FOLDES; DENIZ S. ONES; HANDAN KEPIR SINANGIL

    2006-01-01

    Social desirability scale scores reflect substantive individual differences related to personality. The objective of the current study was to examine whether social desirability, and impression management specifically (a component of social desirability), is predictive of adjustment and job performance for expatriates. Based on theoretical considerations, it was proposed that impression management might be linked to expatriate job performance in a predictive and mediated relationship through ...

  11. Prediction of Lunar- and Martian-Based Intra- and Site-to-Site Task Performance.

    Science.gov (United States)

    Ade, Carl J; Broxterman, Ryan M; Craig, Jesse C; Schlup, Susanna J; Wilcox, Samuel L; Warren, Steve; Kuehl, Phillip; Gude, Dana; Jia, Chen; Barstow, Thomas J

    2016-04-01

    This study aimed to investigate the feasibility of determining the physiological parameters associated with the ability to complete simulated exploration type tasks at metabolic rates which might be expected for lunar and Martian ambulation. Running V̇O2max and gas exchange threshold (GET) were measured in 21 volunteers. Two simulated extravehicular activity field tests were completed in 1 G in regular athletic apparel at two intensities designed to elicit metabolic rates of ∼20.0 and ∼30.0 ml · kg⁻¹ · min⁻¹, which are similar to those previously reported for ambulation in simulated lunar- and Martian-based environments, respectively. All subjects were able to complete the field test at the lunar intensity, but 28% were unable to complete the field test at the Martian intensity (non-Finishers). During the Martian field test there were no differences in V̇O2 between Finishers and non-Finishers, but the non-Finishers achieved a greater %V̇O2max compared to Finishers (78.4 ± 4.6% vs. 64.9 ± 9.6%). Logistic regression analysis revealed fitness thresholds for a predicted probability of 0.5, at which Finishing and non-Finishing are equally likely, and 0.75, at which an individual has a 75% chance of Finishing, to be a V̇O2max of 38.4 ml · kg⁻¹ · min⁻¹ and 40.0 ml · kg⁻¹ · min⁻¹ or a GET of 20.1 ml · kg⁻¹ · min⁻¹ and 25.1 ml · kg⁻¹ · min⁻¹, respectively (χ² = 10.2). Logistic regression analysis also revealed that the expected %V̇O2max required to complete a field test could be used to successfully predict performance (χ² = 19.3). The results of the present investigation highlight the potential utility of V̇O2max, particularly as it relates to the metabolic demands of a surface ambulation, in defining successful completion of planetary-based exploration field tests.
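
    The reported thresholds follow from inverting a fitted logistic curve. A sketch with hypothetical coefficients chosen only so that p = 0.5 falls near the study's 38.4 ml/kg/min threshold (these are not the fitted values from the study):

```python
import math

# Logistic model of finishing probability,
#   p(finish) = 1 / (1 + exp(-(b0 + b1 * vo2max))),
# and its inversion to find predictor thresholds at given probabilities.
# Coefficients are hypothetical, not the study's fitted values.

def p_finish(vo2max, b0, b1):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * vo2max)))

def threshold(p, b0, b1):
    """Invert the logistic curve: predictor value giving probability p."""
    return (math.log(p / (1 - p)) - b0) / b1

b0, b1 = -26.4917, 0.69  # hypothetical: places p = 0.5 near 38.4
print(round(threshold(0.5, b0, b1), 1))   # V.O2max at 50% finish probability
print(round(threshold(0.75, b0, b1), 1))  # V.O2max at 75% finish probability
```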

  12. Ground-based adaptive optics coronagraphic performance under closed-loop predictive control

    Science.gov (United States)

    Males, Jared R.; Guyon, Olivier

    2018-01-01

    The discovery of the exoplanet Proxima b highlights the potential for the coming generation of giant segmented mirror telescopes (GSMTs) to characterize terrestrial-potentially habitable-planets orbiting nearby stars with direct imaging. This will require continued development and implementation of optimized adaptive optics systems feeding coronagraphs on the GSMTs. Such development should proceed with an understanding of the fundamental limits imposed by atmospheric turbulence. Here, we seek to address this question with a semianalytic framework for calculating the postcoronagraph contrast in a closed-loop adaptive optics system. We do this starting with the temporal power spectra of the Fourier basis calculated assuming frozen flow turbulence, and then apply closed-loop transfer functions. We include the benefits of a simple predictive controller, which we show could provide over a factor of 1400 gain in raw point spread function contrast at 1 λ/D on bright stars, and more than a factor of 30 gain on an I=7.5 mag star such as Proxima. More sophisticated predictive control can be expected to improve this even further. Assuming a photon-noise limited observing technique such as high-dispersion coronagraphy, these gains in raw contrast will decrease integration times by the same large factors. Predictive control of atmospheric turbulence should therefore be seen as one of the key technologies that will enable ground-based telescopes to characterize terrestrial planets.

  13. Performance and wake predictions of HAWTs in wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Leclerc, C.; Masson, C.; Paraschivoiu, I. [Ecole Polytechnique, Montreal (Canada)

    1997-12-31

    The present contribution proposes and describes a promising approach to performance prediction for an arbitrary array of turbines. It is based on the solution of the time-averaged, steady-state, incompressible Navier-Stokes equations with an appropriate turbulence closure model. The turbines are represented by distributions of momentum sources in the Navier-Stokes equations. In this paper, the applicability and viability of the proposed methodology are demonstrated using an axisymmetric implementation. The k-{epsilon} model has been chosen for the closure of the time-averaged, turbulent flow equations, and the properties of the incident flow correspond to those of a neutral atmospheric boundary layer. The proposed mathematical model is solved using a Control-Volume Finite Element Method (CVFEM). Detailed results have been obtained using the proposed method for an isolated wind turbine and for two turbines, one behind the other. In the case of an isolated turbine, accurate wake velocity deficit predictions are obtained, and an increase in power due to atmospheric turbulence is found, in agreement with measurements. In the case of two turbines, the proposed methodology provides an appropriate modelling of the wind-turbine wake and a realistic prediction of the performance degradation of the downstream turbine.
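The "turbine as momentum source" representation can be illustrated with classical one-dimensional actuator-disc theory. The sketch below is a closed-form stand-in assuming Betz-optimal induction; the paper itself embeds such source terms in a k-epsilon CVFEM solution rather than using this simple model.

```python
# Minimal 1-D actuator-disc sketch of the momentum-source idea
# (all numbers are illustrative).
rho = 1.225          # air density, kg/m^3
U = 10.0             # free-stream wind speed, m/s
A = 100.0            # disc area, m^2
a = 1.0 / 3.0        # axial induction factor (Betz optimum)

thrust = 2.0 * rho * A * U**2 * a * (1.0 - a)     # momentum sink, N
power = 2.0 * rho * A * U**3 * a * (1.0 - a)**2   # extracted power, W
cp = power / (0.5 * rho * A * U**3)               # power coefficient

print(round(cp, 4))  # 0.5926  (Betz limit, 16/27)
```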

  14. Prediction on carbon dioxide emissions based on fuzzy rules

    Science.gov (United States)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, ranging from simple regression to models based on artificial intelligence. Most of the conventional methods are unable to provide sufficiently good forecasting performance due to the non-linearity, uncertainty, and complexity of the data. Artificial intelligence techniques are successfully used in modeling air quality in order to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
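A minimal sketch of the fuzzy-inference idea, assuming invented membership functions, invented rule outputs, and a single input (the paper's FIS uses the five variables listed above):

```python
# Zero-order Sugeno-style fuzzy sketch: two hypothetical rules mapping
# energy use to a CO2-emissions level.
def tri(x, lo, peak, hi):
    """Triangular membership function."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def predict_co2(energy_use):
    # Rule 1: IF energy use is LOW  THEN emissions = 2.0
    # Rule 2: IF energy use is HIGH THEN emissions = 8.0
    w_low = tri(energy_use, 0.0, 0.0, 100.0)
    w_high = tri(energy_use, 0.0, 100.0, 100.0)
    # Defuzzify as the firing-strength-weighted average of rule outputs.
    return (w_low * 2.0 + w_high * 8.0) / (w_low + w_high)

print(predict_co2(50.0))  # 5.0 (equal firing strengths -> midpoint)
```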

  15. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Antioxidant proteins perform significant functions in maintaining the oxidation/antioxidation balance and have potential therapeutic value for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance, with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance, with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc.

  16. Development of a Mobile Application for Building Energy Prediction Using Performance Prediction Model

    Directory of Open Access Journals (Sweden)

    Yu-Ri Kim

    2016-03-01

    Full Text Available Recently, the Korean government has enforced disclosure of building energy performance, so that such information can help owners and prospective buyers to make suitable investment plans. This building energy performance policy makes it mandatory for building owners to obtain engineering audits and thereby evaluate the energy performance levels of their buildings. However, to calculate energy performance levels (i.e., the asset rating methodology), a qualified expert needs access to at least the full project documentation and/or an on-site inspection of the buildings. Energy performance certification costs considerable time and money. Moreover, the database of certified buildings is still quite small. The need is therefore increasing for a simplified and user-friendly energy performance prediction tool for non-specialists, as well as for a database that allows building owners and users to compare best practices. In this regard, the current study developed a simplified performance prediction model through experimental design, energy simulations, and ANOVA (analysis of variance). Furthermore, using the new prediction model, a related mobile application was also developed.

  17. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...
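Local calibration in its simplest form can be sketched as fitting a bias-correction factor between measured distress and nationally calibrated predictions. The least-squares factor below uses toy numbers; actual MEPDG calibration adjusts several coefficients per distress model.

```python
# Local-calibration sketch: fit one multiplicative factor that scales
# nationally calibrated predictions to local measurements (toy numbers).
measured  = [2.0, 4.1, 6.2, 7.9]   # hypothetical measured distress
predicted = [1.0, 2.0, 3.0, 4.0]   # hypothetical national-model predictions

# Minimizing sum((m - beta * p)^2) gives beta = sum(m*p) / sum(p*p).
beta = sum(m * p for m, p in zip(measured, predicted)) / \
       sum(p * p for p in predicted)
print(round(beta, 3))  # 2.013
```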

  18. A Free Wake Numerical Simulation for Darrieus Vertical Axis Wind Turbine Performance Prediction

    Science.gov (United States)

    Belu, Radian

    2010-11-01

    In the last four decades, several aerodynamic prediction models have been formulated for Darrieus wind turbine performances and characteristics. Two families can be identified: stream-tube and vortex models. The paper presents a simplified numerical technique for simulating vertical axis wind turbine flow, based on lifting line theory and a free vortex wake model, including dynamic stall effects, for predicting the performance of a 3-D vertical axis wind turbine. A vortex model is used in which the wake is composed of trailing stream-wise and shedding span-wise vortices, whose strengths are equal to the change in the bound vortex strength as required by the Helmholtz and Kelvin theorems. Performance parameters are computed by application of the Biot-Savart law along with the Kutta-Joukowski theorem and a semi-empirical stall model. We tested the developed model against an adaptation of the earlier multiple stream-tube performance prediction model for Darrieus turbines. Predictions by our method are shown to compare favorably with existing experimental data and the outputs of other numerical models. The method can accurately predict the local and global performance of a vertical axis wind turbine, and can be used in the design and optimization of wind turbines for built-environment applications.
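The Biot-Savart evaluation at the heart of such free-vortex wake models can be sketched for a single straight filament; the geometry and circulation below are illustrative, and the far-field check compares against the known infinite-line result Γ/(2πh).

```python
import math

# Velocity induced at point p by a straight vortex filament from a to b
# with circulation gamma (a building block of free-vortex wake models).
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def induced_velocity(p, a, b, gamma):
    r1 = [p[i] - a[i] for i in range(3)]
    r2 = [p[i] - b[i] for i in range(3)]
    r1xr2 = cross(r1, r2)
    denom = sum(c * c for c in r1xr2)           # |r1 x r2|^2
    n1, n2 = math.dist(p, a), math.dist(p, b)
    r0 = [b[i] - a[i] for i in range(3)]
    k = gamma / (4 * math.pi * denom) * (
        sum(r0[i] * r1[i] for i in range(3)) / n1 -
        sum(r0[i] * r2[i] for i in range(3)) / n2)
    return [k * c for c in r1xr2]

# Point at unit distance from the midpoint of a long filament along x:
v = induced_velocity((0.0, 1.0, 0.0), (-100.0, 0.0, 0.0), (100.0, 0.0, 0.0), 1.0)
print(round(v[2], 4))  # 0.1591, close to gamma/(2*pi*h) for a near-infinite line
```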

  19. Testing the performance of technical trading rules in the Chinese markets based on superior predictive test

    Science.gov (United States)

    Wang, Shan; Jiang, Zhi-Qiang; Li, Sai-Ping; Zhou, Wei-Xing

    2015-12-01

    Technical trading rules have a long history of use by practitioners in financial markets. The profitability and efficiency of technical trading rules remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and the China Securities Index 300 (CSI 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy can be found, using performance measurements based on return and the Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate whether there exists a trading rule that can significantly outperform the benchmark. The result shows that for the SSCI, technical trading rules offer significant profitability, while for the CSI 300, this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in the sub-series that has exactly the same spanning period as the CSI 300 is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that during the financial bubble from 2005 to 2007, the effectiveness of technical trading rules greatly improved. This is consistent with the predictive ability of technical trading rules, which appears when the market is less efficient.
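A single technical trading rule of the kind tested can be sketched as a moving-average crossover evaluated by mean return and Sharpe ratio. The prices below are toy numbers; the paper evaluates thousands of rule parameterizations and then applies the SPA test to correct for data snooping.

```python
# Moving-average crossover rule on a toy price series.
prices = [10, 11, 12, 11, 12, 13, 14, 13, 14, 15]

def sma(xs, n, t):
    """Simple moving average of the n values ending at index t."""
    return sum(xs[t - n + 1:t + 1]) / n

# Long (signal = 1) when the 2-day SMA is above the 4-day SMA, else flat.
signals = [1 if sma(prices, 2, t) > sma(prices, 4, t) else 0
           for t in range(3, len(prices) - 1)]
returns = [s * (prices[t + 1] / prices[t] - 1)
           for s, t in zip(signals, range(3, len(prices) - 1))]

mean = sum(returns) / len(returns)
var = sum((r - mean) ** 2 for r in returns) / len(returns)
sharpe = mean / var ** 0.5
print(round(mean, 4), round(sharpe, 3))  # 0.0289 0.498
```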

  20. Performance of a process-based hydrodynamic model in predicting shoreline change

    Science.gov (United States)

    Safak, I.; Warner, J. C.; List, J. H.

    2012-12-01

    Shoreline change is controlled by a complex combination of processes that include waves, currents, sediment characteristics and availability, geologic framework, human interventions, and sea level rise. A comprehensive data set of shoreline position (14 shorelines between 1978-2002) along the continuous and relatively non-interrupted North Carolina Coast from Oregon Inlet to Cape Hatteras (65 km) reveals a spatial pattern of alternating erosion and accretion, with an erosional average shoreline change rate of -1.6 m/yr and up to -8 m/yr in some locations. This data set gives a unique opportunity to study long-term shoreline change in an area hit by frequent storm events while relatively uninfluenced by human interventions and the effects of tidal inlets. Accurate predictions of long-term shoreline change may require a model that accurately resolves surf zone processes and sediment transport patterns. Conventional methods for predicting shoreline change such as one-line models and regression of shoreline positions have been designed for computational efficiency. These methods, however, not only have several underlying restrictions (validity for small angle of wave approach, assuming bottom contours and shoreline to be parallel, depth of closure, etc.) but also their empirical estimates of sediment transport rates in the surf zone have been shown to vary greatly from the calculations of process-based hydrodynamic models. We focus on hind-casting long-term shoreline change using components of the process-based, three-dimensional coupled-ocean-atmosphere-wave-sediment transport modeling system (COAWST). COAWST is forced with historical predictions of atmospheric and oceanographic data from public-domain global models. 
Through a method of coupled concurrent grid-refinement approach in COAWST, the finest grid with resolution of O(10 m) that covers the surf zone along the section of interest is forced at its spatial boundaries with waves and currents computed on the grids

  1. Predicting Expressive Dynamics in Piano Performances using Neural Networks

    NARCIS (Netherlands)

    van Herwaarden, Sam; Grachten, Maarten; de Haas, W. Bas

    2014-01-01

    This paper presents a model for predicting expressive accentuation in piano performances with neural networks. Using Restricted Boltzmann Machines (RBMs), features are learned from performance data, after which these features are used to predict performed loudness. During feature learning, data

  2. Prediction of Tennis Performance in Junior Elite Tennis Players

    Directory of Open Access Journals (Sweden)

    Tamara Kramer, Barbara C.H. Huijgen, Marije T. Elferink-Gemser, Chris Visscher

    2017-03-01

    Full Text Available Predicting current and future tennis performance can help improve the development of junior tennis players. The aim of this study is to investigate whether age, maturation, or physical fitness in junior elite tennis players at U13 can explain current and future tennis performance. The value of current tennis performance for future tennis performance is also investigated. A total of 86 junior elite tennis players (boys, n = 44; girls, n = 42) at U13 (aged 12.5 ± 0.3 years), followed to U16, took part in this study. All players were top-30 ranked on the Dutch national ranking list at U13, and top-50 at U16. Age, maturation, and physical fitness were measured at U13. A principal component analysis was used to extract four physical components from eight tests (medicine ball throwing overhead and reverse, ball throwing, SJ, CMJas, sprint 5 and 10 meter, and the spider test). The possible relationship of age, maturation, and the physical components “upper body power”, “lower body power”, “speed”, and “agility” with tennis performance at U13 and U16 was analyzed. Tennis performance was measured using the ranking position on the Dutch national ranking list at U13 and U16. Regression analyses were conducted based on correlations between variables and tennis performance for boys and girls separately. In boys at U13, positive correlations were found between upper body power and tennis performance (R2 = 25%). In girls, positive correlations between maturation and lower body power with tennis performance were found at U13; earlier-maturing players were associated with better tennis performance (R2 = 15%). In girls at U16, only maturation correlated with tennis performance (R2 = 13%); later-maturing girls at U13 had better tennis performances at U16. Measuring junior elite tennis players at U13 is important for monitoring their development. These measurements did not predict future tennis performance of junior elite tennis players three

  3. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction facilitates immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatments, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, due to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design an improved prediction tool for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools formulates a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
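A typical fixed-length input for such an SVM can be sketched as the amino-acid composition of a sliding window over the sequence; the window size and example peptide below are arbitrary.

```python
# Feature sketch: amino-acid composition of a sliding window, the kind of
# fixed-length vector fed to an SVM in linear B-cell epitope predictors.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def window_composition(seq, start, size=9):
    """Fraction of each of the 20 amino acids within one window."""
    window = seq[start:start + size]
    return [window.count(aa) / len(window) for aa in AMINO_ACIDS]

features = window_composition("MKTAYIAKQRQISFVKSHFSRQ", 0)
print(len(features), round(sum(features), 6))  # 20 1.0
```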

  4. Gate valve performance prediction

    International Nuclear Information System (INIS)

    Harrison, D.H.; Damerell, P.S.; Wang, J.K.; Kalsi, M.S.; Wolfe, K.J.

    1994-01-01

    The Electric Power Research Institute is carrying out a program to improve the performance prediction methods for motor-operated valves. As part of this program, an analytical method to predict the stem thrust required to stroke a gate valve has been developed and has been assessed against data from gate valve tests. The method accounts for the loads applied to the disc by fluid flow and for the detailed mechanical interaction of the stem, disc, guides, and seats. To support development of the method, two separate-effects test programs were carried out. One test program determined friction coefficients for contacts between gate valve parts by using material specimens in controlled environments. The other test program investigated the interaction of the stem, disc, guides, and seat using a special fixture with full-sized gate valve parts. The method has been assessed against flow-loop and in-plant test data. These tests include valve sizes from 3 to 18 in. and cover a considerable range of flow, temperature, and differential pressure. Stem thrust predictions from the method bound the measured results. In some cases, the bounding predictions are substantially higher than the stem loads required for valve operation, as a result of the bounding nature of the friction coefficients in the method.

  5. Probability-based collaborative filtering model for predicting gene–disease associations

    OpenAIRE

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-01-01

    Background Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene–disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. Methods We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our mo...

  6. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, and the difficulty in building and training the neural network is also reduced. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.
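The prediction-based control idea can be sketched on the logistic map itself, with the exact map standing in for the Letter's neural-network predictor: predict the next state, then add a small correction that nudges the orbit toward the unstable fixed point.

```python
# Prediction-based control sketch on the logistic map x' = r*x*(1-x).
# (The Letter learns the predictor with a neural network; here the exact
# map stands in for it.)
r = 3.9
f = lambda x: r * x * (1.0 - x)
x_star = 1.0 - 1.0 / r          # unstable fixed point of the map

x, gain = 0.3, 0.9
for n in range(60):
    prediction = f(x)                     # model-based one-step prediction
    u = gain * (x_star - prediction)      # corrective control input
    # Apply control only when the predicted state is near the target:
    x = prediction + u if abs(x_star - prediction) < 0.2 else prediction

print(abs(x - x_star) < 1e-3)  # True: orbit pinned near the fixed point
```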

  7. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1999-01-01

    This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been proposed to be extended for the control...... of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...... on an efficient quasi-Newton algorithm. The performance is demonstrated on a pneumatic servo system....

  8. Predictive validity of pre-admission assessments on medical student performance.

    Science.gov (United States)

    Dabaliz, Al-Awwab; Kaadan, Samy; Dabbagh, M Marwan; Barakat, Abdulaziz; Shareef, Mohammad Abrar; Al-Tannir, Mohamad; Obeidat, Akef; Mohamed, Ayman

    2017-11-24

    To examine the predictive validity of pre-admission variables on students' performance in a medical school in Saudi Arabia. In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included the scores of the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. Those datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables were predictive of progress test performance in the same group. On the other hand, none of the pre-admission variables were predictive of cGPA in clinical years. Overall, cGPA strongly predicted students' progress test performance (p<0.001 and B=19.02). Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid ones that predict student performance and functional knowledge, especially in the clinical years.

  9. Meta-path based heterogeneous combat network link prediction

    Science.gov (United States)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all the six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or reach a certain maximum iteration number. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.
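A common way to compute meta-path features is by multiplying type-specific adjacency matrices; the sketch below counts length-2 paths between two hypothetical node types. The node types and matrices are invented for illustration and are not the six link types defined in the paper.

```python
# Meta-path feature sketch: the number of length-2 paths Sensor -> Decision
# -> Influencer is the (i, j) entry of A_sd @ A_di.
A_sd = [[1, 0], [1, 1]]   # sensor -> decision adjacency (invented)
A_di = [[0, 1], [1, 0]]   # decision -> influencer adjacency (invented)

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

path_counts = matmul(A_sd, A_di)
print(path_counts)  # [[0, 1], [1, 1]]
```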

  10. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  11. Estimating the Performance of Random Forest versus Multiple Regression for Predicting Prices of the Apartments

    Directory of Open Access Journals (Sweden)

    Marjan Čeh

    2018-05-01

    Full Text Available The goal of this study is to analyse the predictive performance of the random forest machine learning technique in comparison to commonly used hedonic models based on multiple regression for the prediction of apartment prices. A data set that includes 7407 records of apartment transactions referring to real estate sales from 2008–2013 in the city of Ljubljana, the capital of Slovenia, was used in order to test and compare the predictive performances of both models. Apparent challenges faced during modelling included (1) the non-linear nature of the prediction assignment task; (2) input data being based on transactions occurring over a period of great price changes in Ljubljana, whereby a 28% decline was noted in six consecutive testing years; and (3) the complex urban form of the case study area. Available explanatory variables, organised as a Geographic Information Systems (GIS)-ready dataset, including the structural and age characteristics of the apartments as well as environmental and neighbourhood information, were considered in the modelling procedure. All performance measures (R2 values, sales ratios, mean average percentage error (MAPE), and coefficient of dispersion (COD)) revealed significantly better results for predictions obtained by the random forest method, which confirms the prospects of this machine learning technique for apartment price prediction.
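Two of the performance measures named above, MAPE and the coefficient of dispersion of sales ratios, can be sketched directly (toy numbers; the study computes them on thousands of transactions):

```python
# Valuation-metric sketch: MAPE and COD on illustrative predictions.
predicted = [100.0, 210.0, 285.0]
actual = [110.0, 200.0, 300.0]

# Mean absolute percentage error.
mape = 100.0 * sum(abs(p - a) / a
                   for p, a in zip(predicted, actual)) / len(actual)

# Coefficient of dispersion: mean absolute deviation of sales ratios
# from their median, relative to the median.
ratios = [p / a for p, a in zip(predicted, actual)]
median = sorted(ratios)[len(ratios) // 2]
cod = 100.0 * sum(abs(r - median) for r in ratios) / (len(ratios) * median)

print(round(mape, 2), round(cod, 2))  # 6.36 4.94
```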

  12. Wavelet-based prediction of oil prices

    International Nuclear Information System (INIS)

    Yousefi, Shahriar; Weinreich, Ilona; Reinarz, Dominik

    2005-01-01

    This paper illustrates an application of wavelets as a possible vehicle for investigating the issue of market efficiency in futures markets for oil. The paper provides a short introduction to the wavelets and a few interesting wavelet-based contributions in economics and finance are briefly reviewed. A wavelet-based prediction procedure is introduced and market data on crude oil is used to provide forecasts over different forecasting horizons. The results are compared with data from futures markets for oil and the relative performance of this procedure is used to investigate whether futures markets are efficiently priced
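A much-simplified version of a wavelet-based forecast can be sketched with one level of the Haar transform: split the series into a smooth approximation and details, extrapolate the smooth part, and take the next detail as zero. This illustrates the decompose-then-extrapolate idea only, not the paper's actual procedure.

```python
# One-level Haar decomposition plus a naive forecast (toy numbers).
series = [50.0, 52.0, 55.0, 57.0, 60.0, 62.0]

# Haar averages (smooth part) and differences (details) of adjacent pairs.
approx = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series), 2)]
detail = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series), 2)]

# Extrapolate the approximation linearly; assume the next detail is zero.
next_approx = 2 * approx[-1] - approx[-2]
forecast = next_approx  # reconstruct approx + detail with detail = 0
print(forecast)  # 66.0
```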

  13. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques have led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on a different rationale, each possessed its own merits and claimed to outperform the others. However, such claims were prone to subjectivity, because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms, and based on the proposed performance indices, we conducted a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs may be strong in some respects but weak in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TF identification algorithms. 
Most importantly, these proposed indices can be easily applied to measure the performance of new

  14. Aerodynamic performance prediction of Darrieus-type wind turbines

    Directory of Open Access Journals (Sweden)

    Ion NILĂ

    2010-06-01

    Full Text Available The prediction of Darrieus wind turbine aerodynamic performances provides the necessary design and operational database related to the wind potential. In this sense, it indicates the type of turbine suitable to the area where it is to be installed. Two calculation methods are analyzed for a rotor with straight blades. The first one is a global method that allows an assessment of the turbine nominal power by a brief calculation. This method leads to an overestimation of performances. The second is the gust factor and momentum calculation method, which treats the blade as being composed of different elements that do not influence each other. This method, developed based on the theory of turbine blades, leads to values close to the statistical data obtained experimentally. The values obtained by the gust factor-momentum calculation method led to the concept of a Darrieus turbine, which will be tested for different wind values in the INCAS subsonic wind tunnel.

  15. Prediction-based dynamic load-sharing heuristics

    Science.gov (United States)

    Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.

    1993-01-01

    The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
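The core placement decision of a prediction-based heuristic can be sketched as choosing the node with the smallest predicted finish time. The numbers are illustrative; the paper derives demand predictions from past resource-usage patterns via statistical pattern recognition.

```python
# Load-sharing sketch: place an incoming process on the node whose predicted
# finish time (current load plus the process's predicted CPU demand) is
# smallest, rather than reacting to instantaneous load alone.
node_load = {"n1": 4.0, "n2": 1.5, "n3": 3.0}   # predicted remaining work
predicted_demand = 2.0                          # forecast for new process

def place(process_demand, loads):
    """Return the node minimizing predicted completion time."""
    return min(loads, key=lambda n: loads[n] + process_demand)

target = place(predicted_demand, node_load)
node_load[target] += predicted_demand
print(target, node_load[target])  # n2 3.5
```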

  16. The wind power prediction research based on mind evolutionary algorithm

    Science.gov (United States)

    Zhuang, Ling; Zhao, Xinjian; Ji, Tianming; Miao, Jingwen; Cui, Haina

    2018-04-01

    When wind power is connected to the power grid, its fluctuating, intermittent and random characteristics affect the stability of the power system. Wind power prediction can guarantee power quality and reduce the operating cost of the power system. Several traditional wind power prediction methods have notable limitations. On this basis, a wind power prediction method based on the Mind Evolutionary Algorithm (MEA) is put forward and a prediction model is provided. The experimental results demonstrate that MEA performs efficiently in terms of wind power prediction. The MEA method has broad prospects for engineering application.

  17. Performance prediction method for a multi-stage Knudsen pump

    Science.gov (United States)

    Kugimoto, K.; Hirota, Y.; Kizaki, Y.; Yamaguchi, H.; Niimi, T.

    2017-12-01

    In this study, the novel method to predict the performance of a multi-stage Knudsen pump is proposed. The performance prediction method is carried out in two steps numerically with the assistance of a simple experimental result. In the first step, the performance of a single-stage Knudsen pump was measured experimentally under various pressure conditions, and the relationship of the mass flow rate was obtained with respect to the average pressure between the inlet and outlet of the pump and the pressure difference between them. In the second step, the performance of a multi-stage pump was analyzed by a one-dimensional model derived from the mass conservation law. The performances predicted by the 1D-model of 1-stage, 2-stage, 3-stage, and 4-stage pumps were validated by the experimental results for the corresponding number of stages. It was concluded that the proposed prediction method works properly.
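    The second step above follows from mass conservation: in steady state every stage of a series pump must carry the same mass flow, so the total pressure rise is shared among identical stages. A minimal sketch with an assumed linear single-stage characteristic (the paper measures the real characteristic, which also depends on the average pressure, experimentally; the constants below are illustrative):

    ```python
    # One-dimensional series-stage model: with identical stages and an assumed
    # linear characteristic m_dot = M_MAX * (1 - dp / DP_MAX), equal mass flow
    # through every stage implies an equal per-stage pressure rise.
    # M_MAX and DP_MAX are illustrative assumptions, not measured values.

    M_MAX = 1.0e-8   # kg/s at zero pressure difference (assumed)
    DP_MAX = 400.0   # Pa pressure difference at zero flow, one stage (assumed)

    def mass_flow(n_stages, total_dp):
        """Mass flow of n identical stages in series at a total pressure rise."""
        dp_per_stage = total_dp / n_stages   # mass conservation => equal split
        return M_MAX * (1.0 - dp_per_stage / DP_MAX)

    one = mass_flow(1, 300.0)    # a single stage near stall
    four = mass_flow(4, 300.0)   # the same rise shared by four stages
    ```

    The sketch reproduces the qualitative result of multi-staging: at the same total pressure rise, four stages sustain a much larger flow than one.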

  18. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural-network-based Adaptive Neuro-Fuzzy Inference Systems for hydrogen flow rate, electrolyzer system efficiency and stack efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias to either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  19. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  20. Do Maximal Roller Skiing Speed and Double Poling Performance Predict Youth Cross-Country Skiing Performance?

    Directory of Open Access Journals (Sweden)

    Roland Stöggl, Erich Müller, Thomas Stöggl

    2017-09-01

    Full Text Available The aims of the current study were to analyze whether specific roller skiing tests and cycle length are determinants of youth cross-country (XC) skiing performance, and to evaluate sex-specific differences by applying non-invasive diagnostics. Forty-nine young XC skiers (33 boys, 13.8 ± 0.6 yrs, and 16 girls, 13.4 ± 0.9 yrs) performed roller skiing tests consisting of both shorter (50 m) and longer durations (575 m). Test results were correlated with on-snow XC skiing performance (PXC) based on 3 skating and 3 classical distance competitions (3 to 6 km). The main findings of the current study were: (1) anthropometrics and maturity status were related to boys', but not to girls', PXC; (2) significant moderate to acceptable correlations between girls' and boys' short-duration maximal roller skiing speed (double poling, V2 skating, leg skating) and PXC were found; (3) boys' PXC was best predicted by double poling test performance on flat and uphill terrain, while girls' performance was mainly predicted by uphill double poling test performance; (4) when controlling for maturity offset, boys' PXC was still highly associated with the roller skiing tests. The use of simple non-invasive roller skiing tests for determination of PXC represents practicable support for ski clubs, schools or skiing federations in the guidance and evaluation of young talent.

  1. Examining the Roles of Reasoning and Working Memory in Predicting Casual Game Performance across Extended Gameplay

    Science.gov (United States)

    Kranz, Michael B.; Baniqued, Pauline L.; Voss, Michelle W.; Lee, Hyunkyu; Kramer, Arthur F.

    2017-01-01

    The variety and availability of casual video games presents an exciting opportunity for applications such as cognitive training. Casual games have been associated with fluid abilities such as working memory (WM) and reasoning, but the importance of these cognitive constructs in predicting performance may change across extended gameplay and vary with game structure. The current investigation examined the relationship between cognitive abilities and casual game performance over time by analyzing first and final session performance over 4–5 weeks of game play. We focused on two groups of subjects who played different types of casual games previously shown to relate to WM and reasoning when played for a single session: (1) puzzle-based games played adaptively across sessions and (2) speeded switching games played non-adaptively across sessions. Reasoning uniquely predicted first session casual game scores for both groups and accounted for much of the relationship with WM. Furthermore, over time, WM became uniquely important for predicting casual game performance for the puzzle-based adaptive games but not for the speeded switching non-adaptive games. These results extend the burgeoning literature on cognitive abilities involved in video games by showing differential relationships of fluid abilities across different game types and extended play. More broadly, the current study illustrates the usefulness of using multiple cognitive measures in predicting performance, and provides potential directions for game-based cognitive training research. PMID:28326042

  3. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available Dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after the service performance has already degraded, and therefore lag behind. To solve this problem, a new prediction model of CPU utilization is constructed in this paper. The new model offers a reference for the VM dynamic deployment process, allowing deployment to complete before the service performance degrades. In this way it not only ensures the quality of service but also improves server performance and resource utilization. The new CPU-utilization prediction method based on the ARIMA-BP neural network mainly includes four parts: preprocessing the collected data, building the ARIMA-BP neural network prediction model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by a comprehensive analysis of the above data.
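    The hybrid decomposition above (a linear model for the trend, a second model for its nonlinear residuals) can be sketched in a few lines. A plain AR(2) least-squares fit stands in for ARIMA, and a trivial zero correction stands in for the BP network; both stand-ins and the synthetic series are our assumptions, not the paper's implementation:

    ```python
    # Hybrid forecast = linear model + residual correction, sketched with an
    # AR(2) fit (stand-in for ARIMA) and a pluggable residual model (stand-in
    # for the BP network).  The series below is synthetic and noiseless.

    def fit_ar2(x):
        """Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] (2x2 normal equations)."""
        s11 = s12 = s22 = r1 = r2 = 0.0
        for t in range(2, len(x)):
            s11 += x[t - 1] * x[t - 1]
            s12 += x[t - 1] * x[t - 2]
            s22 += x[t - 2] * x[t - 2]
            r1 += x[t] * x[t - 1]
            r2 += x[t] * x[t - 2]
        det = s11 * s22 - s12 * s12
        return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

    def hybrid_forecast(x, residual_correction):
        a, b = fit_ar2(x)
        return a * x[-1] + b * x[-2] + residual_correction(x)

    x = [0.30, 0.40]                       # synthetic utilization-like series
    for _ in range(30):
        x.append(0.5 * x[-1] + 0.3 * x[-2])

    a, b = fit_ar2(x)                      # recovers a=0.5, b=0.3 on clean data
    forecast = hybrid_forecast(x, lambda hist: 0.0)
    ```

    In the paper's scheme, the residual correction is a trained BP network rather than the zero function, so the combined forecast captures structure the linear model misses.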

  4. Predictive Power of Machine Learning for Optimizing Solar Water Heater Performance: The Potential Application of High-Throughput Screening

    Directory of Open Access Journals (Sweden)

    Hao Li

    2017-01-01

    Full Text Available Predicting the performance of a solar water heater (SWH) is challenging due to the complexity of the system. Fortunately, knowledge-based machine learning can provide a fast and precise prediction method for SWH performance. With the predictive power of machine learning models, we can further address a more challenging question: how to cost-effectively design a high-performance SWH? Here, we summarize our recent studies and propose a general framework for SWH design using a machine-learning-based high-throughput screening (HTS) method. The design of a water-in-glass evacuated tube solar water heater (WGET-SWH) is selected as a case study to show the potential application of machine-learning-based HTS to the design and optimization of solar energy systems.
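    The HTS loop described above can be sketched as: fit a cheap surrogate on a handful of measured designs, score a large pool of candidates with it, and keep only the top-ranked ones for real testing. The single design variable, the toy "measurement" function and the 1-nearest-neighbour surrogate are illustrative assumptions (the papers use ANN models of water-in-glass heaters with many design variables):

    ```python
    # Machine-learning-based high-throughput screening, minimal sketch.
    # measure() stands in for an expensive experiment; the surrogate ranks a
    # large candidate pool so only the best designs need real measurement.

    def measure(tilt):                      # toy ground truth (assumed)
        return 1.0 - (tilt - 0.6) ** 2      # heat gain peaks at tilt = 0.6

    training = [(t / 10.0, measure(t / 10.0)) for t in range(0, 11, 2)]

    def surrogate(tilt):                    # 1-NN prediction from measured data
        return min(training, key=lambda d: abs(d[0] - tilt))[1]

    candidates = [i / 100.0 for i in range(101)]   # high-throughput pool
    best = max(candidates, key=surrogate)          # screen, keep the top design
    ```

    The design choice here is the usual HTS trade-off: the surrogate is far cheaper than the experiment, so screening a hundred candidates costs almost nothing compared to measuring even one.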

  5. Falling in the elderly: Do statistical models matter for performance criteria of fall prediction? Results from two large population-based studies.

    Science.gov (United States)

    Kabeshova, Anastasiia; Launay, Cyrille P; Gromov, Vasilii A; Fantino, Bruno; Levinoff, Elise J; Allali, Gilles; Beauchet, Olivier

    2016-01-01

    To compare performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value, area under the receiver operating characteristic curve and accuracy) of linear and non-linear statistical models for fall risk in older community-dwellers. Participants were recruited in two large population-based studies, "Prévention des Chutes, Réseau 4" (PCR4, n=1760, cross-sectional design, retrospective collection of falls) and "Prévention des Chutes Personnes Agées" (PCPA, n=1765, cohort design, prospective collection of falls). Six linear statistical models (i.e., logistic regression, discriminant analysis, Bayes network algorithm, decision tree, random forest, boosted trees), three non-linear statistical models corresponding to artificial neural networks (multilayer perceptron, genetic algorithm and neuroevolution of augmenting topologies [NEAT]) and the adaptive neuro-fuzzy inference system (ANFIS) were used. Falls ≥1 characterizing fallers and falls ≥2 characterizing recurrent fallers were used as outcomes. Data of studies were analyzed separately and together. NEAT and ANFIS had better performance criteria compared to other models. The highest performance criteria were reported with NEAT when using the PCR4 database and falls ≥1, and with both NEAT and ANFIS when pooling data together and using falls ≥2. However, sensitivity and specificity were unbalanced. Sensitivity was higher than specificity when identifying fallers, whereas the converse was found when predicting recurrent fallers. Our results showed that NEAT and ANFIS were non-linear statistical models with the best performance criteria for the prediction of falls, but their sensitivity and specificity were unbalanced, underscoring that the models should be used respectively for the screening of fallers and the diagnosis of recurrent fallers. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  6. Balancing Model Performance and Simplicity to Predict Postoperative Primary Care Blood Pressure Elevation.

    Science.gov (United States)

    Schonberger, Robert B; Dai, Feng; Brandt, Cynthia A; Burg, Matthew M

    2015-09-01

    Because of uncertainty regarding the reliability of perioperative blood pressures and traditional notions downplaying the role of anesthesiologists in longitudinal patient care, there is no consensus for anesthesiologists to recommend postoperative primary care blood pressure follow-up for patients presenting for surgery with an increased blood pressure. The decision of whom to refer should ideally be based on a predictive model that balances performance with ease-of-use. If an acceptable decision rule was developed, a new practice paradigm integrating the surgical encounter into broader public health efforts could be tested, with the goal of reducing long-term morbidity from hypertension among surgical patients. Using national data from US veterans receiving surgical care, we determined the prevalence of poorly controlled outpatient clinic blood pressures ≥140/90 mm Hg, based on the mean of up to 4 readings in the year after surgery. Four increasingly complex logistic regression models were assessed to predict this outcome. The first included the mean of 2 preoperative blood pressure readings; other models progressively added a broad array of demographic and clinical data. After internal validation, the C-statistics and the Net Reclassification Index between the simplest and most complex models were assessed. The performance characteristics of several simple blood pressure referral thresholds were then calculated. Among 215,621 patients, poorly controlled outpatient clinic blood pressure was present postoperatively in 25.7% (95% confidence interval [CI], 25.5%-25.9%) including 14.2% (95% CI, 13.9%-14.6%) of patients lacking a hypertension history. The most complex prediction model demonstrated statistically significant, but clinically marginal, improvement in discrimination over a model based on preoperative blood pressure alone (C-statistic, 0.736 [95% CI, 0.734-0.739] vs 0.721 [95% CI, 0.718-0.723]; P for difference 1 of 4 patients (95% CI, 25

  7. Microarray-based cancer prediction using soft computing approach.

    Science.gov (United States)

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancerous gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, for they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
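    A gene-pair decision rule of the kind such simple models use can be written as a single comparison of relative expression. The gene pair, the threshold-free rule and the samples below are fabricated for illustration only, not taken from the paper's datasets:

    ```python
    # Toy relative-expression rule: predict "tumor" when gene A is expressed
    # above gene B, "normal" otherwise.  Real models select the pair from
    # thousands of genes; this pair and these samples are made up.

    def gene_pair_rule(sample, gene_a="A", gene_b="B"):
        return "tumor" if sample[gene_a] > sample[gene_b] else "normal"

    samples = [
        ({"A": 5.1, "B": 2.0}, "tumor"),
        ({"A": 1.2, "B": 3.3}, "normal"),
        ({"A": 4.0, "B": 3.9}, "tumor"),
    ]
    accuracy = sum(gene_pair_rule(s) == y for s, y in samples) / len(samples)
    ```

    Because the rule compares two genes within the same sample, it is invariant to per-sample scaling of the microarray, which is part of why such minimal models can be robust.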

  8. Predicting Performance Ratings Using Motivational Antecedents

    National Research Council Canada - National Science Library

    Zazania, Michelle

    1998-01-01

    This research examined the role of motivation in predicting peer and trainer ratings of student performance and contrasted the relative importance of various antecedents for peer and trainer ratings...

  9. Determination of Constructs and Dimensions of Employability Skills Based Work Performance Prediction: A Triangular Approach

    OpenAIRE

    Rahmat, Normala; Buntat, Yahya; Ayub, Abdul Rahman

    2015-01-01

    The level of the employability skills of the graduates as determined by job role and mapped to the employability skills, which correspond to the requirement of employers, will have significant impact on the graduates’ job performance. The main objective of this study was to identify the constructs and dimensions of employability skills, which can predict the work performance of electronic polytechnic graduate in electrical and electronics industry. A triangular qualitative approach was used i...

  10. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  11. Deep-Learning-Based Drug-Target Interaction Prediction.

    Science.gov (United States)

    Wen, Ming; Zhang, Zhimin; Niu, Shaoyu; Sha, Haozhi; Yang, Ruihan; Yun, Yonghuan; Lu, Hongmei

    2017-04-07

    Identifying interactions between known drugs and targets is a major challenge in drug repositioning. In silico prediction of drug-target interaction (DTI) can speed up the expensive and time-consuming experimental work by providing the most potent DTIs. In silico prediction of DTI can also provide insights about potential drug-drug interactions and promote the exploration of drug side effects. Traditionally, the performance of DTI prediction depends heavily on the descriptors used to represent the drugs and the target proteins. In this paper, to accurately predict new DTIs between approved drugs and targets without separating the targets into different classes, we developed a deep-learning-based algorithmic framework named DeepDTIs. It first abstracts representations from raw input descriptors using unsupervised pretraining and then applies known label pairs of interaction to build a classification model. Compared with other methods, DeepDTIs is found to match or outperform other state-of-the-art methods. DeepDTIs can further be used to predict whether a new drug targets some existing targets or whether a new target interacts with some existing drugs.

  12. Esophageal cancer prediction based on qualitative features using adaptive fuzzy reasoning method

    Directory of Open Access Journals (Sweden)

    Raed I. Hamed

    2015-04-01

    Full Text Available Esophageal cancer is one of the most common cancers worldwide and among the leading causes of cancer death. In this paper, we present an adaptive fuzzy reasoning algorithm for rule-based systems using fuzzy Petri nets (FPNs), in which the fuzzy production rules are represented by an FPN. We developed an adaptive fuzzy Petri net (AFPN) reasoning algorithm as a prognostic system to predict the outcome of esophageal cancer based on the serum concentrations of C-reactive protein and albumin as a set of input variables. The system can perform fuzzy reasoning automatically to evaluate the degree of truth of the proposition representing the risk degree value, with a weight value that is optimally tuned based on the observed data. In addition, the implementation process for esophageal cancer prediction is fuzzily deduced by the AFPN algorithm. Performance of the composite model is evaluated through a set of experiments. Simulations and experimental results demonstrate the effectiveness and performance of the proposed algorithms. A comparison of the predictive performance of the AFPN models with other methods, together with the analysis of the corresponding curves, showed consistent results and an intuitive behavior of the AFPN models.
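    A single FPN reasoning step can be sketched as firing one production rule: the output token's degree of truth is the minimum of the input degrees scaled by the rule's certainty factor. The min/product combination is a common FPN convention and the weight below is an illustrative fixed value, whereas the paper tunes its weights adaptively from observed data:

    ```python
    # One fuzzy-Petri-net-style rule firing: IF CRP is high AND albumin is low
    # THEN risk is high, with a certainty factor on the rule.  The degrees of
    # truth and the certainty factor below are illustrative assumptions.

    def fire_rule(input_truths, certainty):
        """Degree of truth of the consequent after firing the transition."""
        return min(input_truths) * certainty

    crp_high = 0.8       # degree of truth: C-reactive protein is high
    albumin_low = 0.6    # degree of truth: albumin is low
    risk_high = fire_rule([crp_high, albumin_low], certainty=0.9)
    ```

    Chaining such firings through the net propagates degrees of truth from the input places (serum markers) to the output place (risk degree), which is what the AFPN automates.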

  13. New Temperature-based Models for Predicting Global Solar Radiation

    International Nuclear Information System (INIS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Mohamed, Zahraa E.; Ali, Mohamed A.; Hanafy, Ahmed A.

    2016-01-01

    Highlights: • New temperature-based models for estimating solar radiation are investigated. • The models are validated against 20-years measured data of global solar radiation. • The new temperature-based model shows the best performance for coastal sites. • The new temperature-based model is more accurate than the sunshine-based models. • The new model is highly applicable with weather temperature forecast techniques. - Abstract: This study presents new ambient-temperature-based models for estimating global solar radiation as alternatives to the widely used sunshine-based models owing to the unavailability of sunshine data at all locations around the world. Seventeen new temperature-based models are established, validated and compared with other three models proposed in the literature (the Annandale, Allen and Goodin models) to estimate the monthly average daily global solar radiation on a horizontal surface. These models are developed using a 20-year measured dataset of global solar radiation for the case study location (Lat. 30°51′N and long. 29°34′E), and then, the general formulae of the newly suggested models are examined for ten different locations around Egypt. Moreover, the local formulae for the models are established and validated for two coastal locations where the general formulae give inaccurate predictions. Mostly common statistical errors are utilized to evaluate the performance of these models and identify the most accurate model. The obtained results show that the local formula for the most accurate new model provides good predictions for global solar radiation at different locations, especially at coastal sites. Moreover, the local and general formulas of the most accurate temperature-based model also perform better than the two most accurate sunshine-based models from the literature. The quick and accurate estimations of the global solar radiation using this approach can be employed in the design and evaluation of performance for
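    The temperature-based family the abstract builds on (including the Annandale and Allen variants it compares against) descends from the Hargreaves-Samani relation, which estimates global solar radiation from extraterrestrial radiation and the diurnal temperature range. A minimal sketch, where the coefficient is the commonly cited interior-site default and the input values are illustrative rather than measured:

    ```python
    import math

    # Hargreaves-Samani-type estimate: Rs = k * sqrt(Tmax - Tmin) * Ra.
    # k = 0.16 (interior sites) / 0.19 (coastal sites) are the usual defaults;
    # Ra and the temperatures below are illustrative assumptions.

    def global_solar_radiation(ra, t_max, t_min, k=0.16):
        """Estimate Rs (same units as ra) from the temperature range in deg C."""
        return k * math.sqrt(t_max - t_min) * ra

    rs = global_solar_radiation(ra=35.0, t_max=31.0, t_min=19.0)  # MJ/m^2/day
    ```

    The appeal of this form, as the abstract notes, is that daily temperature extremes are available (and forecastable) almost everywhere, unlike sunshine-duration records.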

  14. Prediction of Job Performance: Review of Military Studies

    Science.gov (United States)

    1982-03-01

    an assessment center to predict field leadership performance of Army officers and NCOs. Proceedings of the 19th Annual Military Testing Association...C. Behaviors, results, and organizational effectiveness: The problem of criteria. In Dunnette, M. D. (Ed.), Handbook of Industrial and organizational ...than for the Navy enlisted group. 30. Dyer, F. N., & Hilligoss, R. Z. Using an assessment center to predict field leadership performance of Army

  15. The relative importance of managerial competencies for predicting the perceived job performance of Broad-Based Black Economic Empowerment verification practitioners

    Directory of Open Access Journals (Sweden)

    Barbara M. Seate

    2016-04-01

    Full Text Available Orientation: There is a need for the growing Broad-Based Black Economic Empowerment (B-BBEE) verification industry to assess competencies and determine skills gaps for the management of the verification practitioners’ perceived job performance. Knowing which managerial competencies are important for different managerial functions is vital for developing and improving training and development programmes. Research purpose: The purpose of this study was to determine the managerial capabilities that are required of the B-BBEE verification practitioners, in order to improve their perceived job performance. Motivation for the study: The growing number of the B-BBEE verification practitioners calls for more focused training and development. Generating such a training and development programme demands empirical research into the relative importance of managerial competencies. Research approach, design and method: A quantitative design using the survey approach was adopted. A questionnaire was administered to a stratified sample of 87 B-BBEE verification practitioners. Data were analysed using the Statistical Package for Social Sciences (version 22.0) and Smart Partial Least Squares software. Main findings: The results of the correlation analysis revealed that there were strong and positive associations between technical skills, interpersonal skills, compliance to standards and ethics, managerial skills and perceived job performance. Results of the regression analysis showed that managerial skills, compliance to standards and ethics and interpersonal skills were statistically significant in predicting perceived job performance. However, technical skills were insignificant in predicting perceived job performance. Practical/managerial implications: The study has shown that the B-BBEE verification industry, insofar as the technical skills of the practitioners are concerned, does have suitably qualified staff with the requisite educational qualifications.

  16. Stringent DDI-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions.

    Science.gov (United States)

    Zhou, Hufeng; Rezaei, Javad; Hugo, Willy; Gao, Shangzhi; Jin, Jingjing; Fan, Mengyuan; Yong, Chern-Han; Wozniak, Michal; Wong, Limsoon

    2013-01-01

    H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data provide very important information for illuminating the infection mechanism of M. tuberculosis H37Rv. However, current H. sapiens-M. tuberculosis H37Rv PPI data are very scarce, which seriously limits the study of the interaction between this important pathogen and its host, H. sapiens. Computational prediction of H. sapiens-M. tuberculosis H37Rv PPIs is an important strategy to fill this gap. Domain-domain interaction (DDI) based prediction is one of the frequently used computational approaches for predicting both intra-species and inter-species PPIs. However, the performance of DDI-based host-pathogen PPI prediction has been rather limited. We develop a stringent DDI-based prediction approach with emphasis on (i) differences between the specific domain sequences on annotated regions of proteins under the same domain ID and (ii) calculation of the interaction strength of predicted PPIs based on the interacting residues in their interaction interfaces. We compare our stringent DDI-based approach to a conventional DDI-based approach for predicting PPIs based on gold-standard intra-species PPIs and a coherent informative Gene Ontology terms assessment. The assessment results show that our stringent DDI-based approach achieves much better performance in predicting PPIs than the conventional approach. Using our stringent DDI-based approach, we have predicted a small set of reliable H. sapiens-M. tuberculosis H37Rv PPIs which could be very useful for a variety of related studies. We also analyze the H. sapiens-M. tuberculosis H37Rv PPIs predicted by our stringent DDI-based approach using cellular compartment distribution analysis, functional category enrichment analysis and pathway enrichment analysis. The analyses support the validity of our prediction result. Also, based on an analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent DDI-based approach, we have discovered some
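    The conventional DDI-based rule the authors start from reduces to a set lookup: two proteins are predicted to interact if any pair of their domains is a known interacting domain pair. The Pfam-style IDs and the tiny DDI set below are fabricated placeholders; the stringent approach additionally compares the actual domain sequences and scores interface residues, which is omitted here:

    ```python
    # Conventional DDI-based PPI prediction, minimal sketch: predict an
    # interaction when any (domain, domain) pair of the two proteins appears
    # in the known-DDI set.  IDs and the DDI set are illustrative placeholders.

    known_ddis = {("PF00069", "PF00017"), ("PF00018", "PF07714")}

    def predict_ppi(domains_a, domains_b):
        return any((da, db) in known_ddis or (db, da) in known_ddis
                   for da in domains_a for db in domains_b)

    hit = predict_ppi({"PF00069", "PF00001"}, {"PF00017"})   # shares a known DDI
    miss = predict_ppi({"PF00001"}, {"PF00002"})             # no known DDI
    ```

    Treating every protein annotated with a domain ID as equivalent is exactly the coarseness the stringent approach addresses by looking at the specific domain sequences behind each ID.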

  17. Novel prediction- and subblock-based algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Chung, K.-L.; Hsu, C.-H.

    2006-01-01

    Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray-level image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each current variable-size range block tries to find the best-matched domain block using the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks. The first phase leads to a significant computation-saving effect. If the domain block found within the predicted search space is unacceptable, in the second phase a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that our proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, the performance comparison among our proposed algorithm and two other algorithms, the no-search-based algorithm and the quadtree-based algorithm, is also investigated

  18. Blind Test of Physics-Based Prediction of Protein Structures

    Science.gov (United States)

    Shell, M. Scott; Ozkan, S. Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A.

    2009-01-01

    We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences. PMID:19186130

  19. Yield performance and stability of CMS-based triticale hybrids.

    Science.gov (United States)

    Mühleisen, Jonathan; Piepho, Hans-Peter; Maurer, Hans Peter; Reif, Jochen Christoph

    2015-02-01

    CMS-based triticale hybrids showed only marginal midparent heterosis for grain yield and lower dynamic yield stability compared to inbred lines. Hybrids of triticale (×Triticosecale Wittmack) are expected to possess outstanding yield performance and increased dynamic yield stability. The objectives of the present study were to (1) examine the optimum choice of the biometrical model to compare yield stability of hybrids versus lines, (2) investigate whether hybrids exhibit a more pronounced grain yield performance and yield stability, and (3) study optimal strategies to predict yield stability of hybrids. Thirteen female and seven male parental lines and their 91 factorial hybrids, as well as 30 commercial lines, were evaluated for grain yield in up to 20 environments. Hybrids were produced using a cytoplasmic male sterility (CMS)-inducing cytoplasm that originated from Triticum timopheevii Zhuk. We found that the choice of the biometrical model can cause contrasting results and concluded that a group-by-environment interaction term should be added to the model when estimating the stability variance of hybrids and lines. Midparent heterosis for grain yield was on average 3%, with a range from -15.0 to 11.5%. No hybrid outperformed the best inbred line. Hybrids had, on average, lower dynamic yield stability compared to the inbred lines. Grain yield performance of hybrids could be predicted based on midparent values and general combining ability (GCA)-predicted values. In contrast, the stability variance of hybrids could be predicted only from GCA-predicted values. We speculated that negative effects of the CMS cytoplasm used might be the reason for the low performance and yield stability of the hybrids. For this purpose, a detailed study of the reasons for the drawbacks of the currently existing CMS system in triticale is urgently required, also comprising the search for potential alternative hybridization systems.

  20. Trust-based collective view prediction

    CERN Document Server

    Luo, Tiejian; Xu, Guandong; Zhou, Jia

    2013-01-01

Collective view prediction judges the opinions of an active web user on unknown elements by referring to the collective mind of the whole community. Content-based recommendation and collaborative filtering are two mainstream collective view prediction techniques. They generate predictions by analyzing the text features of the target object or the similarity of users' past behaviors. Still, these techniques are vulnerable to artificially injected noise data, because they are not able to judge the reliability and credibility of the information sources. Trust-based Collective View

  1. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Meng Li

    2015-01-01

Full Text Available This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, it is compared with similar conventional models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
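The phase-space reconstruction step whose parameters (τ, m) the membrane algorithm tunes can be sketched as a Takens-style delay embedding. The function below is a minimal illustration, with τ and m supplied by hand rather than by the optimizer:

```python
def delay_embed(series, m, tau):
    """Phase-space reconstruction: map a scalar series into m-dimensional
    delay vectors [x_t, x_{t-tau}, ..., x_{t-(m-1)*tau}] (Takens embedding).
    In the paper's model these vectors would be the LS-SVM inputs."""
    start = (m - 1) * tau
    return [[series[t - k * tau] for k in range(m)]
            for t in range(start, len(series))]
```

For a series of length n the embedding yields n − (m − 1)τ vectors; the optimizer's job is to pick (τ, m) so these vectors unfold the attractor well.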

  2. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    International Nuclear Information System (INIS)

    Ko, P; Kurosawa, S

    2014-01-01

The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  3. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    Science.gov (United States)

    Ko, P.; Kurosawa, S.

    2014-03-01

The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume-of-fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  4. Mean streamline analysis for performance prediction of cross-flow fans

    International Nuclear Information System (INIS)

    Kim, Jae Won; Oh, Hyoung Woo

    2004-01-01

This paper presents a mean streamline analysis using empirical loss correlations for performance prediction of cross-flow fans. A comparison of overall performance predictions with test data from a cross-flow fan system with a simplified vortex wall scroll casing, and with published experimental characteristics for a cross-flow fan, has been carried out to demonstrate the accuracy of the proposed method. Performance curves predicted by the present mean streamline analysis agree well with experimental data for two different cross-flow fans over the normal operating conditions. The prediction method presented herein can be used efficiently as a tool for the preliminary design and performance analysis of general-purpose cross-flow fans.

  5. Network-based ranking methods for prediction of novel disease associated microRNAs.

    Science.gov (United States)

    Le, Duc-Hau

    2015-10-01

Many studies have shown roles of microRNAs in human disease, and a number of computational methods have been proposed to predict such associations by ranking candidate microRNAs according to their relevance to a disease. Among them, machine learning-based methods usually have a limitation in specifying non-disease microRNAs as negative training samples. Meanwhile, network-based methods are becoming dominant since they exploit well the "disease module" principle in microRNA functional similarity networks. Among these, the random walk with restart (RWR) algorithm-based method is currently the state of the art. The use of this algorithm was inspired by its success in predicting disease genes, because the "disease module" principle also exists in protein interaction networks. Besides, many algorithms designed for webpage ranking have been successfully applied in ranking disease candidate genes, because web networks share topological properties with protein interaction networks. However, these algorithms have not yet been utilized for disease microRNA prediction. We constructed microRNA functional similarity networks based on shared targets of microRNAs, and then integrated them with a recently identified microRNA functional synergistic network. After analyzing the topological properties of these networks, in addition to RWR, we assessed the performance of (i) PRINCE (PRIoritizatioN and Complex Elucidation), which was proposed for disease gene prediction; (ii) PageRank with Priors (PRP) and K-Step Markov (KSM), which were used for studying web networks; and (iii) a neighborhood-based algorithm. Analyses of topological properties showed that all microRNA functional similarity networks are small-world and scale-free. The performance of each algorithm was assessed based on average AUC values on 35 disease phenotypes and average rankings of newly discovered disease microRNAs. As a result, the performance on the integrated network was better than that on the individual ones.
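The random walk with restart procedure underlying the state-of-the-art method discussed above can be sketched on a toy undirected network. Node names, the restart probability, and the iteration count below are illustrative, not taken from the paper:

```python
def rwr_rank(adj, seeds, restart=0.5, iters=100):
    """Random walk with restart on an undirected network given as
    {node: [neighbours]}; `seeds` are known disease microRNAs.
    Returns candidates ranked by their visiting probability."""
    # Initial probability mass is spread uniformly over the seed nodes.
    p0 = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in adj}
    p = dict(p0)
    deg = {v: len(adj[v]) for v in adj}
    for _ in range(iters):
        # Walk step: incoming mass from neighbours, plus restart to seeds.
        p = {v: (1 - restart) * sum(p[u] / deg[u] for u in adj[v])
                + restart * p0[v]
             for v in adj}
    return sorted(adj, key=p.get, reverse=True)
```

On a path graph seeded at one end, nodes closer to the seed receive more probability mass, which is exactly the "disease module" intuition the abstract describes.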

  6. Predicting university performance in psychology: the role of previous performance and discipline-specific knowledge

    OpenAIRE

    Betts, LR; Elder, TJ; Hartley, J; Blurton, A

    2008-01-01

    Recent initiatives to enhance retention and widen participation ensure it is crucial to understand the factors that predict students' performance during their undergraduate degree. The present research used Structural Equation Modeling (SEM) to test three separate models that examined the extent to which British Psychology students' A-level entry qualifications predicted: (1) their performance in years 1-3 of their Psychology degree, and (2) their overall degree performance. Students' overall...

  7. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  8. TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests

    International Nuclear Information System (INIS)

    Gertsch, R.; Ozdemir, L.; Gertsch, L.

    1992-01-01

This paper discusses performance predictions which were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimates based on rock physical properties and past tunneling experience in rock formations of similar strength.
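The estimation chain described above (per-cutter forces scaled up to machine thrust, torque, and power) can be sketched with first-order formulas. The formulas and numbers below are illustrative back-of-envelope relations, not the paper's calibrated estimates:

```python
import math

def tbm_requirements(normal_force_kn, rolling_force_kn, n_cutters,
                     mean_cutter_radius_m, rpm):
    """First-order TBM requirements from linear-cutter test forces:
    thrust sums the normal forces, torque sums rolling force times a
    mean lever arm, and power follows from torque and rotation speed."""
    thrust_kn = n_cutters * normal_force_kn
    torque_knm = n_cutters * rolling_force_kn * mean_cutter_radius_m
    power_kw = torque_knm * 2.0 * math.pi * rpm / 60.0  # kN*m/s = kW
    return thrust_kn, torque_knm, power_kw
```

A real prediction would use the measured force-penetration curves at the chosen cutter spacing rather than single mean forces, and would add losses for the cutterhead drive.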

  9. Modeling and prediction of flotation performance using support vector regression

    Directory of Open Access Journals (Sweden)

    Despotović Vladimir

    2017-01-01

Full Text Available Continuous efforts have been made in recent years to improve the process of paper recycling, as it is of critical importance for saving wood, water, and energy resources. Flotation deinking is considered one of the key methods for separating ink particles from cellulose fibres. Attempts to model the flotation deinking process have often resulted in complex models that are difficult to implement and use. In this paper, a model for prediction of flotation performance based on Support Vector Regression (SVR) is presented. Representative data samples were created in the laboratory under a variety of practical control variables for the flotation deinking process, including different reagents, pH values, and flotation residence times. A predictive model was trained on these data samples, and the flotation performance was assessed, showing that Support Vector Regression is a promising method even when the dataset used for training is limited.
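As a rough illustration of the regression machinery behind such a model, the sketch below fits a linear SVR by subgradient descent on the ε-insensitive loss. A real flotation-deinking model would use a kernel SVR trained on the laboratory samples; the data and hyperparameters here are hypothetical:

```python
def fit_linear_svr(xs, ys, epsilon=0.1, lr=0.01, reg=1e-3, epochs=500):
    """Minimal 1-D linear SVR via subgradient descent on the
    epsilon-insensitive loss; a stand-in for a full kernel SVR.
    Errors inside the epsilon tube contribute no gradient."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = w * x + b - y
            g = 0.0 if abs(err) <= epsilon else (1.0 if err > 0 else -1.0)
            w -= lr * (reg * w + g * x)
            b -= lr * g
    return w, b
```

The ε tube is what gives SVR its robustness on small datasets: points already predicted within ε do not pull on the model at all.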

  10. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast traffic state trends over a certain period in the future, which, from the perspective of dynamic decision making, is far more important than the current traffic condition alone. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models.
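The recursive idea behind multistep prediction (feeding each forecast back as an input for the next step) can be sketched independently of the SVM itself. Below, a simple trend-following stub stands in for the trained predictor; names and window sizes are illustrative:

```python
def multistep_forecast(model, history, steps, window):
    """Recursive multistep prediction: each forecast is appended to the
    input buffer and fed back to produce the next step. `model` maps a
    window (list of recent values) to the next value."""
    buf = list(history)
    preds = []
    for _ in range(steps):
        nxt = model(buf[-window:])
        preds.append(nxt)
        buf.append(nxt)
    return preds
```

With a trained SVM as `model`, the same loop yields the paper's multistep horizon; the known drawback of the recursive scheme is that early errors compound into later steps.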

  11. A comparison of SAR ATR performance with information theoretic predictions

    Science.gov (United States)

    Blacknell, David

    2003-09-01

    Performance assessment of automatic target detection and recognition algorithms for SAR systems (or indeed any other sensors) is essential if the military utility of the system / algorithm mix is to be quantified. This is a relatively straightforward task if extensive trials data from an existing system is used. However, a crucial requirement is to assess the potential performance of novel systems as a guide to procurement decisions. This task is no longer straightforward since a hypothetical system cannot provide experimental trials data. QinetiQ has previously developed a theoretical technique for classification algorithm performance assessment based on information theory. The purpose of the study presented here has been to validate this approach. To this end, experimental SAR imagery of targets has been collected using the QinetiQ Enhanced Surveillance Radar to allow algorithm performance assessments as a number of parameters are varied. In particular, performance comparisons can be made for (i) resolutions up to 0.1m, (ii) single channel versus polarimetric (iii) targets in the open versus targets in scrubland and (iv) use versus non-use of camouflage. The change in performance as these parameters are varied has been quantified from the experimental imagery whilst the information theoretic approach has been used to predict the expected variation of performance with parameter value. A comparison of these measured and predicted assessments has revealed the strengths and weaknesses of the theoretical technique as will be discussed in the paper.

  12. Driver's mental workload prediction model based on physiological indices.

    Science.gov (United States)

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, as measured by the number of errors. Additionally, the group method of data handling is used to establish a predictive model of the driver's MWL based on a subjective rating (NASA task load index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX and the number of errors are positively correlated, and the predictive model shows validity with an R² value of 0.745. The proposed model is expected to provide new drivers with a reference value for their MWL from the physiological indices, and driving lesson plans can be proposed to sustain an appropriate MWL as well as improve the driver's work performance.

  13. Kernel-based whole-genome prediction of complex traits: a review.

    Science.gov (United States)

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
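A minimal instance of the kernel regression machinery reviewed above is kernel ridge regression (RKHS regression) with a Gaussian kernel over marker vectors. The sketch below uses toy one-dimensional "genotypes", a hand-rolled linear solver, and illustrative hyperparameters; real whole-genome applications would use full marker matrices and numerical libraries:

```python
import math

def rbf(u, v, gamma=0.5):
    """Gaussian (RBF) kernel between two marker vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def solve(A, rhs):
    """Naive Gaussian elimination for the small system (K + lam*I) alpha = y."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def kernel_ridge_fit(X, y, lam=0.1):
    """Fit dual coefficients alpha of kernel ridge regression."""
    K = [[rbf(a, b) for b in X] for a in X]
    for i in range(len(X)):
        K[i][i] += lam
    return solve(K, y)

def kernel_ridge_predict(X, alpha, x):
    """Prediction is a kernel-weighted sum over the training genotypes."""
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, X))
```

Because the kernel compares whole genotype vectors rather than summing per-marker effects, this kind of machine can absorb some non-additive (epistatic-like) signal, which is precisely the appeal discussed in the review.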

  14. Kernel-based whole-genome prediction of complex traits: a review

    Directory of Open Access Journals (Sweden)

    Gota eMorota

    2014-10-01

Full Text Available Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.

  15. Mental Strategies Predict Performance and Satisfaction with Performance among Soccer Players.

    Science.gov (United States)

    Kruk, Magdalena; Blecharz, Jan; Boberska, Monika; Zarychta, Karolina; Luszczynska, Aleksandra

    2017-10-01

    This study investigated the changes in mental strategies across the season and their effects on performance and satisfaction with individual performance. Data were collected three times: at the pre-season at Time 1 (T1; baseline), in the mid-season at Time 2 (T2; two-month follow-up), and at the end-of-season at Time 3 (T3; nine-month follow-up) among male soccer players (N = 97) aged 16-27. Athletes completed the questionnaires assessing the use of nine psychological strategies in competition and the level of satisfaction with individual performance. Endurance performance was measured objectively with a 300 m run. A high level of relaxation (T1) explained better 300 m run performance (T3) and a high level of self-talk explained a higher satisfaction with individual performance (T3). A rare use of distractibility and emotional control (T1) predicted a higher level of satisfaction with individual performance (T3). No predictive role of other psychological strategies was found. The use of emotional control, relaxation, and distractibility increased over the season, whereas the use of imagery and negative thinking declined. Besides the roles of self-talk, imagery, relaxation and goal-setting, the effects of distractibility and emotional control should be taken into account when considering athletes' mental training programs.

  16. User's Self-Prediction of Performance in Motor Imagery Brain-Computer Interface.

    Science.gov (United States)

    Ahn, Minkyu; Cho, Hohyun; Ahn, Sangtae; Jun, Sung C

    2018-01-01

Performance variation is a critical issue in motor imagery brain-computer interface (MI-BCI), and various neurophysiological, psychological, and anatomical correlates have been reported in the literature. Although the main aim of such studies is to predict MI-BCI performance for the prescreening of poor performers, studies which focus on the user's sense of the motor imagery process and directly estimate MI-BCI performance through the user's self-prediction are lacking. In this study, we first test this self-prediction idea on motor imagery experimental datasets. Fifty-two subjects participated in a classical, two-class motor imagery experiment and were asked to evaluate their easiness with motor imagery and to predict their own MI-BCI performance. During the motor imagery experiment, an electroencephalogram (EEG) was recorded; however, no feedback on motor imagery was given to subjects. From the EEG recordings, the offline classification accuracy was estimated and compared with several questionnaire scores of subjects, as well as with each subject's self-prediction of MI-BCI performance. The subjects' performance predictions during the motor imagery task showed a high positive correlation (r = 0.64) with actual performance, even though no feedback information was given. This implies that the human brain is an active learning system and, by self-experiencing the endogenous motor imagery process, it can sense and assess the quality of the process. Thus, it is believed that users may be able to predict MI-BCI performance, and these results may contribute to a better understanding of low performance and to advancing BCI.

  17. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    Directory of Open Access Journals (Sweden)

    Ali Safa Sadiq

    2014-01-01

Full Text Available We propose an adaptive handover prediction (AHP) scheme for seamless-mobility based wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and used as inputs of the fuzzy decision making system in order to select the most preferable AP among nearby WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients. In other words, the mean and standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently, based on the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches.
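A drastically simplified version of the fuzzy scoring step can be sketched as weighted membership functions over RSS and AP load. The membership ramps, weights, and AP records below are illustrative assumptions, not the adaptively estimated coefficients or the full rule base of the AHP scheme:

```python
def ap_quality(rss_dbm, load_pct, w_rss=0.6, w_load=0.4):
    """Fuzzy-style AP quality score in [0, 1]; higher is better.
    Membership of 'good signal' ramps linearly from -90 to -40 dBm;
    membership of 'light load' falls linearly from 0% to 100% load."""
    good_rss = min(max((rss_dbm + 90.0) / 50.0, 0.0), 1.0)
    light_load = 1.0 - min(max(load_pct / 100.0, 0.0), 1.0)
    return w_rss * good_rss + w_load * light_load

def best_ap(candidates):
    """Pick the candidate AP with the highest quality score."""
    return max(candidates, key=lambda ap: ap_quality(ap["rss"], ap["load"]))
```

In the full scheme the ramp endpoints and weights would be adjusted from the observed mean and standard deviation of each metric, and a direction metric would be added as a third input.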

  18. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
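The hyperparameter search can be illustrated with a plain particle swarm optimizer, without the natural-selection and simulated-annealing refinements that distinguish NAPSO. Here a simple quadratic stands in for the SVM's cross-validation error surface over (C, γ), and all swarm constants are illustrative defaults:

```python
import random

def pso_minimize(f, bounds, n_particles=15, iters=40, seed=1):
    """Plain PSO minimizing f over box bounds. In NAPSO-SVM, f would be
    the SVM's validation error as a function of its parameters."""
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

NAPSO's additions act on exactly this loop: natural selection replaces the worst particles with perturbed copies of the best, and an annealing schedule accepts occasional uphill moves to escape local optima.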

  19. Neural Network-Based Coronary Heart Disease Risk Prediction Using Feature Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Jae Kwon Kim

    2017-01-01

Full Text Available Background. Of the machine learning techniques used in predicting coronary heart disease (CHD), the neural network (NN) is popularly used to improve performance accuracy. Objective. Even though NN-based systems provide meaningful results based on clinical experiments, medical experts are not satisfied with their predictive performance because NN is trained in a "black-box" style. Method. We sought to devise an NN-based prediction of CHD risk using feature correlation analysis (NN-FCA) with two stages. First, in the feature selection stage, features are ranked according to their importance in predicting CHD risk; second, in the feature correlation analysis stage, the correlations between feature relations and the output of each NN predictor are examined. Result. Of the 4146 individuals in the Korean dataset evaluated, 3031 had low CHD risk and 1115 had high CHD risk. The area under the receiver operating characteristic (ROC) curve of the proposed model (0.749 ± 0.010) was larger than that of the Framingham risk score (FRS) (0.393 ± 0.010). Conclusions. The proposed NN-FCA, which utilizes feature correlation analysis, was found to be better than FRS in terms of CHD risk prediction. Furthermore, the proposed model resulted in a larger ROC curve and more accurate predictions of CHD risk in the Korean population than the FRS.
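The first (feature selection) stage can be sketched as ranking features by the absolute correlation of each feature with the CHD label. The feature names and values below are hypothetical, and a simple correlation filter is only a stand-in for the paper's importance ranking:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def rank_features(samples, labels, names):
    """Rank features by |correlation| with the binary CHD label.
    samples: list of feature vectors; names: one name per feature column."""
    scores = {nm: abs(pearson([s[i] for s in samples], labels))
              for i, nm in enumerate(names)}
    return sorted(names, key=scores.get, reverse=True)
```

The second NN-FCA stage would then train NN predictors on the top-ranked features and examine correlations among them, which is what lets the method open the "black box" slightly.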

  20. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Directory of Open Access Journals (Sweden)

    Xiaoxia Yang

Full Text Available Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  1. SNBRFinder: A Sequence-Based Hybrid Algorithm for Enhanced Prediction of Nucleic Acid-Binding Residues.

    Science.gov (United States)

    Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong

    2015-01-01

    Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.

  2. When predictions take control: The effect of task predictions on task switching performance

    Directory of Open Access Journals (Sweden)

    Wout Duthoo

    2012-08-01

    Full Text Available In this paper, we aimed to investigate the role of self-generated predictions in the flexible control of behaviour. Therefore, we ran a task switching experiment in which participants were asked to try to predict the upcoming task in three conditions varying in switch rate (30%, 50%, and 70%). Irrespective of their predictions, the colour of the target indicated which task participants had to perform. In line with previous studies (Mayr, 2006; Monsell & Mizon, 2006), the switch cost was attenuated as the switch rate increased. Importantly, a clear task repetition bias was found in all conditions, yet the task repetition prediction rate dropped from 78% over 66% to 49% with increasing switch probability in the three conditions. Irrespective of condition, the switch cost was strongly reduced in expectation of a task alternation compared to the cost of an unexpected task alternation following repetition predictions. Hence, our data suggest that the reduction in the switch cost with increasing switch probability is caused by a diminished expectancy for the task to repeat. Taken together, this paper highlights the importance of predictions in the flexible control of behaviour, and suggests a crucial role for task repetition expectancy in the context-sensitive adjusting of task switching performance.

  3. Comparison of Different Approaches to Predict the Performance of Pumps As Turbines (PATs)

    Directory of Open Access Journals (Sweden)

    Mauro Venturini

    2018-04-01

    Full Text Available This paper deals with the comparison of different methods which can be used for the prediction of the performance curves of pumps as turbines (PATs). Four approaches are considered: one physics-based simulation model (a “white box” model), two “gray box” models, which integrate turbomachinery theory with specific data correlations, and one “black box” model. In more detail, the modeling approaches are: (1) a physics-based simulation model developed by the same authors, which includes the equations for estimating head, power, and efficiency and uses loss coefficients and specific parameters; (2) a model developed by Derakhshan and Nourbakhsh, which first predicts the best efficiency point of a PAT and then reconstructs its complete characteristic curves by means of two ad hoc equations; (3) the prediction model developed by Singh and Nestmann, which predicts the complete turbine characteristics based on pump shape and size; (4) an Evolutionary Polynomial Regression model, which represents a data-driven hybrid scheme that can be used for identifying the explicit mathematical relationship between PAT and pump curves. All approaches are applied to literature data, relying on both pump and PAT performance curves of head, power, and efficiency over the entire range of operation. The experimental data were provided by Derakhshan and Nourbakhsh for four different turbomachines, working in both pump and PAT mode, with specific speed values in the range 1.53–5.82. This paper provides a quantitative assessment of the predictions made by means of the considered approaches and also analyzes their consistency from a physical point of view. Advantages and drawbacks of each method are also analyzed and discussed.

  4. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    Science.gov (United States)

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^(−0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^(0.73) L∞^(−0.33), prediction error
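
The two recommended estimators are simple enough to compute directly. This sketch transcribes the formulas quoted in the abstract (tmax in years, K in 1/year, L∞ in cm, per the original study):

```python
# Direct transcription of the two recommended natural-mortality estimators.

def m_tmax(tmax):
    """tmax-based estimator: M = 4.899 * tmax**-0.916 (tmax = maximum age, years)."""
    return 4.899 * tmax ** -0.916

def m_growth(K, Linf):
    """Growth-based estimator: M = 4.118 * K**0.73 * Linf**-0.33
    (K, Linf = von Bertalanffy growth parameters)."""
    return 4.118 * K ** 0.73 * Linf ** -0.33

# Longer-lived fish should have lower predicted natural mortality.
m_short_lived = m_tmax(10.0)
m_long_lived = m_tmax(50.0)
```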

  5. Genome-Wide Prediction of the Performance of Three-Way Hybrids in Barley

    Directory of Open Access Journals (Sweden)

    Zuo Li

    2017-03-01

    Full Text Available Predicting the grain yield performance of three-way hybrids is challenging. Three-way crosses are relevant for hybrid breeding in barley (Hordeum vulgare L.) and maize (Zea mays L.) adapted to East Africa. The main goal of our study was to implement and evaluate genome-wide prediction approaches for the performance of three-way hybrids, using data of single-cross hybrids, for a scenario in which the parental lines of the three-way hybrids originate from three genetically distinct subpopulations. We extended ridge regression best linear unbiased prediction (RRBLUP) and devised a genomic selection model allowing for subpopulation-specific marker effects (GSA-RRBLUP: general and subpopulation-specific additive RRBLUP). Using an empirical barley data set, we showed that applying GSA-RRBLUP tripled the prediction ability for three-way hybrids from 0.095 to 0.308 compared with RRBLUP modeling one additive effect for all three subpopulations. The experimental findings were further substantiated with computer simulations. Our results emphasize the potential of GSA-RRBLUP to improve genome-wide hybrid prediction of three-way hybrids for scenarios of genetically diverse parental populations. Because of the advantages of the GSA-RRBLUP model in dealing with hybrids from different parental populations, it may also be a promising approach to boost the prediction ability of hybrid breeding programs based on genetically diverse heterotic groups.

  6. Soil-pipe interaction modeling for pipe behavior prediction with super learning based methods

    Science.gov (United States)

    Shi, Fang; Peng, Xiang; Liu, Huan; Hu, Yafei; Liu, Zheng; Li, Eric

    2018-03-01

    Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure, and temperature, was collected. Research on monitoring data analysis has been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize this relationship, this paper presents a super learning based approach combining feature selection algorithms to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e., the recursive feature elimination algorithm, was used to identify the critical predictors contributing to the pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. The predictive performance was evaluated by R-squared, root-mean-square error, and mean absolute error. Based on this evaluation, the superiority of super learning was validated and demonstrated by accurately predicting three types of pipe deformations. In addition, a comprehensive understanding of the water mains' working environments becomes possible.
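
As a rough illustration of the super learning idea (an ensemble that weights base learners by their cross-validated error), here is a minimal pure-Python sketch with two toy base learners. The learners, weighting rule, and data are illustrative only; the study itself used far richer models and feature selection:

```python
# Minimal super learning (stacked ensemble) sketch: weight each base learner
# by the inverse of its k-fold cross-validation error. Toy learners and data.

def mean_learner(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m

def linear_learner(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return lambda x: a + b * x

def cv_error(learner, xs, ys, k=5):
    """Mean squared error of out-of-fold predictions (k-fold CV)."""
    err, n = 0.0, len(xs)
    for i in range(k):
        test = list(range(i, n, k))
        train = [j for j in range(n) if j % k != i]
        model = learner([xs[j] for j in train], [ys[j] for j in train])
        err += sum((model(xs[j]) - ys[j]) ** 2 for j in test)
    return err / n

def super_learner(xs, ys, learners):
    """Combine base learners, weighted by inverse CV error."""
    errs = [cv_error(L, xs, ys) for L in learners]
    weights = [1.0 / (e + 1e-12) for e in errs]
    total = sum(weights)
    models = [L(xs, ys) for L in learners]
    return lambda x: sum(w / total * m(x) for w, m in zip(weights, models))

xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]          # noiseless linear relation
sl = super_learner(xs, ys, [mean_learner, linear_learner])
```

On this noiseless linear data, the linear learner's CV error is near zero, so the ensemble weight concentrates on it and the combined prediction tracks 2x + 1.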

  7. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    Science.gov (United States)

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute Cloud (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We also conducted a larger scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, achieving over 25-fold speedup compared to sequential execution. PMID:26958172

  8. Neither here, nor there: impression management does not predict expatriate adjustment and job performance

    Directory of Open Access Journals (Sweden)

    HANNAH JACKSON FOLDES

    2006-09-01

    Full Text Available Social desirability scale scores reflect substantive individual differences related to personality. The objective of the current study was to examine whether social desirability, and impression management specifically (a component of social desirability), is predictive of adjustment and job performance for expatriates. Based on theoretical considerations, it was proposed that impression management might be linked to expatriate job performance in a predictive and mediated relationship through adjustment. Job performance ratings provided by host country national co-workers were obtained for 308 expatriates on assignment in Turkey. Expatriates responded to a measure of personality and cross-cultural adjustment. It was found that impression management scale scores were not related to either adjustment or job performance. These results are discussed in the broader context of research on social desirability, expatriate job performance, and expatriate research in general.

  9. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    Science.gov (United States)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as main equipment. The experimental results show significant differences in the behavior of the plant components, mainly in terms of energy use, for each implemented technique. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria for selecting an appropriate stochastic predictive controller.
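
To make the multi-scenario idea concrete, here is a toy single-step controller for a scalar battery state of charge: it evaluates each candidate control against several sampled demand scenarios and picks the one with the lowest scenario-averaged cost. The one-state model, cost weights, and numbers are invented for illustration and are unrelated to the paper's microgrid:

```python
# Toy multi-scenario predictive-control step: choose the control minimizing
# the average cost across sampled disturbance scenarios. Model and weights
# are invented for illustration.

def simulate(soc, u, demand, horizon):
    """Roll the scalar model soc' = soc + u - demand[t] and accumulate cost."""
    cost = 0.0
    for t in range(horizon):
        soc = soc + u - demand[t]
        cost += (0.5 - soc) ** 2 + 0.1 * u * u   # track soc = 0.5, penalize effort
    return cost

def multi_scenario_step(soc, scenarios, candidates, horizon=3):
    """Pick the candidate control with the lowest scenario-averaged cost."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        avg = sum(simulate(soc, u, s, horizon) for s in scenarios) / len(scenarios)
        if avg < best_cost:
            best_u, best_cost = u, avg
    return best_u

scenarios = [[0.10, 0.10, 0.10], [0.20, 0.15, 0.10], [0.05, 0.05, 0.05]]
u_star = multi_scenario_step(0.5, scenarios, [0.0, 0.05, 0.10, 0.15, 0.20])
```

In a receding-horizon loop, only `u_star` would be applied and the optimization repeated at the next step with fresh scenarios.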

  10. Performance Trends During Sleep Deprivation on a Tilt-Based Control Task.

    Science.gov (United States)

    Bolkhovsky, Jeffrey B; Ritter, Frank E; Chon, Ki H; Qin, Michael

    2018-07-01

    Understanding human behavior under the effects of sleep deprivation allows for the mitigation of risk due to reduced performance. To further this goal, this study investigated the effects of short-term sleep deprivation using a tilt-based control device and examined whether existing user models accurately predict targeting performance. A task in which the user tilts a surface to roll a ball into a target was developed to examine motor performance. A model was built to predict human performance for this task under various levels of sleep deprivation. Every 2 h, 10 subjects completed the task until they reached 24 h of wakefulness. Performance measurements of this task, which were based on Fitts' law, included movement time, task throughput, and time intercept. The model predicted significant performance decrements over the 24-h period with an increase in movement time (R2 = 0.61), a decrease in throughput (R2 = 0.57), and an increase in time intercept (R2 = 0.60). However, it was found that in experimental trials there was no significant change in movement time (R2 = 0.11), throughput (R2 = 0.15), or time intercept (R2 = 0.27). The results found were unexpected as performance decrement is frequently reported during sleep deprivation. These findings suggest a reexamination of the initial thought of sleep loss leading to a decrement in all aspects of performance.Bolkovsky JB, Ritter FE, Chon KH, Qin M. Performance trends during sleep deprivation on a tilt-based control task. Aerosp Med Hum Perform. 2018; 89(7):626-633.
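
The three performance measures named above (movement time, throughput, time intercept) come from a Fitts'-law model, which can be sketched directly. The intercept and slope coefficients below are illustrative placeholders, not the study's fitted values:

```python
import math

# Minimal Fitts'-law sketch of targeting-performance measures.

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(distance / width + 1), in bits."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, intercept=0.2, slope=0.3):
    """MT = a + b * ID; time intercept a in seconds, slope b in s/bit."""
    return intercept + slope * index_of_difficulty(distance, width)

def throughput(distance, width, mt):
    """Throughput in bits/s for one completed movement."""
    return index_of_difficulty(distance, width) / mt

d, w = 30.0, 2.0                 # target distance and width (arbitrary units)
mt = movement_time(d, w)
tp = throughput(d, w, mt)
```

In the study's framing, sleep-deprivation effects would appear as an increased intercept/slope (longer movement times) and reduced throughput; here the coefficients are fixed.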

  11. Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling

    Science.gov (United States)

    Ickes, Jacob C.

    Helicopters and other Vertical Take-Off or Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive the rotor performance. The combination of factors involved makes simulating the rotor a challenging and multidisciplinary effort, and one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both fluids and structural predictions using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs primary functions of loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code. Improvements came in the form

  12. Predicting Liaison: an Example-Based Approach

    NARCIS (Netherlands)

    Greefhorst, A.P.M.; Bosch, A.P.J. van den

    2016-01-01

    Predicting liaison in French is a non-trivial problem to model. We compare a memory-based machine-learning algorithm with a rule-based baseline. The memory-based learner is trained to predict whether liaison occurs between two words on the basis of lexical, orthographic, morphosyntactic, and

  13. Predictive Performance Tuning of OpenACC Accelerated Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2014-05-04

    Graphics Processing Units (GPUs) are gradually becoming mainstream in supercomputing as their capabilities to significantly accelerate a large spectrum of scientific applications have been clearly identified and proven. Moreover, with the introduction of high level programming models such as OpenACC [1] and OpenMP 4.0 [2], these devices are becoming more accessible and practical to use by a larger scientific community. However, performance optimization of OpenACC accelerated applications usually requires an in-depth knowledge of the hardware and software specifications. We suggest a prediction-based performance tuning mechanism [3] to quickly tune OpenACC parameters for a given application to dynamically adapt to the execution environment on a given system. This approach is applied to a finite difference kernel to tune the OpenACC gang and vector clauses for mapping the compute kernels into the underlying accelerator architecture. Our experiments show a significant performance improvement against the default compiler parameters and a faster tuning by an order of magnitude compared to the brute force search tuning.

  14. Prediction of Protein Structural Classes for Low-Similarity Sequences Based on Consensus Sequence and Segmented PSSM

    Directory of Open Access Journals (Sweden)

    Yunyun Liang

    2015-01-01

    Full Text Available Prediction of protein structural classes for low-similarity sequences is useful for understanding fold patterns, regulation, functions, and interactions of proteins. It is well known that feature extraction is significant to prediction of protein structural class, and it mainly uses the protein primary sequence, the predicted secondary structure sequence, and the position-specific scoring matrix (PSSM). Currently, prediction based solely on the PSSM has played a key role in improving the prediction accuracy. In this paper, we propose a novel method called CSP-SegPseP-SegACP by fusing consensus sequence (CS), segmented PsePSSM, and segmented autocovariance transformation (ACT) based on PSSM. Three widely used low-similarity datasets (1189, 25PDB, and 640) are adopted in this paper. A 700-dimensional (700D) feature vector is then constructed, and the dimension is decreased to 224D by using principal component analysis (PCA). To verify the performance of our method, rigorous jackknife cross-validation tests are performed on the 1189, 25PDB, and 640 datasets. Comparison of our results with the existing PSSM-based methods demonstrates that our method achieves favorable and competitive performance. This offers an important complement to other PSSM-based methods for prediction of protein structural classes for low-similarity sequences.
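
The tail of that pipeline — concatenate several feature groups into one long vector, then reduce its dimension with PCA — can be sketched on toy data. PCA is implemented here as a plain power iteration for the first principal direction only, adequate for this small demo; the segment sizes and data are made up, not the paper's 700D features:

```python
import random

# Sketch: fuse feature groups (e.g. CS, segmented PsePSSM, segmented ACT)
# into one vector, then find the first principal direction by power iteration.

def fuse(*feature_groups):
    """Concatenate feature groups into a single feature vector."""
    vec = []
    for g in feature_groups:
        vec.extend(g)
    return vec

def top_component(data, iters=200):
    """First principal direction of mean-centered data via power iteration."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # w = C v with C = X^T X / n, computed as X^T (X v) / n
        xv = [sum(r[j] * v[j] for j in range(d)) for r in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

random.seed(0)
# 30 samples whose variance is concentrated on the first coordinate
data = [fuse([random.gauss(0, 5.0)], [random.gauss(0, 0.1) for _ in range(3)])
        for _ in range(30)]
pc1 = top_component(data)
```

Projecting each fused vector onto the leading directions found this way is exactly the 700D-to-224D reduction step, just at toy scale.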

  15. Predicting the Impacts of Intravehicular Displays on Driving Performance with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Wojciechowski, Josephine; Samms, Charneta

    2012-01-01

    A challenge facing the U.S. National Highway Traffic Safety Administration (NHTSA), as well as international safety experts, is the need to educate car drivers about the dangers associated with performing distraction tasks while driving. Researchers working for the U.S. Army Research Laboratory have developed a technique for predicting the increase in mental workload that results when distraction tasks are combined with driving. They implement this technique using human performance modeling. They have predicted the workload associated with driving combined with cell phone use. In addition, they have predicted the workload associated with driving military vehicles combined with threat detection. Their technique can be used by safety personnel internationally to demonstrate the dangers of combining distraction tasks with driving and to mitigate the safety risks.

  16. TBM performance prediction in Yucca Mountain welded tuff from linear cutter tests

    International Nuclear Information System (INIS)

    Gertsch, R.; Ozdemir, L.; Gertsch, L.

    1992-01-01

    Performance predictions were developed for tunnel boring machines operating in welded tuff for the construction of the experimental study facility and the potential nuclear waste repository at Yucca Mountain. The predictions were based on test data obtained from an extensive series of linear cutting tests performed on samples of Topopah Spring welded tuff from the Yucca Mountain Project site. Using the cutter force, spacing, and penetration data from the experimental program, the thrust, torque, power, and rate of penetration were estimated for a 25 ft diameter tunnel boring machine (TBM) operating in welded tuff. Guidelines were developed for the optimal design of the TBM cutterhead to achieve high production rates at the lowest possible excavation costs. The results show that the Topopah Spring welded tuff (TSw2) can be excavated at relatively high rates of advance with state-of-the-art TBMs. The results also show, however, that the TBM torque and power requirements will be higher than estimated based on rock physical properties and past tunneling experience in rock formations of similar strength
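
The scale-up from per-cutter forces to machine-level thrust, torque, and power requirements follows from simple mechanics, sketched below. All numbers and the assumed mean cutter radius are illustrative; the actual predictions were built from the measured cutter force, spacing, and penetration data:

```python
import math

# Back-of-the-envelope TBM requirements from per-cutter forces.
# All input values below are illustrative, not the study's data.

def tbm_requirements(n_cutters, normal_force_kN, rolling_force_kN,
                     diameter_m, rpm):
    """Estimate machine thrust (kN), torque (kN*m), and power (kW)."""
    thrust = n_cutters * normal_force_kN
    # Assume cutters are spread so their mean radius is ~0.3 * diameter.
    mean_radius = 0.3 * diameter_m
    torque = n_cutters * rolling_force_kN * mean_radius
    power = torque * 2.0 * math.pi * rpm / 60.0   # kW, since torque is in kN*m
    return thrust, torque, power

thrust, torque, power = tbm_requirements(
    n_cutters=50, normal_force_kN=200.0, rolling_force_kN=25.0,
    diameter_m=7.6, rpm=6.0)   # ~25 ft diameter machine
```

The abstract's observation that torque and power run higher than rock-property estimates would show up here as larger measured rolling forces than strength-based correlations predict.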

  17. Both Reaction Time and Accuracy Measures of Intraindividual Variability Predict Cognitive Performance in Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Björn U. Christ

    2018-04-01

    Full Text Available Dementia researchers around the world prioritize the urgent need for sensitive measurement tools that can detect cognitive and functional change at the earliest stages of Alzheimer's disease (AD). Sensitive indicators of underlying neural pathology assist in the early detection of cognitive change and are thus important for the evaluation of early-intervention clinical trials. One method that may be particularly well-suited to help achieve this goal involves the quantification of intraindividual variability (IIV) in cognitive performance. The current study aimed to directly compare two methods of estimating IIV (fluctuations in accuracy-based scores vs. those in latency-based scores) to predict cognitive performance in AD. Specifically, we directly compared the relative sensitivity of reaction time (RT)- and accuracy-based estimates of IIV to cognitive compromise. The novelty of the present study, however, centered on the patients we tested [a group of patients with Alzheimer's disease (AD)] and the outcome measures we used (a measure of general cognitive function and a measure of episodic memory function). Hence, we compared intraindividual standard deviations (iSDs) from two RT tasks and three accuracy-based memory tasks in patients with possible or probable Alzheimer's dementia (n = 23) and matched healthy controls (n = 25). The main analyses modeled the relative contributions of RT- vs. accuracy-based measures of IIV toward the prediction of performance on measures of (a) overall cognitive functioning, and (b) episodic memory functioning. Results indicated that RT-based IIV measures are better predictors of neurocognitive impairment (as indexed by overall cognitive and memory performance) than accuracy-based IIV measures, even after adjusting for the timescale of measurement. However, one accuracy-based IIV measure (derived from a recognition memory test) also differentiated patients with AD from controls, and significantly predicted episodic memory

  18. Performance of the FV3-powered Next Generation Global Prediction System for Harvey and Irma, and a vision for a "beyond weather timescale" prediction system for long-range hurricane track and intensity predictions

    Science.gov (United States)

    Lin, S. J.; Bender, M.; Harris, L.; Hazelton, A.

    2017-12-01

    The performance of a GFDL-developed, FV3-based Next Generation Global Prediction System (NGGPS) for Harvey and Irma will be reported. We will report on aspects of track and intensity errors (vs. operational models), heavy precipitation (Harvey), rapid intensification, and simulated structure (in comparison with ground-based radar), and point to the need for a future long-range (from day 5 up to 30 days) physically based ensemble hurricane prediction system for providing useful information to forecasters beyond the usual weather timescale.

  19. Study on performance prediction and energy saving of indirect evaporative cooling system

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Seong Yeon; Kim, Tae Ho; Kim, Myung Ho [Dept. of Mechanical Design Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2015-09-15

    The purpose of this study is to predict the performance of an indirect evaporative cooling system, and to evaluate its energy saving effect when applied to the exhaust heat recovery system of an air-handling unit. We derive the performance correlation of the indirect evaporative cooling system using a plastic heat exchanger based on experimental data obtained under various conditions. We predict the variations in the performance of the system for various return and outdoor air conditions using the obtained correlation. We also analyze the energy saving of the system realized by the exhaust heat recovery using typical meteorological data for several cities in Korea. The average utilization rate of the sensible cooling system for the exhaust heat recovery is 44.3% during summer, while that of the evaporative cooling system is 96.7%. The energy saving of the evaporative cooling system is much higher than that of the sensible cooling system, about 3.89 times the value obtained in Seoul.
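
A common way to express such a performance correlation is wet-bulb effectiveness, sketched below. The constant effectiveness of 0.6 is an assumed placeholder, not the paper's fitted correlation:

```python
# Wet-bulb effectiveness sketch for an indirect evaporative cooler.
# The effectiveness value is an assumed constant for illustration.

def outlet_temperature(t_dry_in, t_wet_bulb, effectiveness=0.6):
    """Indirect evaporative cooling: T_out = T_in - eps * (T_in - T_wb)."""
    return t_dry_in - effectiveness * (t_dry_in - t_wet_bulb)

def sensible_cooling_power(mass_flow_kg_s, t_in, t_out, cp=1.006):
    """Air-side cooling duty in kW (cp of air in kJ/kg-K)."""
    return mass_flow_kg_s * cp * (t_in - t_out)

t_out = outlet_temperature(32.0, 22.0)          # 32 C air, 22 C wet bulb
duty = sensible_cooling_power(2.0, 32.0, t_out) # 2 kg/s of air
```

Running this over hourly weather data for a city is essentially the energy-saving analysis the abstract describes, with the fitted correlation in place of the fixed effectiveness.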

  20. An estimator-based distributed voltage-predictive control strategy for ac islanded microgrids

    DEFF Research Database (Denmark)

    Wang, Yanbo; Chen, Zhe; Wang, Xiongfei

    2015-01-01

    This paper presents an estimator-based voltage predictive control strategy for AC islanded microgrids, which is able to perform voltage control without any communication facilities. The proposed control strategy is composed of a network voltage estimator and a voltage predictive controller for each...... and has a good capability to reject uncertain perturbations of islanded microgrids....

  1. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  2. Prediction of performance and evaluation of flexible pavement rehabilitation strategies

    Directory of Open Access Journals (Sweden)

    Kang-Won Wayne Lee

    2017-04-01

    Full Text Available Five test sections with different additives and strategies were established to rehabilitate a State-maintained highway more effectively in Rhode Island (RI: control, calcium chloride, asphalt emulsion, Portland cement and geogrid. Resilient moduli of subgrade soils and subbase materials before and after full depth rehabilitation were employed as input parameters to predict the performance of pavement structures using AASHTOWare Pavement ME Design (Pavement ME software in terms of rutting, cracking and roughness. It was attempted to use Level 1 input (which includes traffic full spectrum data, climate data and structural layer properties for Pavement ME. Traffic data was obtained from a Weigh-in-Motion (WIM instrument and Providence station was used for collecting climatic data. Volumetric properties, dynamic modulus and creep compliance were used as input parameters for 19 mm (0.75 in. warm mix asphalt (WMA base and 12.5 mm (0.5 in. WMA surface layer. The results indicated that all test sections observed AC top-down (longitudinal cracking except Portland cement section which passed for all criteria. The order in terms of performance (best to worst for all test sections by Pavement ME was Portland cement, calcium chloride, control, geogrid, and asphalt emulsion. It was also observed that all test sections passed for both bottom up and top down fatigue cracking by increasing thickness of either of the two top asphalt layers. Test sections with five different base/subbase materials were evaluated in last two years through visual condition survey and measurements of deflection and roughness to confirm the prediction, but there was no serious distress and roughness. Thus these experiments allowed selecting the best rehabilitation/reconstruction techniques for the particular and/or similar highway, and a framework was formulated to select an optimal technique and/or strategy for future rehabilitation/reconstruction projects. Finally, guidelines for

  3. Using Machine Learning to Predict Student Performance

    OpenAIRE

    Pojon, Murat

    2017-01-01

    This thesis examines the application of machine learning algorithms to predict whether a student will be successful or not. The specific focus of the thesis is the comparison of machine learning methods and feature engineering techniques in terms of how much they improve the prediction performance. Three different machine learning methods were used in this thesis. They are linear regression, decision trees, and naïve Bayes classification. Feature engineering, the process of modification ...

  4. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator of economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller-sized quadratic programming problems instead of one large problem as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the wavelet kernel-based primal twin support vector machine algorithm. The mean prediction errors of the wavelet kernel-based primal twin support vector machine and traditional support vector machine models, trained on samples with 3–5 dimensional input vectors, are compared in this paper. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.
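
The wavelet kernel at the heart of such a model is easy to write down. The sketch below uses the common Morlet-style form with the textbook constant 1.75 and dilation `a`; these are the usual choices in the wavelet-SVM literature and are assumed here rather than taken from the paper:

```python
import math

# Morlet-style wavelet kernel, as commonly used in wavelet SVM variants:
# k(x, y) = prod_i cos(1.75 * (x_i - y_i) / a) * exp(-(x_i - y_i)^2 / (2 a^2))

def wavelet_kernel(x, y, a=1.0):
    """Evaluate the wavelet kernel between two equal-length feature vectors."""
    k = 1.0
    for xi, yi in zip(x, y):
        d = (xi - yi) / a
        k *= math.cos(1.75 * d) * math.exp(-d * d / 2.0)
    return k

x, y = [1.0, 2.0, 3.0], [1.2, 1.9, 3.4]
k_xy = wavelet_kernel(x, y)
```

Any kernel machine (twin SVM included) only needs such pairwise evaluations to build its Gram matrix, so swapping this in for a Gaussian kernel changes nothing else in the training pipeline.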

  5. Dimensionless Numerical Approaches for the Performance Prediction of Marine Waterjet Propulsion Units

    Directory of Open Access Journals (Sweden)

    Marco Altosole

    2012-01-01

    Full Text Available One of the key issues at the early design stage of a high-speed craft is the selection and performance prediction of the propulsion system, because at this stage only limited information about the vessel is available. The objective of this work is to provide the designer of waterjet-propelled craft with a simple and reliable calculation tool, able to predict the waterjet working points in design and off-design conditions and to investigate several propulsive options during the ship design process. In the paper two original dimensionless numerical procedures are presented, one referred to jet units for naval applications and the other more suitable for planing boats. The first procedure is based on a generalized performance map for mixed-flow pumps, derived from the analysis of several waterjet pumps by applying the similitude principles of hydraulic machines. The second approach, validated by comparisons with current waterjet installations, is based on a complete physical approach, from which a set of non-dimensional waterjet characteristics has been drawn by the authors. The application examples show the validity and the degree of accuracy of the proposed methodologies for the performance evaluation of waterjet propulsion systems.
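
The similitude principles invoked above can be made concrete with the classic pump affinity laws: flow scales with n·D³, head with n²·D², and shaft power with n³·D⁵ (n = speed ratio, D = diameter ratio). The sketch below rescales one measured pump point to a new shaft speed; the numbers are illustrative, not taken from the paper.

```python
# Affinity-law rescaling of a pump operating point (illustrative sketch).
def scale_point(Q, H, P, n_ratio, D_ratio):
    """Q: flow [m^3/s], H: head [m], P: power [MW]."""
    return (Q * n_ratio * D_ratio ** 3,        # flow ~ n * D^3
            H * n_ratio ** 2 * D_ratio ** 2,   # head ~ n^2 * D^2
            P * n_ratio ** 3 * D_ratio ** 5)   # power ~ n^3 * D^5

# Measured reference point: 2.0 m^3/s, 60 m head, 1.5 MW.
# Predict the same pump run 20% faster, same impeller diameter.
Q2, H2, P2 = scale_point(2.0, 60.0, 1.5, n_ratio=1.2, D_ratio=1.0)
```

A generalized performance map, as in the first procedure above, is essentially a family of such similitude-scaled points expressed in non-dimensional form.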

  6. Link prediction based on nonequilibrium cooperation effect

    Science.gov (United States)

    Li, Lanxi; Zhu, Xuzhen; Tian, Hui

    2018-04-01

    Link prediction in complex networks has become a common focus of many researchers. Most existing methods, however, concentrate on common neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large degree heterogeneity as a nonequilibrium between nodes. This nonequilibrium facilitates stable cooperation between endpoints, so that two endpoints with large degree heterogeneity tend to connect stably. We name this phenomenon the nonequilibrium cooperation effect. This paper therefore proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. A theoretical analysis is presented first, and experiments on 12 real-world networks then compare mainstream methods with our indices through numerical analysis.
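
The abstract does not give the exact index of Li et al., so the following is a hypothetical sketch of the idea only: score a candidate link by its common-neighbour count, boosted by the normalized degree difference ("nonequilibrium") of the two endpoints.

```python
# Hypothetical degree-heterogeneity-aware link-prediction index (our own
# illustrative formula, not the paper's).
def heterogeneity_score(adj, x, y):
    """adj: dict mapping node -> set of neighbours."""
    cn = len(adj[x] & adj[y])              # common neighbours
    kx, ky = len(adj[x]), len(adj[y])
    if kx + ky == 0:
        return 0.0
    hetero = abs(kx - ky) / (kx + ky)      # nonequilibrium between endpoints
    return cn * (1.0 + hetero)             # reward heterogeneous pairs

# Toy graph: node 0 is a hub, plus a small clique.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (2, 3)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

equal_degree = heterogeneity_score(adj, 1, 3)   # k=2 vs k=2: no bonus
hetero_pair = heterogeneity_score(adj, 4, 2)    # k=1 vs k=3: 50% bonus
```

Candidate node pairs are then ranked by this score, highest first, as in any local-information link predictor.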

  7. Improving stability of prediction models based on correlated omics data by using network approaches.

    Directory of Open Access Journals (Sweden)

    Renaud Tissier

    Full Text Available Building prediction models based on complex omics datasets such as transcriptomics, proteomics, and metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model, and application of these methods yields unstable results. We propose a novel strategy for model selection in which the obtained models also perform well in terms of overall predictability. Several three-step approaches are considered, where the steps are (1) network construction, (2) clustering to empirically derive modules or pathways, and (3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally, we illustrate the advantages of our approach by applying the methodology to two problems: prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of the response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell line pharmacogenomics dataset.
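
The three steps above can be sketched end-to-end on simulated data. This is a simplified stand-in, not the paper's method: plain Pearson correlation instead of weighted correlation networks or Gaussian graphical models, and a per-module mean summary fed to closed-form ridge regression instead of group-based penalization.

```python
# Three-step sketch: (1) correlation network, (2) hierarchical clustering
# into modules, (3) prediction from module summaries.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Simulated omics data: two correlated blocks of 5 features each.
n, p = 100, 10
base1, base2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([base1 + 0.1 * rng.normal(size=n) for _ in range(5)] +
                    [base2 + 0.1 * rng.normal(size=n) for _ in range(5)])
y = base1 - base2 + 0.1 * rng.normal(size=n)

# Step 1: "network" of absolute Pearson correlations between features.
corr = np.abs(np.corrcoef(X, rowvar=False))

# Step 2: average-linkage clustering on 1 - |corr| to derive modules.
dist = 1.0 - corr[np.triu_indices(p, k=1)]       # condensed distance vector
modules = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")

# Step 3: summarize each module by its mean, then ridge regression
# (closed form) on the module summaries instead of all p features.
Z = np.column_stack([X[:, modules == m].mean(axis=1)
                     for m in np.unique(modules)])
lam = 1.0
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
```

Because the model is fit on a few module summaries rather than many correlated features, the selected coefficients are far more stable across resamples, which is the point of the strategy.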

  8. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scaled images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are firstly predicted with their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...
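
The PPE idea can be illustrated with an assumed predictor (the paper's actual predictor and embedding rules differ): predict each pixel as the mean of its left and upper neighbours to get a prediction error (PE), then predict the PE itself from the PEs of those same neighbours; the residual is the prediction-error of prediction error (PPE).

```python
# Sketch of PE and PPE maps under an assumed left/top mean predictor.
import numpy as np

def pe_map(img):
    pe = np.zeros(img.shape, dtype=int)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            pred = (int(img[i, j - 1]) + int(img[i - 1, j])) // 2
            pe[i, j] = int(img[i, j]) - pred
    return pe

def ppe_map(img):
    pe = pe_map(img)
    ppe = np.zeros(pe.shape, dtype=int)
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            pe_pred = (pe[i, j - 1] + pe[i - 1, j]) // 2  # predicted PE
            ppe[i, j] = pe[i, j] - pe_pred                # PE of PE
    return ppe

img = np.array([[10, 12, 14],
                [12, 14, 16],
                [14, 16, 18]], dtype=np.uint8)   # smooth ramp
pe, ppe = pe_map(img), ppe_map(img)
```

On smooth regions the PPE values concentrate even more sharply around zero than the PEs do, which is exactly why embedding in the PPE histogram can carry secret bits with low distortion.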

  9. Performance Prediction of Constrained Waveform Design for Adaptive Radar

    Science.gov (United States)

    2016-11-01

    the famous Woodward quote, having a ubiquitous feeling for all radar waveform design (and performance prediction) researchers, that is found at the end... discuss research that develops performance prediction models to quantify the impact on SINR when an amplitude constraint is placed on a radar waveform... optimize the radar performance for the particular scenario and tasks. There have also been several survey papers on various topics in waveform design for

  10. Tracking Neuronal Connectivity from Electric Brain Signals to Predict Performance.

    Science.gov (United States)

    Vecchio, Fabrizio; Miraglia, Francesca; Rossini, Paolo Maria

    2018-05-01

    The human brain is a complex container of interconnected networks. Network neuroscience is a recent venture aiming to explore the connection matrix built from the human brain, or human "Connectome." Network-based algorithms provide parameters that define the global organization of the brain; when they are applied to electroencephalographic (EEG) signals, network configuration and excitability can be monitored in millisecond time frames, providing remarkable information on their instantaneous efficacy for a given task's performance via online evaluation of the underlying instantaneous networks before, during, and after the task. Here we provide an updated summary on connectome analysis for the prediction of performance via the study of task-related dynamics of brain network organization from EEG signals.

  11. Probability-based collaborative filtering model for predicting gene-disease associations.

    Science.gov (United States)

    Zeng, Xiangxiang; Ding, Ningxiang; Rodríguez-Patón, Alfonso; Zou, Quan

    2017-12-28

    Accurately predicting pathogenic human genes has been challenging in recent research. Considering extensive gene-disease data verified by biological experiments, we can apply computational methods to perform accurate predictions with reduced time and expenses. We propose a probability-based collaborative filtering model (PCFM) to predict pathogenic human genes. Several kinds of data sets, containing data of humans and data of other nonhuman species, are integrated in our model. Firstly, on the basis of a typical latent factorization model, we propose model I with an average heterogeneous regularization. Secondly, we develop modified model II with personal heterogeneous regularization to enhance the accuracy of the aforementioned models. In this model, vector space similarity or Pearson correlation coefficient metrics and data on related species are also used. We compared the results of PCFM with those of four state-of-the-art approaches. The results show that PCFM performs better than the other advanced approaches. The PCFM model can be leveraged for prediction of disease genes, especially for new human genes or diseases with no known relationships.
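
The latent factorization core of such a model can be sketched in a few lines. This is plain regularized matrix factorization by gradient descent on a toy gene-disease matrix; PCFM's heterogeneous regularization terms and multi-species data are not reproduced.

```python
# Minimal latent-factor sketch of gene-disease association prediction.
import numpy as np

rng = np.random.default_rng(1)
R = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)   # toy gene x disease associations

k, lam, lr = 2, 0.01, 0.1
G = 0.1 * rng.normal(size=(3, k))        # gene latent factors
D = 0.1 * rng.normal(size=(3, k))        # disease latent factors

def loss(G, D):
    return np.sum((R - G @ D.T) ** 2) + lam * (np.sum(G**2) + np.sum(D**2))

start = loss(G, D)
for _ in range(500):
    E = R - G @ D.T                      # residual on observed matrix
    G, D = (G + lr * (E @ D - lam * G),  # simultaneous gradient steps
            D + lr * (E.T @ G - lam * D))
end = loss(G, D)
```

After training, `G @ D.T` scores every gene-disease pair, so unobserved pairs with high scores become candidate associations.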

  12. Short-term wind power prediction based on LSSVM–GSA model

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Chen, Chen; Yuan, Yanbin; Huang, Yuehua; Tan, Qingxiong

    2015-01-01

    Highlights: • A hybrid model is developed for short-term wind power prediction. • The model is based on LSSVM and the gravitational search algorithm. • The gravitational search algorithm is used to optimize the parameters of the LSSVM. • The effect of different LSSVM kernel functions on wind power prediction is discussed. • Comparative studies show that the prediction accuracy of wind power is improved. - Abstract: Wind power forecasting can improve the economical and technical integration of wind energy into the existing electricity grid. Due to its intermittency and randomness, it is hard to forecast wind power accurately. For the purpose of utilizing wind power to the utmost extent, it is very important to make an accurate prediction of the output power of a wind farm under the premise of guaranteeing the security and stability of the operation of the power system. In this paper, a hybrid model (LSSVM–GSA) based on the least squares support vector machine (LSSVM) and the gravitational search algorithm (GSA) is proposed to forecast short-term wind power. As the kernel function and the related parameters of the LSSVM have a great influence on the performance of the prediction model, the paper establishes LSSVM models based on different kernel functions for short-term wind power prediction. An optimal kernel function is then determined, and the parameters of the LSSVM model are optimized using GSA. Compared with the Back Propagation (BP) neural network and support vector machine (SVM) models, the simulation results show that the hybrid LSSVM–GSA model based on the exponential radial basis kernel function and GSA has higher accuracy for short-term wind power prediction. Therefore, the proposed LSSVM–GSA is a better model for short-term wind power prediction.
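
The LSSVM itself reduces to solving one linear system rather than a quadratic program, which is the trick the hybrid model builds on. The sketch below shows that linear system with a Gaussian RBF kernel and fixed hyperparameters; the paper's GSA search over kernel choice and parameters is omitted here.

```python
# Minimal LSSVM regression: fit by solving the KKT linear system.
import numpy as np

def rbf(X1, X2, s):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def lssvm_fit(X, y, gamma=100.0, s=1.0):
    n = len(y)
    K = rbf(X, X, s)
    # LSSVM dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]               # bias b, dual weights alpha

def lssvm_predict(Xt, X, b, alpha, s=1.0):
    return rbf(Xt, X, s) @ alpha + b

# Toy "wind power" signal: fit a smooth curve from samples.
X = np.linspace(0, 6, 40)[:, None]
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, X, b, alpha)
```

In the hybrid model, GSA would repeatedly call such a fit while searching over `gamma`, the kernel width `s`, and the kernel family itself.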

  13. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p …). Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
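
The weighted inverse-power-law fit can be sketched directly with `scipy.optimize.curve_fit`. The model form `acc(n) = a - b * n**(-c)` is the usual choice for classifier learning curves, and the weighting scheme used here (assumed noise shrinking with sample size) is an illustration; the paper's exact parameterization and weights may differ.

```python
# Fit an inverse power law to early learning-curve points, then
# extrapolate classifier accuracy to a larger annotation budget.
import numpy as np
from scipy.optimize import curve_fit

def inv_power(n, a, b, c):
    return a - b * n ** (-c)          # a = asymptotic accuracy

# Simulated learning-curve points from small annotated sets.
n = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
rng = np.random.default_rng(2)
acc = inv_power(n, 0.90, 1.5, 0.7) + rng.normal(0, 0.002, n.size)

# Weighted fit: points from larger training sets get smaller assumed
# noise, so they pull the fit harder (sigma passed to curve_fit).
sigma = 1.0 / np.sqrt(n)
popt, _ = curve_fit(inv_power, n, acc, p0=(0.8, 1.0, 0.5), sigma=sigma)

pred_5000 = inv_power(5000, *popt)    # predicted accuracy at n = 5000
```

Reading off the smallest `n` for which `inv_power(n, *popt)` clears the performance target gives the required annotation sample size.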

  14. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  15. Methodologies for predicting the part-load performance of aero-derivative gas turbines

    DEFF Research Database (Denmark)

    Haglind, Fredrik; Elmegaard, Brian

    2009-01-01

    Prediction of the part-load performance of gas turbines is advantageous in various applications. Sometimes reasonable part-load performance is sufficient, while in other cases complete agreement with the performance of an existing machine is desirable. This paper is aimed at providing some guidance...... on methodologies for predicting part-load performance of aero-derivative gas turbines. Two different design models – one simple and one more complex – are created. Subsequently, for each of these models, the part-load performance is predicted using component maps and turbine constants, respectively. Comparisons...... with manufacturer data are made. With respect to the design models, the simple model, featuring a compressor, combustor and turbines, results in equally good performance prediction in terms of thermal efficiency and exhaust temperature as does a more complex model. As for part-load predictions, the results suggest...

  16. Empirical comparison of web-based antimicrobial peptide prediction tools.

    Science.gov (United States)

    Gabere, Musa Nur; Noble, William Stafford

    2017-07-01

    Antimicrobial peptides (AMPs) are innate immune molecules that exhibit activities against a range of microbes, including bacteria, fungi, viruses and protozoa. Recent increases in microbial resistance against current drugs have led to a concomitant increase in the need for novel antimicrobial agents. Over the last decade, a number of AMP prediction tools have been designed and made freely available online. These AMP prediction tools show potential to discriminate AMPs from non-AMPs, but the relative quality of the predictions produced by the various tools is difficult to quantify. We compiled two sets of AMP and non-AMP peptides, separated into three categories: antimicrobial, antibacterial and bacteriocins. Using these benchmark data sets, we carried out a systematic evaluation of ten publicly available AMP prediction methods. Among the six general AMP prediction tools (ADAM, CAMPR3(RF), CAMPR3(SVM), MLAMP, DBAASP and MLAMP), we find that CAMPR3(RF) provides a statistically significant improvement in performance, as measured by the area under the receiver operating characteristic (ROC) curve, relative to the other five methods. Surprisingly, for antibacterial prediction, the original AntiBP method significantly outperforms its successor, AntiBP2, based on one benchmark dataset. The two bacteriocin prediction tools, BAGEL3 and BACTIBASE, both provide very good performance, and BAGEL3 outperforms its predecessor, BACTIBASE, on the larger of the two benchmarks.
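
The area under the ROC curve used to rank the tools above has a simple rank-sum (Mann-Whitney) form: the probability that a randomly chosen positive outscores a randomly chosen negative. A small self-contained sketch:

```python
# AUC via the rank-sum identity: fraction of (positive, negative) pairs
# in which the positive example receives the higher score (ties count 0.5).
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy benchmark: one tool's scores for true AMPs vs non-AMPs.
perfect = auc([0.9, 0.8, 0.7], [0.6, 0.4])   # fully separated
partial = auc([0.9, 0.5, 0.7], [0.6, 0.4])   # one inversion
```

Running each tool's scores for the same benchmark peptides through `auc` and comparing the results is, in miniature, the evaluation performed in the study.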

  17. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    Science.gov (United States)

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified the five most relevant APMs adopted by the algorithm during prediction. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had a significant association with the "ground truth" in surgery time (p …). The five most relevant APMs adopted by the algorithm in prediction were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.

  18. Web-based turbine cycle performance analysis for nuclear power plants

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Lee, Sung Jin; Chang, Soon Heung; Choi, Seong Soo

    2000-01-01

    As an approach to improving the economic efficiency of operating nuclear power plants, a thermal performance analysis tool for the steam turbine cycle has been developed. For the validation and prediction of the signals used in thermal performance analysis, a few statistical signal processing techniques are integrated. The developed tool provides a predicted-performance calculation capability, namely steady-state wet steam turbine cycle simulation, and a measured-performance calculation capability, which determines component- and cycle-level performance indexes. A web-based interface to all performance analyses is implemented, so even remote users can perform the analysis. Compared to ASME PTC6 (Performance Test Code 6), the focus of the developed tool is historical performance analysis rather than a single accurate performance test. The proposed signal processing techniques are validated using actual plant signals, and the turbine cycle models are tested by benchmarking against a commercial thermal analysis tool.

  19. Enhancing pavement performance prediction models for the Illinois Tollway System

    OpenAIRE

    Laxmikanth Premkumar; William R. Vavrik

    2016-01-01

    Accurate pavement performance prediction plays an important role in prioritizing future maintenance and rehabilitation needs and in predicting future pavement condition in a pavement management system. The Illinois State Toll Highway Authority (Tollway), with over 2000 lane miles of pavement, utilizes the condition rating survey (CRS) methodology to rate pavement performance. Pavement performance models developed in the past for the Illinois Department of Transportation (IDOT) are used by th...

  20. Goal Setting and Expectancy Theory Predictions of Effort and Performance.

    Science.gov (United States)

    Dossett, Dennis L.; Luce, Helen E.

    Neither expectancy (VIE) theory nor goal setting alone is an effective determinant of individual effort and task performance. To test the combined ability of VIE and goal setting to predict effort and performance, 44 real estate agents and their managers completed questionnaires. Quarterly income goals predicted managers' ratings of agents' effort,…

  1. Structural protein descriptors in 1-dimension and their sequence-based predictions.

    Science.gov (United States)

    Kurgan, Lukasz; Disfani, Fatemeh Miri

    2011-09-01

    The last few decades have seen an increasing interest in the development and application of 1-dimensional (1D) descriptors of protein structure. These descriptors project 3D structural features onto 1D strings of residue-wise structural assignments. They cover a wide range of structural aspects including conformation of the backbone, burying depth/solvent exposure and flexibility of residues, and inter-chain residue-residue contacts. We perform a first-of-its-kind comprehensive comparative review of the existing 1D structural descriptors. We define, review and categorize ten structural descriptors, and we also describe, summarize and contrast over eighty computational models that are used to predict these descriptors from protein sequences. We show that the majority of the recent sequence-based predictors utilize machine learning models, the most popular being neural networks, support vector machines, hidden Markov models, and support vector and linear regressions. These methods provide high-throughput predictions and most of them are accessible to a non-expert user via web servers and/or stand-alone software packages. We empirically evaluate several recent sequence-based predictors of secondary structure, disorder, and solvent accessibility descriptors using a benchmark set based on CASP8 targets. Our analysis shows that secondary structure can be predicted with over 80% accuracy and segment overlap (SOV), disorder with over 0.9 AUC, 0.6 Matthews Correlation Coefficient (MCC), and 75% SOV, and relative solvent accessibility with PCC of 0.7 and MCC of 0.6 (0.86 when homology is used). We demonstrate that the secondary structure predicted from sequence without the use of homology modeling is as good as the structure extracted from the 3D folds predicted by top-performing template-based methods.
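
Several of the headline numbers above are Matthews Correlation Coefficients. MCC is computed directly from the binary confusion matrix, which makes it easy to reproduce for any predictor being benchmarked:

```python
# Matthews Correlation Coefficient from confusion-matrix counts.
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Illustrative counts for a residue-wise binary descriptor prediction
# (e.g. disordered vs ordered) on a 100-residue evaluation set.
m = mcc(tp=40, tn=45, fp=5, fn=10)
```

Unlike plain accuracy, MCC stays near zero for a trivial predictor even when the two classes are imbalanced, which is why it is favored for residue-level descriptors such as disorder.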

  2. Measuring and Predicting Sleep and Performance During Military Operations

    Science.gov (United States)

    2012-08-23

    strengths of this modeling approach is that accurate predictions of fatigue, performance, or alertness can be made from observed sleep timing...and, in which fatigue, performance, or alertness predictions are required prior to the task. Limitations of Current Models The strengths and...mean ± SD, 35.9 ± 1.2 hours), crews flew to Auckland, New Zealand, where another short layover was undertaken (23.6 ± 0.95 hours). A final flight

  3. Prediction of Cognitive Performance and Subjective Sleepiness Using a Model of Arousal Dynamics.

    Science.gov (United States)

    Postnova, Svetlana; Lockley, Steven W; Robinson, Peter A

    2018-04-01

    A model of arousal dynamics is applied to predict objective performance and subjective sleepiness measures, including lapses and reaction time on a visual Performance Vigilance Test (vPVT), performance on a mathematical addition task (ADD), and the Karolinska Sleepiness Scale (KSS). The arousal dynamics model is comprised of a physiologically based flip-flop switch between the wake- and sleep-active neuronal populations and a dynamic circadian oscillator, thus allowing prediction of sleep propensity. Published group-level experimental constant routine (CR) and forced desynchrony (FD) data are used to calibrate the model to predict performance and sleepiness. Only the studies using dim light (…) are included; these cover performance measures during CR and FD protocols, with sleep-wake cycles ranging from 20 to 42.85 h and a 2:1 wake-to-sleep ratio. New metrics relating model outputs to performance and sleepiness data are developed and tested against group average outcomes from 7 (vPVT lapses), 5 (ADD), and 8 (KSS) experimental protocols, showing good quantitative and qualitative agreement with the data (root mean squared error of 0.38, 0.19, and 0.35, respectively). The weights of the homeostatic and circadian effects are found to be different between the measures, with KSS having a stronger homeostatic influence compared with the objective measures of performance. Using FD data in addition to CR data allows us to challenge the model in conditions of both acute sleep deprivation and structured circadian misalignment, ensuring that the role of the circadian and homeostatic drives in performance is properly captured.
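
The separate homeostatic and circadian weights mentioned above can be illustrated with the classic two-process form. This is not the paper's arousal-dynamics model; it is a generic sketch with assumed time constants and weights, where a KSS-like subjective measure would use a larger homeostatic weight `w_h` than an objective one.

```python
# Two-process sketch: impairment = weighted homeostatic + circadian drives.
import math

def predicted_impairment(hours_awake, clock_hour, w_h=1.0, w_c=0.5):
    # Homeostatic pressure builds with time awake (assumed 18 h constant).
    homeostatic = 1.0 - math.exp(-hours_awake / 18.0)
    # Circadian drive peaks around the early-morning minimum (~04:00 here).
    circadian = math.cos(2 * math.pi * (clock_hour - 4.0) / 24.0)
    return w_h * homeostatic + w_c * circadian

morning = predicted_impairment(2.0, 9.0)    # rested, mid-morning
late = predicted_impairment(20.0, 4.0)      # long wake at circadian low
```

Fitting `w_h` and `w_c` per measure, as the study does, captures why subjective sleepiness and objective performance diverge under the same sleep schedule.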

  4. Performance prediction of mechanical excavators from linear cutter tests on Yucca Mountain welded tuffs

    International Nuclear Information System (INIS)

    Gertsch, R.; Ozdemir, L.

    1992-09-01

    The performances of mechanical excavators are predicted for excavations in welded tuff. Emphasis is given to tunnel boring machine evaluations based on linear cutting machine test data obtained on samples of Topopah Spring welded tuff. The tests involve measurement of forces as cutters are applied to the rock surface at certain spacing and penetrations. Two disc and two point-attack cutters representing currently available technology are thus evaluated. The performance predictions based on these direct experimental measurements are believed to be more accurate than any previous values for mechanical excavation of welded tuff. The calculations of performance are predicated on minimizing the amount of energy required to excavate the welded tuff. Specific energy decreases with increasing spacing and penetration, and reaches its lowest at the widest spacing and deepest penetration used in this test program. Using the force, spacing, and penetration data from this experimental program, the thrust, torque, power, and rate of penetration are calculated for several types of mechanical excavators. The results of this study show that the candidate excavators will require higher torque and power than heretofore estimated
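
Specific energy, the quantity being minimized above, is cutting work per unit volume of excavated rock. A toy computation from linear-cutter test quantities (the numbers are illustrative; real tests measure the forces directly):

```python
# Specific energy of a linear cut: rolling work per metre of cut divided
# by the rock volume removed per metre (spacing x penetration).
def specific_energy(rolling_force_kN, spacing_mm, penetration_mm):
    volume_m3_per_m = (spacing_mm / 1000.0) * (penetration_mm / 1000.0)
    work_MJ_per_m = rolling_force_kN / 1000.0     # kN over 1 m = kJ -> MJ
    return work_MJ_per_m / volume_m3_per_m        # MJ/m^3

# Example cut: 15 kN rolling force at 76 mm spacing, 6 mm penetration.
se = specific_energy(15.0, 76.0, 6.0)
```

Sweeping spacing and penetration over the tested grid and picking the minimum of this quantity is what identifies the most efficient cutter layout, from which machine thrust, torque, and power follow.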

  5. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
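
Brown's double exponential smoothing, used for the prediction step above, cascades two exponential smoothers to estimate a level and a trend, from which an h-step-ahead forecast follows:

```python
# Brown's double exponential smoothing forecast.
def brown_forecast(series, alpha=0.5, h=1):
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1     # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2    # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + h * trend                  # h-step-ahead forecast

# On a linear state trajectory the forecast continues the line.
series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
pred = brown_forecast(series, alpha=0.5, h=1)
```

In the estimator described above, such forecasts of the state vector seed the candidate population before the differential-evolution filtering step refines them against the measurements.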

  6. The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network

    Science.gov (United States)

    Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.

    2017-05-01

    The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is the key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-time prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for the SCB short-time prediction experiment, and performs well for different types of clocks. The prediction results for the proposed method are better than the conventional methods obviously.
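
The Takagi-Sugeno inference underlying such a network can be shown in miniature: each rule has a Gaussian membership over the input and a linear consequent, and the output is the membership-weighted average of the consequents. The rule parameters below are hand-set for illustration; the paper's network learns them from preprocessed SCB series, and its layer structure is omitted.

```python
# Minimal Takagi-Sugeno fuzzy inference with two rules.
import math

def ts_predict(x, rules):
    """rules: list of (center, width, (a, b)) with consequent y = a*x + b."""
    ws = [math.exp(-((x - c) ** 2) / (2 * w ** 2)) for c, w, _ in rules]
    ys = [a * x + b for _, _, (a, b) in rules]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

rules = [(0.0, 1.0, (1.0, 0.1)),    # "bias small" -> slow linear drift
         (5.0, 1.0, (1.2, -0.5))]   # "bias large" -> faster drift
out = ts_predict(0.0, rules)        # dominated by the first rule
```

Because the consequents are linear, a trained T-S model blends several local linear clock-drift regimes smoothly, which suits the piecewise-trend behavior of satellite clock bias.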

  7. An approach for prediction of petroleum production facility performance considering Arctic influence factors

    International Nuclear Information System (INIS)

    Gao Xueli; Barabady, Javad; Markeset, Tore

    2010-01-01

    As the oil and gas (O and G) industry is increasing the focus on petroleum exploration and development in the Arctic region, it is becoming increasingly important to design exploration and production facilities to suit the local operating conditions. The cold and harsh climate, the long distance from customer and suppliers' markets, and the sensitive environment may have considerable influence on the choice of design solutions and production performance characteristics such as throughput capacity, reliability, availability, maintainability, and supportability (RAMS) as well as operational and maintenance activities. Due to this, data and information collected for similar systems used in a normal climate may not be suitable. Hence, it is important to study and develop methods for prediction of the production performance characteristics during the design and operation phases. The aim of this paper is to present an approach for prediction of the production performance for oil and gas production facilities considering influencing factors in Arctic conditions. The proportional repair model (PRM) is developed in order to predict repair rate in Arctic conditions. The model is based on the proportional hazard model (PHM). A simple case study is used to demonstrate how the proposed approach can be applied.
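
Since the PRM is modeled on the proportional hazard form, its core can be sketched as a baseline repair rate scaled by `exp(beta . z)` over Arctic influence covariates. The covariates and coefficients below are purely illustrative assumptions, not values from the paper.

```python
# Proportional-hazards-style scaling of a baseline repair rate.
import math

def repair_rate(base_rate, betas, covariates):
    return base_rate * math.exp(sum(b * z for b, z in zip(betas, covariates)))

base = 0.05                 # repairs per hour in a normal climate (assumed)
betas = [0.4, 0.25]         # assumed weights: cold exposure, remoteness
normal = repair_rate(base, betas, [0.0, 0.0])   # covariates "off"
arctic = repair_rate(base, betas, [1.0, 1.0])   # covariates "on"
```

The multiplicative structure is the point: production-performance data collected in a normal climate supplies the baseline, and the exponential factor adjusts it for the Arctic operating conditions.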

  8. Pavement Performance : Approaches Using Predictive Analytics

    Science.gov (United States)

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  9. Performance of predictive models in phase equilibria of complex associating systems: PC-SAFT and CEOS/GE

    Directory of Open Access Journals (Sweden)

    N. Bender

    2013-03-01

    Full Text Available Cubic equations of state combined with excess Gibbs energy predictive models (like UNIFAC) and equations of state based on applied statistical mechanics are among the main alternatives for phase equilibria prediction involving polar substances over wide temperature and pressure ranges. In this work, the predictive performances of PC-SAFT with association contribution and Peng-Robinson (PR) combined with UNIFAC (Do) through mixing rules are compared. Binary and multi-component systems involving polar and non-polar substances were analyzed. Results were also compared to experimental data available in the literature. Results show a similar predictive performance for PC-SAFT with association and cubic equations combined with UNIFAC (Do) through mixing rules. Although PC-SAFT with association requires fewer parameters, it is more complex and requires more computation time.

  10. Bayesian based Prognostic Model for Predictive Maintenance of Offshore Wind Farms

    DEFF Research Database (Denmark)

    Asgarpour, Masoud

    2017-01-01

    The operation and maintenance costs of offshore wind farms can be significantly reduced if existing corrective actions are performed as efficiently as possible and if future corrective actions are avoided by performing sufficient preventive actions. In this paper a prognostic model for degradation monitoring, fault detection and predictive maintenance of offshore wind components is defined. The diagnostic model defined in this paper is based on degradation, remaining useful lifetime and hybrid inspection threshold models. The defined degradation model is based on an exponential distribution......

  11. Annotation-Based Whole Genomic Prediction and Selection

    DEFF Research Database (Denmark)

    Kadarmideen, Haja; Do, Duy Ngoc; Janss, Luc

    Genomic selection is widely used in both animal and plant species, however, it is performed with no input from known genomic or biological role of genetic variants and therefore is a black box approach in a genomic era. This study investigated the role of different genomic regions and detected QTLs...... in their contribution to estimated genomic variances and in prediction of genomic breeding values by applying SNP annotation approaches to feed efficiency. Ensembl Variant Predictor (EVP) and Pig QTL database were used as the source of genomic annotation for 60K chip. Genomic prediction was performed using the Bayes...... classes. Predictive accuracy was 0.531, 0.532, 0.302, and 0.344 for DFI, RFI, ADG and BF, respectively. The contribution per SNP to total genomic variance was similar among annotated classes across different traits. Predictive performance of SNP classes did not significantly differ from randomized SNP...

  12. Knowledge-based Fragment Binding Prediction

    Science.gov (United States)

    Tang, Grace W.; Altman, Russ B.

    2014-01-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971

  13. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Science.gov (United States)

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server EPMLR has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.
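
    The MLR step can be sketched as an ordinary least-squares fit over windowed sequence features. The propensity scale, window size, and labels below are invented for illustration; they are not the features EPMLR actually uses.

```python
import numpy as np

# Toy per-residue propensity scale (illustrative values, not EPMLR's).
propensity = {"A": 0.3, "G": 0.5, "P": 0.9, "S": 0.7, "L": 0.1}

def window_features(seq, size=5):
    """Mean and max propensity over sliding windows, plus an intercept term."""
    feats = []
    for i in range(len(seq) - size + 1):
        vals = [propensity.get(r, 0.5) for r in seq[i:i + size]]
        feats.append([np.mean(vals), np.max(vals), 1.0])
    return np.array(feats)

# Fit MLR weights on labelled windows (y = 1 for epitope, 0 otherwise).
X = window_features("AGPSLAGPSL")
y = np.array([0, 1, 1, 1, 0, 0])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

scores = X @ w  # windows scoring above a chosen cutoff are predicted epitopes
```

    Thresholding the regression score turns the continuous MLR output into a binary per-window epitope call.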

  14. Hanford grout: predicting long-term performance

    International Nuclear Information System (INIS)

    Sewart, G.H.; Mitchell, D.H.; Treat, R.L.; McMakin, A.H.

    1987-01-01

    Grouted disposal is being planned for the low-level portion of liquid radioactive wastes at the Hanford site in Washington state. The performance of the disposal system must be such that it will protect people and the environment for thousands of years after disposal. To predict whether a specific grout disposal system will comply with existing and foreseen regulations, a performance assessment (PA) is performed. Long-term PAs are conducted for a range of performance conditions. Performance assessment is an inexact science. Quantifying projected impacts is especially difficult when only scant data exist on the behavior of certain components of the disposal system over thousands of years. To develop defensible results, we are honing the models and obtaining experimental data. The combination of engineered features and PA refinements is being used to ensure that Hanford grout will meet its principal goal: to protect people and the environment in the future.

  15. Size-based predictions of food web patterns

    DEFF Research Database (Denmark)

    Zhang, Lai; Hartvig, Martin; Knudsen, Kim

    2014-01-01

    We employ size-based theoretical arguments to derive simple analytic predictions of ecological patterns and properties of natural communities: size-spectrum exponent, maximum trophic level, and susceptibility to invasive species. The predictions are brought about by assuming that an infinite number...... of species are continuously distributed on a size-trait axis. It is, however, an open question whether such predictions are valid for a food web with a finite number of species embedded in a network structure. We address this question by comparing the size-based predictions to results from dynamic food web...... simulations with varying species richness. To this end, we develop a new size- and trait-based food web model that can be simplified into an analytically solvable size-based model. We confirm existing solutions for the size distribution and derive novel predictions for maximum trophic level and invasion...

  16. Predicting space telerobotic operator training performance from human spatial ability assessment

    Science.gov (United States)

    Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan

    2013-11-01

    Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so long and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool to predict robotics ability. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification vs. misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are reliable enough only to be used to customize regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
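
    The logistic model relating a spatial test score to the probability of passing the robotics evaluation can be sketched as follows. The data points and the 0.5 criterion threshold are illustrative, not the study's values.

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Fit P(pass) = sigmoid(a*x + b) by gradient descent on the log-loss."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        a -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return a, b

# Illustrative data: standardized mental rotation scores vs. pass (1) / fail (0).
scores = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
passed = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0])
a, b = fit_logistic(scores, passed)

def p_pass(score, a=a, b=b):
    return 1.0 / (1.0 + np.exp(-(a * score + b)))

# Varying the criterion threshold trades misclassification types against
# each other, which is the selection issue discussed in the abstract.
predicted_qualified = p_pass(1.0) > 0.5
```

    Sweeping the threshold instead of fixing it at 0.5 yields the classification-vs-misclassification trade-off curve the authors describe.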

  17. DEEPre: sequence-based enzyme EC number prediction by deep learning

    KAUST Repository

    Li, Yu

    2017-10-20

    Annotation of enzyme function has a broad range of applications, such as metagenomics, industrial biotechnology, and diagnosis of enzyme deficiency-caused diseases. However, the time and resources required make it prohibitively expensive to experimentally determine the function of every enzyme. Therefore, computational enzyme function prediction has become increasingly important. In this paper, we develop such an approach, determining the enzyme function by predicting the Enzyme Commission number. We propose an end-to-end feature selection and classification model training approach, as well as an automatic and robust feature dimensionality uniformization method, DEEPre, in the field of enzyme function prediction. Instead of extracting manually crafted features from enzyme sequences, our model takes the raw sequence encoding as inputs, extracting convolutional and sequential features from the raw encoding based on the classification result to directly improve the prediction performance. The thorough cross-fold validation experiments conducted on two large-scale datasets show that DEEPre improves the prediction performance over the previous state-of-the-art methods. In addition, our server outperforms five other servers in determining the main class of enzymes on a separate low-homology dataset. Two case studies demonstrate DEEPre's ability to capture the functional difference of enzyme isoforms. The server could be accessed freely at http://www.cbrc.kaust.edu.sa/DEEPre.

  18. DEEPre: sequence-based enzyme EC number prediction by deep learning

    KAUST Repository

    Li, Yu; Wang, Sheng; Umarov, Ramzan; Xie, Bingqing; Fan, Ming; Li, Lihua; Gao, Xin

    2017-01-01

    Annotation of enzyme function has a broad range of applications, such as metagenomics, industrial biotechnology, and diagnosis of enzyme deficiency-caused diseases. However, the time and resources required make it prohibitively expensive to experimentally determine the function of every enzyme. Therefore, computational enzyme function prediction has become increasingly important. In this paper, we develop such an approach, determining the enzyme function by predicting the Enzyme Commission number. We propose an end-to-end feature selection and classification model training approach, as well as an automatic and robust feature dimensionality uniformization method, DEEPre, in the field of enzyme function prediction. Instead of extracting manually crafted features from enzyme sequences, our model takes the raw sequence encoding as inputs, extracting convolutional and sequential features from the raw encoding based on the classification result to directly improve the prediction performance. The thorough cross-fold validation experiments conducted on two large-scale datasets show that DEEPre improves the prediction performance over the previous state-of-the-art methods. In addition, our server outperforms five other servers in determining the main class of enzymes on a separate low-homology dataset. Two case studies demonstrate DEEPre's ability to capture the functional difference of enzyme isoforms. The server could be accessed freely at http://www.cbrc.kaust.edu.sa/DEEPre.

  19. TWT transmitter fault prediction based on ANFIS

    Science.gov (United States)

    Li, Mengyan; Li, Junshan; Li, Shuangshuang; Wang, Wenqing; Li, Fen

    2017-11-01

    Fault prediction is an important component of health management and plays an important role in guaranteeing the reliability of complex electronic equipment. The transmitter is a unit with a high failure rate, and degradation of the TWT's cathode performance is a common transmitter fault. In this dissertation, a model based on a set of key parameters of the TWT is proposed. By choosing proper parameters and applying an adaptive neural network training model, this method, combined with the analytic hierarchy process (AHP), provides a useful reference for judging the overall health of TWT transmitters.

  20. Gender Differences in Performance Predictions: Evidence from the Cognitive Reflection Test.

    Science.gov (United States)

    Ring, Patrick; Neyse, Levent; David-Barett, Tamas; Schmidt, Ulrich

    2016-01-01

    This paper studies performance predictions in the 7-item Cognitive Reflection Test (CRT) and whether they differ by gender. After participants completed the CRT, they predicted their own (i), the other participants' (ii), men's (iii), and women's (iv) number of correct answers. In keeping with existing literature, men scored higher on the CRT than women and both men and women were too optimistic about their own performance. When we compare gender-specific predictions, we observe that men think they perform significantly better than other men and do so significantly more than women. The equality between women's predictions about their own performance and their female peers cannot be rejected. Our findings contribute to the growing literature on the underpinnings of behavior in economics and in psychology by uncovering gender differences in confidence about one's ability relative to same and opposite sex peers.

  1. Burst muscle performance predicts the speed, acceleration, and turning performance of Anna's hummingbirds.

    Science.gov (United States)

    Segre, Paolo S; Dakin, Roslyn; Zordan, Victor B; Dickinson, Michael H; Straw, Andrew D; Altshuler, Douglas L

    2015-11-19

    Despite recent advances in the study of animal flight, the biomechanical determinants of maneuverability are poorly understood. It is thought that maneuverability may be influenced by intrinsic body mass and wing morphology, and by physiological muscle capacity, but this hypothesis has not yet been evaluated because it requires tracking a large number of free flight maneuvers from known individuals. We used an automated tracking system to record flight sequences from 20 Anna's hummingbirds flying solo and in competition in a large chamber. We found that burst muscle capacity predicted most performance metrics. Hummingbirds with higher burst capacity flew with faster velocities, accelerations, and rotations, and they used more demanding complex turns. In contrast, body mass did not predict variation in maneuvering performance, and wing morphology predicted only the use of arcing turns and high centripetal accelerations. Collectively, our results indicate that burst muscle capacity is a key predictor of maneuverability.

  2. Predicting Student Academic Performance: A Comparison of Two Meta-Heuristic Algorithms Inspired by Cuckoo Birds for Training Neural Networks

    Directory of Open Access Journals (Sweden)

    Jeng-Fung Chen

    2014-10-01

    Full Text Available Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance, based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely, Cuckoo Search (CS) and Cuckoo Optimization Algorithm (COA), is proposed. In particular, we used previous exam results and other factors, such as the location of the student’s high school and the student’s gender, as input variables, and predicted the student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and the biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential in training ANN, and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.
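
    The idea of training a feed-forward network with a cuckoo-style population search can be sketched as follows: candidate weight vectors ("nests") are improved by heavy-tailed random steps, and the worst nests are periodically abandoned. This is a simplified sketch on toy data, not the paper's exact CS/COA variants (for instance, it uses Cauchy steps in place of a proper Lévy flight).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: predict exam outcome from two standardized input scores.
X = rng.normal(size=(40, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def forward(w, X):
    """One-hidden-layer network; w packs a 2x4 layer, then a 4x1 layer."""
    W1, W2 = w[:8].reshape(2, 4), w[8:].reshape(4, 1)
    h = np.tanh(X @ W1)
    return (1.0 / (1.0 + np.exp(-(h @ W2)))).ravel()

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# Cuckoo-search-style training: nests are candidate weight vectors, new
# solutions come from heavy-tailed steps, and the worst nests are abandoned.
nests = rng.normal(size=(15, 12))
init_best = min(loss(n) for n in nests)
for _ in range(200):
    for i in range(len(nests)):
        trial = nests[i] + rng.standard_cauchy(12) * 0.1  # heavy-tailed step
        if loss(trial) < loss(nests[i]):
            nests[i] = trial
    order = np.argsort([loss(n) for n in nests])
    nests[order[-3:]] = rng.normal(size=(3, 12))  # abandon the worst nests

best = min(nests, key=loss)
accuracy = np.mean((forward(best, X) > 0.5) == (y == 1))
```

    Because the best nest is never abandoned and each nest only accepts improving steps, the population's best loss is non-increasing over iterations.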

  3. Disturbance estimator based predictive current control of grid-connected inverters

    OpenAIRE

    Al-Khafaji, Ahmed Samawi Ghthwan

    2013-01-01

    ABSTRACT: The work presented in my thesis considers one of the modern discrete-time control approaches based on digital signal processing methods that have been developed to improve the control performance of grid-connected three-phase inverters. Disturbance estimator based predictive current control of grid-connected inverters is proposed. For inverter modeling with respect to the design of current controllers, we choose the d-q synchronous reference frame to make it easier to understand an...

  4. Use of Neural Networks for modeling and predicting boiler's operating performance

    International Nuclear Information System (INIS)

    Kljajić, Miroslav; Gvozdenac, Dušan; Vukmirović, Srdjan

    2012-01-01

    The need for high boiler operating performance requires the application of improved techniques for the rational use of energy. The analysis presented is guided by an effort to find ways in which energy resources can be used wisely to secure a more efficient final energy supply. However, the biggest challenges are related to the variety and stochastic nature of influencing factors. The paper presents a method for modeling, assessing, and predicting the efficiency of boilers based on measured operating performance. The method utilizes a neural network approach to analyze and predict boiler efficiency and also to discover possibilities for enhancing efficiency. The analysis is based on energy surveys of 65 randomly selected boilers located at over 50 sites in the northern province of Serbia. These surveys included a representative range of industrial, public and commercial users of steam and hot water. The sample covered approximately 25% of all boilers in the province and yielded reliable and relevant results. Creating a database with soft-computing assistance opens a wide range of possibilities for identifying and assessing influencing factors and for critically evaluating supply-side practices as a source of the identified inefficiency. -- Highlights: ► We develop a model for assessing and predicting the efficiency of boilers. ► The method implies the use of an Artificial Neural Network approach for analysis. ► The results obtained correspond well to collected and measured data. ► Findings confirm and present good abilities of a preventive or proactive approach. ► Analysis reveals and specifies opportunities for increasing the efficiency of boilers.

  5. Literature-based condition-specific miRNA-mRNA target prediction.

    Directory of Open Access Journals (Sweden)

    Minsik Oh

    Full Text Available miRNAs are small non-coding RNAs that regulate gene expression by binding to the 3'-UTR of genes. Many recent studies have reported that miRNAs play important biological roles by regulating specific mRNAs or genes. Many sequence-based target prediction algorithms have been developed to predict miRNA targets. However, these methods are not designed for condition-specific target predictions and produce many false positives; thus, expression-based target prediction algorithms have been developed for condition-specific target predictions. A typical strategy to utilize expression data is to leverage the negative control roles of miRNAs on genes. To control false positives, a stringent cutoff value is typically set, but in this case, these methods tend to reject many true target relationships, i.e., false negatives. To overcome these limitations, additional information should be utilized. The literature is probably the best resource that we can utilize. Recent literature mining systems compile millions of articles with experiments designed for specific biological questions, and the systems provide a function to search for specific information. To utilize the literature information, we used a literature mining system, BEST, that automatically extracts information from the literature in PubMed and that allows the user to perform searches of the literature with any English words. By integrating omics data analysis methods and BEST, we developed Context-MMIA, a miRNA-mRNA target prediction method that combines expression data analysis results and the literature information extracted based on the user-specified context. In the pathway enrichment analysis using genes included in the top 200 miRNA-targets, Context-MMIA outperformed the four existing target prediction methods that we tested. 
In another test on whether prediction methods can re-produce experimentally validated target relationships, Context-MMIA outperformed the four existing target prediction

  6. Offset Free Tracking Predictive Control Based on Dynamic PLS Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2017-10-01

    Full Text Available This paper develops an offset-free tracking model predictive control based on a dynamic partial least squares (PLS) framework. First, a state-space model is used as the inner model of PLS to describe the dynamic system, where a subspace identification method is used to identify the inner model. Based on the obtained model, multiple independent model predictive control (MPC) controllers are designed. Due to the decoupling character of PLS, these controllers run separately, which is suitable for a distributed control framework. In addition, the increment of the inner model output is considered in the cost function of the MPC, which introduces integral action into the controller. Hence, offset-free tracking performance is guaranteed. The results of an industry-background simulation demonstrate the effectiveness of the proposed method.

  7. Gender Differences in Performance Predictions: Evidence from the Cognitive Reflection Test

    Directory of Open Access Journals (Sweden)

    Patrick Ring

    2016-11-01

    Full Text Available This paper studies performance predictions in the 7-item Cognitive Reflection Test (CRT) and whether they differ by gender. After participants completed the CRT, they predicted their own (i), the other participants’ (ii), men’s (iii), and women’s (iv) number of correct answers. In keeping with existing literature, men scored higher on the CRT than women and both men and women were too optimistic about their own performance. When we compare gender-specific predictions, we observe that men think they perform significantly better than other men and do so significantly more than women. The equality between women’s predictions about their own performance and their female peers cannot be rejected. Our findings contribute to the growing literature on the underpinnings of behavior in economics and in psychology by uncovering gender differences in confidence about one’s ability relative to same and opposite sex peers.

  8. Module-based outcome prediction using breast cancer compendia.

    Directory of Open Access Journals (Sweden)

    Martin H van Vliet

    Full Text Available BACKGROUND: The availability of large collections of microarray datasets (compendia), or knowledge about grouping of genes into pathways (gene sets), is typically not exploited when training predictors of disease outcome. These can be useful since a compendium increases the number of samples, while gene sets reduce the size of the feature space. This should be favorable from a machine learning perspective and result in more robust predictors. METHODOLOGY: We extracted modules of regulated genes from gene sets, and compendia. Through supervised analysis, we constructed predictors which employ modules predictive of breast cancer outcome. To validate these predictors we applied them to independent data, from the same institution (intra-dataset), and other institutions (inter-dataset). CONCLUSIONS: We show that modules derived from single breast cancer datasets achieve better performance on the validation data compared to gene-based predictors. We also show that there is a trend in compendium specificity and predictive performance: modules derived from a single breast cancer dataset, and a breast cancer specific compendium perform better compared to those derived from a human cancer compendium. Additionally, the module-based predictor provides a much richer insight into the underlying biology. Frequently selected gene sets are associated with processes such as cell cycle, E2F regulation, DNA damage response, proteasome and glycolysis. We analyzed two modules related to cell cycle, and the OCT1 transcription factor, respectively. On an individual basis, these modules provide a significant separation in survival subgroups on the training and independent validation data.

  9. The trickle-down effect of predictability: Secondary task performance benefits from predictability in the primary task.

    Directory of Open Access Journals (Sweden)

    Magdalena Ewa Król

    Full Text Available Predictions optimize processing by reducing attentional resources allocation to expected or predictable sensory data. Our study demonstrates that these saved processing resources can be then used on concurrent stimuli, and in consequence improve their processing and encoding. We illustrate this "trickle-down" effect with a dual task, where the primary task varied in terms of predictability. The primary task involved detection of a pre-specified symbol that appeared at some point of a short video of a dot moving along a random, semi-predictable or predictable trajectory. The concurrent secondary task involved memorization of photographs representing either emotionally neutral or non-neutral (social or threatening) content. Performance in the secondary task was measured by a memory test. We found that participants allocated more attention to unpredictable (random and semi-predictable) stimuli than to predictable stimuli. Additionally, when the stimuli in the primary task were more predictable, participants performed better in the secondary task, as evidenced by higher sensitivity in the memory test. Finally, social or threatening stimuli were allocated more "looking time" and a larger number of saccades than neutral stimuli. This effect was stronger for the threatening stimuli than social stimuli. Thus, predictability of environmental input is used in optimizing the allocation of attentional resources, which trickles-down and benefits the processing of concurrent stimuli.

  10. The trickle-down effect of predictability: Secondary task performance benefits from predictability in the primary task.

    Science.gov (United States)

    Król, Magdalena Ewa; Król, Michał

    2017-01-01

    Predictions optimize processing by reducing attentional resources allocation to expected or predictable sensory data. Our study demonstrates that these saved processing resources can be then used on concurrent stimuli, and in consequence improve their processing and encoding. We illustrate this "trickle-down" effect with a dual task, where the primary task varied in terms of predictability. The primary task involved detection of a pre-specified symbol that appeared at some point of a short video of a dot moving along a random, semi-predictable or predictable trajectory. The concurrent secondary task involved memorization of photographs representing either emotionally neutral or non-neutral (social or threatening) content. Performance in the secondary task was measured by a memory test. We found that participants allocated more attention to unpredictable (random and semi-predictable) stimuli than to predictable stimuli. Additionally, when the stimuli in the primary task were more predictable, participants performed better in the secondary task, as evidenced by higher sensitivity in the memory test. Finally, social or threatening stimuli were allocated more "looking time" and a larger number of saccades than neutral stimuli. This effect was stronger for the threatening stimuli than social stimuli. Thus, predictability of environmental input is used in optimizing the allocation of attentional resources, which trickles-down and benefits the processing of concurrent stimuli.

  11. Predicting microRNA precursors with a generalized Gaussian components based density estimation algorithm

    Directory of Open Access Journals (Sweden)

    Wu Chi-Yeh

    2010-01-01

    Full Text Available Abstract Background MicroRNAs (miRNAs) are short non-coding RNA molecules, which play an important role in post-transcriptional regulation of gene expression. There have been many efforts to discover miRNA precursors (pre-miRNAs) over the years. Recently, ab initio approaches have attracted more attention because they do not depend on homology information and provide broader applications than comparative approaches. Kernel based classifiers such as the support vector machine (SVM) are extensively adopted in these ab initio approaches due to the prediction performance they achieve. On the other hand, logic based classifiers such as the decision tree, whose constructed model is interpretable, have attracted less attention. Results This article reports the design of a predictor of pre-miRNAs with a novel kernel based classifier, the generalized Gaussian density estimator (G2DE) based classifier. The G2DE is a kernel based algorithm designed to provide interpretability by utilizing a few but representative kernels for constructing the classification model. The performance of the proposed predictor has been evaluated with 692 human pre-miRNAs and has been compared with two kernel based and two logic based classifiers. The experimental results show that the proposed predictor is capable of achieving prediction performance comparable to those delivered by the prevailing kernel based classification algorithms, while providing the user with an overall picture of the distribution of the data set. Conclusion Software predictors that identify pre-miRNAs in genomic sequences have been exploited by biologists to facilitate molecular biology research in recent years. The G2DE employed in this study can deliver prediction accuracy comparable with the state-of-the-art kernel based machine learning algorithms. 
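
    The density-estimation idea can be illustrated with plain Gaussian kernel density estimation: classify a feature value by whichever class density is higher. Note that this toy sketch places one kernel per training point, whereas G2DE's contribution is to fit a few representative generalized Gaussian components; the 1-D features below are invented.

```python
import math

def gaussian_kde(points, x, bandwidth=0.5):
    """Average of Gaussian kernels centred on the training points."""
    norm = 1.0 / (bandwidth * math.sqrt(2 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
               for p in points) / len(points)

# Toy 1-D features for positive (pre-miRNA) and negative sequences.
pos, neg = [1.0, 1.2, 0.9, 1.1], [-0.8, -1.0, -1.1, -0.7]

def classify(x):
    """Assign the class whose estimated density at x is larger."""
    return "pre-miRNA" if gaussian_kde(pos, x) > gaussian_kde(neg, x) else "other"
```

    Replacing the per-point kernels with a handful of fitted components is what lets G2DE-style models stay interpretable while keeping the density-comparison decision rule.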
Furthermore, biologists can obtain valuable insights about the different characteristics of the sequences of pre-miRNAs with the models generated by the G

  12. An auxiliary optimization method for complex public transit route network based on link prediction

    Science.gov (United States)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
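
    The structural-similarity indices used in such experiments can be sketched on a toy network; common neighbors, Jaccard, and resource allocation are three of the standard local indices. The graph below is illustrative, not Jinan's PTRN.

```python
import itertools

# Toy undirected transit-route graph as adjacency sets (stop -> neighbours).
graph = {
    "A": {"B", "C"}, "B": {"A", "C", "D"},
    "C": {"A", "B", "D"}, "D": {"B", "C", "E"}, "E": {"D"},
}

def common_neighbors(u, v):
    return len(graph[u] & graph[v])

def jaccard(u, v):
    union = graph[u] | graph[v]
    return len(graph[u] & graph[v]) / len(union) if union else 0.0

def resource_allocation(u, v):
    # Each common neighbour contributes inversely to its degree.
    return sum(1.0 / len(graph[w]) for w in graph[u] & graph[v])

# Score all currently unlinked pairs; top scores are predicted missing links.
pairs = [(u, v) for u, v in itertools.combinations(graph, 2)
         if v not in graph[u]]
ranked = sorted(pairs, key=lambda p: resource_allocation(*p), reverse=True)
```

    Ranking existing links by the same scores, lowest first, gives candidate spurious links, mirroring the two uses of link prediction described above.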

  13. Review and evaluation of performance measures for survival prediction models in external validation settings

    Directory of Open Access Journals (Sweden)

    M. Shafiqur Rahman

    2017-04-01

    Full Text Available Abstract Background When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. Methods An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Results Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell’s concordance measure, which tended to increase as censoring increased. Conclusions We recommend that Uno’s concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller’s measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston’s D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings and is recommended for routine reporting. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive
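
    Since Harrell's concordance measure features prominently in this comparison, a minimal sketch of how it is computed on right-censored data may help. The pairing rule below is the standard simplified version (it ignores some tie-in-time conventions), and the toy survival data are invented.

```python
# Hedged sketch of Harrell's concordance (C-index) for survival data:
# the fraction of comparable pairs in which the model's risk ordering
# agrees with the observed event ordering.
def harrell_c(times, events, risks):
    """times: observed times; events: 1=event, 0=censored; risks: higher=worse."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if subject i is known to fail first.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # tied risks count half
    return concordant / comparable

times  = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.7, 0.6, 0.4, 0.1]  # perfectly anti-ordered with time
print(harrell_c(times, events, risks))  # 1.0 for this perfectly ranked toy set
```

    The paper's observation that this estimate drifts upward under heavy censoring is visible in the pairing rule: censored subjects only ever appear as the later member of a pair, so the comparable set shrinks in a non-random way.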

  14. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    Science.gov (United States)

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
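
    As background, a candidate fold in the HP model is typically scored by counting non-bonded H-H contacts on the lattice, and search algorithms like HE-L-PSO minimize this energy. The sketch below shows that energy function on a toy 2D sequence and self-avoiding walk; the sequence and coordinates are illustrative, not from the paper.

```python
# Minimal sketch of the 2D HP lattice model energy used to score a
# candidate protein fold: energy = -(number of non-bonded H-H contacts).
def hp_energy(sequence, coords):
    """sequence: string of 'H'/'P'; coords: lattice (x, y) per residue."""
    pos = {c: i for i, c in enumerate(coords)}
    contacts = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for nb in ((x + 1, y), (x, y + 1)):  # each adjacency counted once
            j = pos.get(nb)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                contacts += 1
    return -contacts

seq = "HPPH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]  # a self-avoiding walk (a square)
print(hp_energy(seq, fold))  # -1: one H-H contact between residues 0 and 3
```

    A folding optimizer then searches over self-avoiding walks for the conformation with the lowest such energy, which is what produces the hydrophobic core mentioned in the abstract.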

  15. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and the numerical prediction model is often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than by using a complex numerical forecasting model that would occupy large computation resources, be time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  16. Prediction of Gas Injection Performance for Heterogeneous Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Blunt, Martin J.; Orr, Franklin M.

    1999-05-17

    This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 - September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) Pore scale modeling of three phase flow in porous media; (2) Laboratory experiments and analysis of factors influencing gas injection performance at the core scale with an emphasis on the fundamentals of three phase flow; (3) Benchmark simulations of gas injection at the field scale; and (4) Development of a streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next stage, such that at the end we should have an integrated understanding of the key factors affecting field scale displacements.

  17. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations.

    Science.gov (United States)

    Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping

    2017-03-19

    The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches have the drawbacks of inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representation (PSR). With the proposed approach, operating modes of MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable in the battlefield. It helps to reduce the dependence of MFR on prior information. And MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation brought by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity.

  18. Comparative values of medical school assessments in the prediction of internship performance.

    Science.gov (United States)

    Lee, Ming; Vermillion, Michelle

    2018-02-01

    Multiple undergraduate achievements have been used for graduate admission consideration. Their relative values in the prediction of residency performance are not clear. This study compared the contributions of major undergraduate assessments to the prediction of internship performance. Internship performance ratings of the graduates of a medical school were collected from 2012 to 2015. Hierarchical multiple regression analyses were used to examine the predictive values of undergraduate measures assessing basic and clinical sciences knowledge and clinical performances, after controlling for differences in the Medical College Admission Test (MCAT). Four hundred eighty (75%) graduates' archived data were used in the study. Analyses revealed that clinical competencies, assessed by the USMLE Step 2 CK, NBME medicine exam, and an eight-station objective structured clinical examination (OSCE), were strong predictors of internship performance. Neither the USMLE Step 1 nor the inpatient internal medicine clerkship evaluation predicted internship performance. The undergraduate assessments as a whole showed a significant collective relationship with internship performance (ΔR² = 0.12, p < 0.001). The study supports the use of clinical competency assessments, instead of pre-clinical measures, in graduate admission consideration. It also provides validity evidence for OSCE scores in the prediction of workplace performance.

  19. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In the paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, which is an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model for the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, the simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy, and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
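
    A minimal numpy-only sketch of balanced truncation, using a small stable LTI stand-in rather than an actual belt-conveyor model: solve the Lyapunov equations for the controllability and observability Gramians, balance them with the square-root method, and keep the states with the largest Hankel singular values. All matrix values are invented.

```python
# Balanced truncation sketch for dx/dt = A x + B u, y = C x (A stable).
import numpy as np

def lyap(A, W):
    """Solve A X + X A^T + W = 0 via a Kronecker-product linear system."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A, I) + np.kron(I, A)
    return np.linalg.solve(K, -W.reshape(-1)).reshape(n, n)

A = np.array([[-1.0, 0.2, 0.0], [0.0, -2.0, 0.3], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [0.5], [0.1]])
C = np.array([[1.0, 0.0, 0.2]])

P = lyap(A, B @ B.T)        # controllability Gramian
Q = lyap(A.T, C.T @ C)      # observability Gramian

Lp = np.linalg.cholesky(P)  # P = Lp Lp^T
Lq = np.linalg.cholesky(Q)  # Q = Lq Lq^T
Z, s, Yt = np.linalg.svd(Lq.T @ Lp)   # s = Hankel singular values

k = 2                                  # reduced order
S = np.diag(s[:k] ** -0.5)
T = Lp @ Yt[:k].T @ S                  # right projection
Ti = S @ Z[:, :k].T @ Lq.T             # left projection
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T
print(np.round(s, 6))                  # small trailing values are truncated
```

    The ratio of the discarded Hankel singular values to the retained ones bounds the model-reduction error, which is exactly the error budget the paper's two Kalman estimators are there to absorb.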

  20. Performance analysis of a potassium-base AMTEC cell

    International Nuclear Information System (INIS)

    Huang, C.; Hendricks, T.J.; Hunt, T.K.

    1998-01-01

    Sodium-BASE Alkali-Metal-Thermal-to-Electric-Conversion (AMTEC) cells have been receiving increased attention and funding from the Department of Energy, NASA and the United States Air Force. Recently, sodium-BASE (Na-BASE) AMTEC cells were selected for the Advanced Radioisotope Power System (ARPS) program for the next generation of deep-space missions and spacecraft. Potassium-BASE (K-BASE) AMTEC cells have not received as much attention to date, even though the vapor pressure of potassium is higher than that of sodium at the same temperature, so K-BASE AMTEC cells with potentially higher open-circuit voltage and higher power output than Na-BASE AMTEC cells are possible. Because the surface tension of potassium is about half that of sodium at the same temperature, the artery and evaporator design in a potassium AMTEC cell has much more challenging pore size requirements than designs using sodium. This paper uses a flexible thermal/fluid/electrical model to predict the performance of a K-BASE AMTEC cell. Pore sizes in the artery of K-BASE AMTEC cells must be smaller by an order of magnitude than in Na-BASE AMTEC cells. The performance of a K-BASE AMTEC cell was higher than that of a Na-BASE AMTEC cell at low voltages/high currents. K-BASE AMTEC cells also have the potential of much better electrode performance, thereby creating another avenue for potentially better performance in K-BASE AMTEC cells.

  1. A polynomial based model for cell fate prediction in human diseases.

    Science.gov (United States)

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial based model to predict cell fate. This model was derived from Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e. correlation based and apoptosis pathway based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, within both the two considered gene selection methods, the prediction accuracies of polynomials of different degrees show little differences. Interestingly, the linear polynomial (degree 1 polynomial) is more stable than others. When comparing the linear polynomials based on the two gene selection methods, it shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is a little higher (achieves 86.62%), the one within genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is a preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical study of cell development related diseases.

  2. Automatic evidence quality prediction to support evidence-based decision making.

    Science.gov (United States)

    Sarker, Abeed; Mollá, Diego; Paris, Cécile

    2015-06-01

    Evidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care. Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments. We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. The experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance.

  3. Predicting the performance of fingerprint similarity searching.

    Science.gov (United States)

    Vogt, Martin; Bajorath, Jürgen

    2011-01-01

    Fingerprints are bit string representations of molecular structure that typically encode structural fragments, topological features, or pharmacophore patterns. Various fingerprint designs are utilized in virtual screening and their search performance essentially depends on three parameters: the nature of the fingerprint, the active compounds serving as reference molecules, and the composition of the screening database. It is of considerable interest and practical relevance to predict the performance of fingerprint similarity searching. A quantitative assessment of the potential that a fingerprint search might successfully retrieve active compounds, if available in the screening database, would substantially help to select the type of fingerprint most suitable for a given search problem. The method presented herein utilizes concepts from information theory to relate the fingerprint feature distributions of reference compounds to screening libraries. If these feature distributions do not sufficiently differ, active database compounds that are similar to reference molecules cannot be retrieved because they disappear in the "background." By quantifying the difference in feature distribution using the Kullback-Leibler divergence and relating the divergence to compound recovery rates obtained for different benchmark classes, fingerprint search performance can be quantitatively predicted.
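
    The core idea can be sketched by treating each fingerprint bit as a Bernoulli variable and summing per-bit Kullback-Leibler terms between the reference-set and database bit frequencies. The tiny fingerprints below are invented, and the clipping constant eps is an assumption to avoid log(0), not a value from the paper.

```python
# Hedged sketch: per-bit Bernoulli KL divergence between the feature
# frequencies of reference compounds and a screening database. A larger
# divergence suggests actives stand out from the "background".
import math

refs = [[1, 1, 0, 1, 0], [1, 0, 0, 1, 0], [1, 1, 0, 1, 1]]
db   = [[0, 1, 1, 0, 0], [1, 0, 1, 0, 1], [0, 0, 1, 1, 0], [0, 1, 0, 0, 1]]

def bit_freqs(fps, eps=1e-3):
    # Frequency of each bit being set, clipped away from 0 and 1.
    n = len(fps)
    return [min(max(sum(fp[i] for fp in fps) / n, eps), 1 - eps)
            for i in range(len(fps[0]))]

def kl_bits(p, q):
    # Sum of per-bit Bernoulli KL terms D(p_i || q_i).
    return sum(pi * math.log(pi / qi) + (1 - pi) * math.log((1 - pi) / (1 - qi))
               for pi, qi in zip(p, q))

d = kl_bits(bit_freqs(refs), bit_freqs(db))
print(round(d, 3))
```

    In the approach described above, this divergence would then be related to benchmark recovery rates, so that a near-zero divergence predicts that similar actives will "disappear in the background".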

  4. Children's biological responsivity to acute stress predicts concurrent cognitive performance.

    Science.gov (United States)

    Roos, Leslie E; Beauchamp, Kathryn G; Giuliano, Ryan; Zalewski, Maureen; Kim, Hyoun K; Fisher, Philip A

    2018-04-10

    Although prior research has characterized stress system reactivity (i.e. hypothalamic-pituitary-adrenal axis, HPAA; autonomic nervous system, ANS) in children, it has yet to examine the extent to which biological reactivity predicts concurrent goal-directed behavior. Here, we employed a stressor paradigm that allowed concurrent assessment of both stress system reactivity and performance on a speeded-response task to investigate the links between biological reactivity and cognitive function under stress. We further investigated gender as a moderator given previous research suggesting that the ANS may be particularly predictive of behavior in males due to gender differences in socialization. In a sociodemographically diverse sample of young children (N = 58, M age = 5.38 yrs; 44% male), individual differences in sociodemographic covariates (age, household income), HPAA (i.e. cortisol), and ANS (i.e. respiratory sinus arrhythmia, RSA, indexing the parasympathetic branch; pre-ejection period, PEP, indexing the sympathetic branch) function were assessed as predictors of cognitive performance under stress. We hypothesized that higher income, older age, and greater cortisol reactivity would be associated with better performance overall, and flexible ANS responsivity (i.e. RSA withdrawal, PEP shortening) would be predictive of performance for males. Overall, females performed better than males. Two-group SEM analyses suggest that, for males, greater RSA withdrawal to the stressor was associated with better performance, while for females, older age, higher income, and greater cortisol reactivity were associated with better performance. Results highlight the relevance of stress system reactivity to cognitive performance under stress. Future research is needed to further elucidate for whom and in what situations biological reactivity predicts goal-directed behavior.

  5. A New Approach to Fatigue Life Prediction Based on Nucleation and Growth (Preprint)

    National Research Council Canada - National Science Library

    McClung, R. C; Francis, W. L; Hudak, S. J

    2006-01-01

    Prediction of total fatigue life in components is often performed by summing "initiation" and "propagation" life phases, where initiation life is based on stress-life or strain-life methods calibrated...

  6. A Compact Methodology to Understand, Evaluate, and Predict the Performance of Automatic Target Recognition

    Science.gov (United States)

    Li, Yanpeng; Li, Xiang; Wang, Hongqiang; Chen, Yiping; Zhuang, Zhaowen; Cheng, Yongqiang; Deng, Bin; Wang, Liandong; Zeng, Yonghu; Gao, Lei

    2014-01-01

    This paper offers a compacted mechanism to carry out the performance evaluation work for an automatic target recognition (ATR) system: (a) a standard description of the ATR system's output is suggested, a quantity to indicate the operating condition is presented based on the principle of feature extraction in pattern recognition, and a series of indexes to assess the output in different aspects are developed with the application of statistics; (b) performance of the ATR system is interpreted by a quality factor based on knowledge of engineering mathematics; (c) through a novel utility called “context-probability” estimation proposed based on probability, performance prediction for an ATR system is realized. The simulation result shows that the performance of an ATR system can be accounted for and forecasted by the above-mentioned measures. Compared to existing technologies, the novel method can offer more objective performance conclusions for an ATR system. These conclusions may be helpful in knowing the practical capability of the tested ATR system. At the same time, the generalization performance of the proposed method is good. PMID:24967605

  7. Hyperformance: predicting high-speed performance of a b-double

    CSIR Research Space (South Africa)

    Berman, Robert J

    2016-11-01

    Full Text Available of the vehicles. The prediction model bridges that gap in the form of a light-weight methodology to predict the PBS performance of a new vehicle design given a set of vehicle input data. Such a model was developed for typical South African 9-axle B-double PBS...

  8. A Prediction Method of Airport Noise Based on Hybrid Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Tao XU

    2014-05-01

    Full Text Available Using monitoring history data to build and train a prediction model for airport noise has become a normal method in recent years. However, a single model built in different ways has varying performance in storage, efficiency and accuracy. In order to predict the noise accurately in the complex environment around an airport, this paper presents a prediction method based on hybrid ensemble learning. The proposed method ensembles three algorithms: an artificial neural network as an active learner, nearest neighbor as a passive learner and nonlinear regression as a synthesized learner. The experimental results show that the three learners can meet forecast demands in on-line, near-line and off-line settings, respectively, and that the accuracy of prediction is improved by integrating the three learners’ results.
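
    A minimal sketch of the combination step in such a hybrid ensemble, assuming simple inverse-validation-error weighting (the paper does not specify its integration rule, so this weighting scheme is an assumption). The three "learners" are stand-in linear functions, not trained ANN, nearest-neighbor, or regression models.

```python
# Combine heterogeneous noise-level predictors by weighting each one
# inversely to its validation error; higher-error learners count less.
def ensemble(predictors, weights):
    s = sum(weights)
    def predict(x):
        return sum(w * p(x) for p, w in zip(predictors, weights)) / s
    return predict

ann  = lambda x: 62.0 + 0.8 * x      # stand-in "active" learner
knn  = lambda x: 59.0 + 1.2 * x      # stand-in "passive" learner
nreg = lambda x: 61.0 + 0.9 * x      # stand-in "synthesized" learner

val_err = [1.2, 2.0, 1.5]            # invented validation errors
w = [1.0 / e for e in val_err]
model = ensemble([ann, knn, nreg], w)
print(round(model(10.0), 2))  # 70.25
```

    Any other combination rule (majority voting, stacking with a meta-learner) slots into the same `ensemble` shape.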

  9. Performance Prediction Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    2017-09-25

    The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains the hardware and middleware models, which accept proxy applications as input in runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes a tasklist as input. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, where our previous alternatives explicitly include the L1, L2, L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few
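
    A toy sketch of the tasklist idea behind time_compute(): a hardware model maps operation counts to a predicted runtime from per-operation cycle costs and a clock speed. All parameter values, operation names, and the `CORE` structure below are invented for illustration, not a real PPT or Haswell specification.

```python
# Toy hardware model: predicted time = sum(cycles per op * count) / clock.
CORE = {
    "clock_hz": 2.6e9,
    "cycles": {"iALU": 1.0, "fALU": 4.0, "L1_load": 4.0, "L2_load": 12.0},
}

def time_compute(tasklist, core=CORE):
    """tasklist: list of (op_name, count) pairs; returns predicted seconds."""
    total_cycles = sum(core["cycles"][op] * n for op, n in tasklist)
    return total_cycles / core["clock_hz"]

# A kernel modeled as 1e6 integer ops, 5e5 float ops, 2e5 L1 loads.
kernel = [("iALU", 1_000_000), ("fALU", 500_000), ("L1_load", 200_000)]
print(f"{time_compute(kernel) * 1e3:.3f} ms")  # 1.462 ms
```

    The hard part the abstract describes, turning virtual memory accesses into L1/L2/L3 hits and misses, amounts to replacing the fixed per-load cycle cost here with one computed by a memory model such as AMM.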

  10. Link Prediction in Evolving Networks Based on Popularity of Nodes.

    Science.gov (United States)

    Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian

    2017-08-02

    Link prediction aims to uncover the underlying relationships behind networks, which can be utilized to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in networks. Most classical static-structure-based methods ignore the temporal aspects of networks; limited by time-varying features, such approaches perform poorly in evolving networks. In this paper, we propose a hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes have a much higher probability of attracting future links. Then a novel approach named the popularity based structural perturbation method (PBSPM) and its fast algorithm are proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.

  11. Machine-learning scoring functions to improve structure-based binding affinity prediction and virtual screening.

    Science.gov (United States)

    Ain, Qurrat Ul; Aleksandrova, Antoniya; Roessler, Florian D; Ballester, Pedro J

    2015-01-01

    Docking tools to predict whether and how a small molecule binds to a target can be applied if a structural model of such target is available. The reliability of docking depends, however, on the accuracy of the adopted scoring function (SF). Despite intense research over the years, improving the accuracy of SFs for structure-based binding affinity prediction or virtual screening has proven to be a challenging task for any class of method. New SFs based on modern machine-learning regression models, which do not impose a predetermined functional form and thus are able to exploit effectively much larger amounts of experimental data, have recently been introduced. These machine-learning SFs have been shown to outperform a wide range of classical SFs at both binding affinity prediction and virtual screening. The emerging picture from these studies is that the classical approach of using linear regression with a small number of expert-selected structural features can be strongly improved by a machine-learning approach based on nonlinear regression allied with comprehensive data-driven feature selection. Furthermore, the performance of classical SFs does not grow with larger training datasets and hence this performance gap is expected to widen as more training data becomes available in the future. Other topics covered in this review include predicting the reliability of a SF on a particular target class, generating synthetic data to improve predictive performance and modeling guidelines for SF development. WIREs Comput Mol Sci 2015, 5:405-424. doi: 10.1002/wcms.1225 For further resources related to this article, please visit the WIREs website.

  12. Patient Similarity in Prediction Models Based on Health Data: A Scoping Review

    Science.gov (United States)

    Sharafoddini, Anis; Dubin, Joel A

    2017-01-01

    Background Physicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions. Objective The aim is to summarize and review published studies describing computer-based approaches for predicting patients’ future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research. Methods The method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results. Results After duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods. Conclusions Interest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health

  13. Contribution of temporal data to predictive performance in 30-day readmission of morbidly obese patients

    Directory of Open Access Journals (Sweden)

    Petra Povalej Brzan

    2017-04-01

    Full Text Available Background Reduction of readmissions after discharge represents an important challenge for many hospitals and has attracted the interest of many researchers in the past few years. Most of the studies in this field focus on building cross-sectional predictive models that aim to predict the occurrence of readmission within 30 days based on information from the current hospitalization. The aim of this study is demonstration of the predictive performance gain obtained by including information from historical hospitalization records among morbidly obese patients. Methods The California Statewide inpatient database was used to build regularized logistic regression models for prediction of readmission in morbidly obese patients (n = 18,881. Temporal features were extracted from historical patient hospitalization records in a one-year timeframe. Five different datasets of patients were prepared based on the number of available hospitalizations per patient. Sample size of the five datasets ranged from 4,787 patients with more than five hospitalizations to 20,521 patients with at least two hospitalization records in one year. A 10-fold cross-validation was repeated 100 times to assess the variability of the results. Additionally, random forest and extreme gradient boosting were used to confirm the results. Results Area under the ROC curve increased significantly when including information from up to three historical records on all datasets. The inclusion of more than three historical records was not efficient. Similar results can be observed for the Brier score and PPV. The number of selected predictors corresponded to the complexity of the dataset, ranging from an average of 29.50 selected features on the smallest dataset to 184.96 on the largest dataset, based on 100 repetitions of 10-fold cross-validation. Discussion The results show a positive influence of adding information from historical hospitalization records on predictive performance using all

  14. Predicting adsorptive removal of chlorophenol from aqueous solution using artificial intelligence based modeling approaches.

    Science.gov (United States)

    Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali

    2013-04-01

    The research aims to develop an artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in the experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using the four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in the experimental data. Performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than the generalized regression neural network, support vector machine, and gene expression programming models. Sensitivity analysis revealed that contact time had the highest effect on adsorption, followed by solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in the data. The better generalization and predictive performance of the RBFN and MLPN models suggests that these can be used to predict the adsorption of CP in aqueous solution using CSC.

  15. Predicting students' intention to use stimulants for academic performance enhancement.

    Science.gov (United States)

    Ponnet, Koen; Wouters, Edwin; Walrave, Michel; Heirman, Wannes; Van Hal, Guido

    2015-02-01

    The non-medical use of stimulants for academic performance enhancement is becoming a more common practice among college and university students. The objective of this study is to gain a better understanding of students' intention to use stimulant medication for the purpose of enhancing their academic performance. Based on an extended model of Ajzen's theory of planned behavior, we examined the predictive value of attitude, subjective norm, perceived behavioral control, psychological distress, procrastination, substance use, and alcohol use on students' intention to use stimulants to improve their academic performance. The sample consisted of 3,589 Flemish university and college students (mean age: 21.59, SD: 4.09), who participated anonymously in an online survey conducted in March and April 2013. Structural equation modeling was used to investigate the relationships among the study variables. Our results indicate that subjective norm is the strongest predictor of students' intention to use stimulant medication, followed by attitude and perceived behavioral control. To a lesser extent, procrastinating tendencies, psychological distress, and substance use contribute to students' intention. Conclusions/Importance: Based on these findings, we provide several recommendations on how to curtail students' intention to use stimulant medication for the purpose of improving their academic performance. In addition, we urge researchers to identify other psychological variables that might be related to students' intention.

  16. Detecting determinism with improved sensitivity in time series: rank-based nonlinear predictability score.

    Science.gov (United States)

    Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G

    2014-09-01

    The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).

  17. Comparison of short term rainfall forecasts for model based flow prediction in urban drainage systems

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Poulsen, Troels Sander; Bøvith, Thomas

    2012-01-01

    Forecast-based flow prediction in drainage systems can be used to implement real-time control of drainage systems. This study compares two different types of rainfall forecasts – a radar rainfall extrapolation-based nowcast model and a numerical weather prediction model. The models are applied...... performance of the system is found using the radar nowcast for short lead times and the weather model for longer lead times.

  18. Predictive Validity of National Basketball Association Draft Combine on Future Performance.

    Science.gov (United States)

    Teramoto, Masaru; Cross, Chad L; Rieger, Randall H; Maak, Travis G; Willick, Stuart E

    2018-02-01

    Teramoto, M, Cross, CL, Rieger, RH, Maak, TG, and Willick, SE. Predictive validity of national basketball association draft combine on future performance. J Strength Cond Res 32(2): 396-408, 2018-The National Basketball Association (NBA) Draft Combine is an annual event where prospective players are evaluated in terms of their athletic abilities and basketball skills. Data collected at the Combine should help NBA teams select the right players for the upcoming NBA draft; however, its value for predicting future performance of players has not been examined. This study investigated the predictive validity of the NBA Draft Combine on future performance of basketball players. We performed a principal component analysis (PCA) on the 2010-2015 Combine data to reduce correlated variables (N = 234), a correlation analysis on the Combine data and future on-court performance to examine relationships (maximum pairwise N = 217), and a robust principal component regression (PCR) analysis to predict first-year and 3-year on-court performance from the Combine measures (N = 148 and 127, respectively). Three components were identified within the Combine data through PCA (= Combine subscales): length-size, power-quickness, and upper-body strength. As per the correlation analysis, the individual Combine items for anthropometrics, including height without shoes, standing reach, weight, wingspan, and hand length, as well as the Combine subscale of length-size, had positive, medium-to-large-sized correlations (r = 0.313-0.545) with defensive performance quantified by Defensive Box Plus/Minus. The robust PCR analysis showed that the Combine subscale of length-size was the predictor most significantly associated with future on-court performance (p ≤ 0.05), including Win Shares, Box Plus/Minus, and Value Over Replacement Player, followed by upper-body strength. In conclusion, the NBA Draft Combine has value for predicting future performance of players.
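The medium-to-large associations reported above (r = 0.313-0.545) are ordinary Pearson product-moment correlations between individual Combine items and on-court metrics; a minimal, self-contained sketch of the coefficient (data and names illustrative):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation, as used to relate Combine
    anthropometrics to on-court performance metrics."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```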

  19. Parallel Performance Optimizations on Unstructured Mesh-based Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-01-01

    © The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  20. Composite control for raymond mill based on model predictive control and disturbance observer

    Directory of Open Access Journals (Sweden)

    Dan Niu

    2016-03-01

    Full Text Available In the raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations of ore size and ore hardness, usually cause great performance degradation, and it is not easy to keep the current of the raymond mill constant. Several control strategies have been proposed; however, most of them (such as proportional–integral–derivative and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. To improve disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive control controller as feedback regulation. The test results illustrate that, compared with the model predictive control method, the proposed disturbance observer–model predictive control method obtains significant superiority in disturbance rejection, such as shorter settling time and smaller peak overshoot under strong disturbances.
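The feedforward-plus-feedback idea can be illustrated on an assumed first-order plant (the plant, gains, and disturbance profile below are illustrative, not the paper's mill model): the observer reconstructs the previous step's disturbance from the state and input history and feeds it forward, while a one-step predictive (deadbeat) law provides the feedback part.

```python
def simulate(a=0.8, b=1.0, setpoint=1.0, dist=0.5, steps=30, use_dob=True):
    """Sketch of disturbance-observer feedforward on a first-order plant
    x[k+1] = a*x[k] + b*u[k] + d[k], with a one-step predictive law as
    feedback. Returns the tracking-error trace."""
    x, x_prev, u_prev = 0.0, 0.0, 0.0
    errors = []
    for k in range(steps):
        d = dist if k >= 10 else 0.0  # step disturbance enters at k = 10
        # disturbance observer: reconstruct the previous step's disturbance
        d_hat = (x - a * x_prev - b * u_prev) if use_dob else 0.0
        # one-step predictive law: drive the predicted next state to setpoint
        u = (setpoint - a * x - d_hat) / b
        x_prev, u_prev = x, u
        x = a * x + b * u + d
        errors.append(abs(setpoint - x))
    return errors
```

Running both variants shows the point of the composite scheme: with the observer the error returns to zero one step after the disturbance hits, while feedback alone leaves a persistent offset.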

  1. Phenobarbital in intensive care unit pediatric population: predictive performances of population pharmacokinetic model.

    Science.gov (United States)

    Marsot, Amélie; Michel, Fabrice; Chasseloup, Estelle; Paut, Olivier; Guilhaumou, Romain; Blin, Olivier

    2017-10-01

    An external evaluation of the phenobarbital population pharmacokinetic model described by Marsot et al. was performed in a pediatric intensive care unit. Model evaluation is an important issue for dose adjustment. This external evaluation should allow confirming the proposed dosage adaptation and extending these recommendations to the entire intensive care pediatric population. The external evaluation of the published population pharmacokinetic model of Marsot et al. was carried out on a new retrospective dataset of 35 patients hospitalized in a pediatric intensive care unit. The published population pharmacokinetic model was implemented in NONMEM 7.3. Predictive performance was assessed by quantifying the bias and inaccuracy of model predictions. Normalized prediction distribution errors (NPDE) and visual predictive checks (VPC) were also evaluated. A total of 35 infants were studied with a mean age of 33.5 weeks (range: 12 days-16 years) and a mean weight of 12.6 kg (range: 2.7-70.0 kg). The model predicted the observed phenobarbital concentrations with reasonable bias and inaccuracy. The median prediction error was 3.03% (95% CI: -8.52 to 58.12%), and the median absolute prediction error was 26.20% (95% CI: 13.07-75.59%). No trends in NPDE and VPC were observed. The model previously proposed by Marsot et al. in neonates hospitalized in the intensive care unit was externally validated for IV infusion administration. The model-based dosing regimen was extended to the entire pediatric intensive care unit to optimize treatment. Due to inter- and intra-individual variability in the pharmacokinetic model, this dosing regimen should be combined with therapeutic drug monitoring. © 2017 Société Française de Pharmacologie et de Thérapeutique.
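The bias and inaccuracy statistics quoted here (median prediction error and median absolute prediction error, in percent) reduce to a few lines; expressing the error relative to the observed concentration is the usual convention in external PK evaluations and is assumed here rather than taken from the paper:

```python
import statistics

def prediction_errors(observed, predicted):
    """Median prediction error (%) and median absolute prediction
    error (%), computed relative to the observed concentrations."""
    pe = [100.0 * (p - o) / o for o, p in zip(observed, predicted)]
    return statistics.median(pe), statistics.median(abs(e) for e in pe)
```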

  2. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    Full Text Available As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins by making full use of the known lncRNA–protein interactions. A leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
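A sketch in the spirit of LPBNI's bipartite inference: resources propagate from a lncRNA's known partner proteins to neighboring lncRNAs and back to proteins, and the aggregated weights rank candidate partners. The two-phase resource-allocation scheme below is a common bipartite-projection recipe assumed as an approximation of the published method, not its exact formulation:

```python
def bipartite_scores(interactions):
    """Two-phase resource propagation on a lncRNA-protein bipartite
    graph: for each lncRNA, unit resources on its known partner proteins
    flow protein -> lncRNA -> protein; the aggregated weights score
    candidate partner proteins."""
    lnc_to_prot, prot_to_lnc = {}, {}
    for lnc, prot in interactions:
        lnc_to_prot.setdefault(lnc, set()).add(prot)
        prot_to_lnc.setdefault(prot, set()).add(lnc)
    scores = {}
    for lnc, prots in lnc_to_prot.items():
        resource = {}
        # phase 1: each known partner protein spreads a unit resource
        # evenly over the lncRNAs it interacts with
        for p in prots:
            share = 1.0 / len(prot_to_lnc[p])
            for l2 in prot_to_lnc[p]:
                resource[l2] = resource.get(l2, 0.0) + share
        # phase 2: each lncRNA redistributes its resource to its proteins
        final = {}
        for l2, r in resource.items():
            share = r / len(lnc_to_prot[l2])
            for p2 in lnc_to_prot[l2]:
                final[p2] = final.get(p2, 0.0) + share
        scores[lnc] = final
    return scores
```

In a LOOCV-style evaluation, one known interaction would be held out and its rank among the scored candidates inspected.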

  3. Do workaholism and work engagement predict employee well-being and performance in opposite directions?

    Science.gov (United States)

    Shimazu, Akihito; Schaufeli, Wilmar B; Kubota, Kazumi; Kawakami, Norito

    2012-01-01

    This study investigated the distinctiveness between workaholism and work engagement by examining their longitudinal relationships (measurement interval=7 months) with well-being and performance in a sample of 1,967 Japanese employees from various occupations. Based on a previous cross-sectional study (Shimazu & Schaufeli, 2009), we expected that workaholism predicts future unwell-being (i.e., high ill-health and low life satisfaction) and poor job performance, whereas work engagement predicts future well-being (i.e., low ill-health and high life satisfaction) and superior job performance. T1-T2 changes in ill-health, life satisfaction and job performance were measured as residual scores that were then included in the structural equation model. Results showed that workaholism and work engagement were weakly and positively related to each other. In addition, workaholism was related to an increase in ill-health and to a decrease in life satisfaction. In contrast, work engagement was related to a decrease in ill-health and to increases in both life satisfaction and job performance. These findings suggest that workaholism and work engagement are two different kinds of concepts that are oppositely related to well-being and performance.

  4. Multi performance option in direct displacement based design

    Directory of Open Access Journals (Sweden)

    Muljati Ima

    2017-01-01

    Full Text Available Compared to traditional methods, direct displacement based design (DDBD) offers a more rational design choice due to its compatibility with performance based design, which is controlled by the targeted displacement. The objectives of this study are: (1) to explore the performance of DDBD for design Levels 1, 2, and 3; (2) to determine the most appropriate design level based on material efficiency and damage risk; and (3) to verify the chosen design in order to check its performance under small, moderate, and severe earthquakes. As a case study, it uses regular concrete frame structures consisting of four- and eight-story buildings with typical plans, located in low- and high-risk seismicity areas. The study shows that design Level-2 (repairable damage) is the most appropriate choice. Nonlinear time history analysis is run for each case study in order to verify performance based on the parameters story drift, damage indices, and plastic mechanism. It can be concluded that DDBD performed very well in predicting the seismic demand of the observed structures. Design Level-2 can be chosen as the most appropriate design level. Structures exhibit a safe plastic mechanism under all levels of seismicity, although some plastic hinges formed at unexpected locations.

  5. Reliable Prediction of Insulin Resistance by a School-Based Fitness Test in Middle-School Children

    Directory of Open Access Journals (Sweden)

    Allen DavidB

    2009-09-01

    Full Text Available Objectives. (1) Determine the predictive value of a school-based test of cardiovascular fitness (CVF) for insulin resistance (IR); (2) compare a "school-based" prediction of IR to a "laboratory-based" prediction, using various measures of fitness and body composition. Methods. Middle school children performed the Progressive Aerobic Cardiovascular Endurance Run (PACER), a school-based CVF test, and underwent evaluation of maximal oxygen consumption (VO2 max) by treadmill testing, body composition (percent body fat and BMI z score), and IR (derived homeostasis model assessment index). Results. PACER showed a strong correlation with VO2 max/kg (r = 0.83) and with IR. Multivariate regression analysis revealed that a school-based model (using PACER and BMI z score) predicted IR similarly to a laboratory-based model (using VO2 max/kg of lean body mass and percent body fat). Conclusions. The PACER is a valid school-based test of CVF, is predictive of IR, and has a similar relationship to IR when compared to complex laboratory-based testing. Simple school-based measures of childhood fitness (PACER) and fatness (BMI z score) could be used to identify childhood risk for IR and evaluate interventions.
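The IR index referred to here is a homeostasis model assessment. The standard HOMA-IR formula (fasting glucose in mmol/L times fasting insulin in µU/mL, divided by 22.5) is widely used; whether the study applied exactly this variant is an assumption:

```python
def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance (HOMA-IR):
    fasting glucose (mmol/L) * fasting insulin (microU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uu_ml / 22.5
```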

  6. Variability, Predictability, and Race Factors Affecting Performance in Elite Biathlon.

    Science.gov (United States)

    Skattebo, Øyvind; Losnegard, Thomas

    2018-03-01

    To investigate variability, predictability, and smallest worthwhile performance enhancement in elite biathlon sprint events. In addition, the effects of race factors on performance were assessed. Data from 2005 to 2015 including >10,000 and >1000 observations for each sex for all athletes and annual top-10 athletes, respectively, were included. Generalized linear mixed models were constructed based on total race time, skiing time, shooting time, and proportions of targets hit. Within-athlete race-to-race variability was expressed as coefficient of variation of performance times and standard deviation (SD) in proportion units (%) of targets hit. The models were adjusted for random and fixed effects of subject identity, season, event identity, and race factors. The within-athlete variability was independent of sex and performance standard of athletes: 2.5-3.2% for total race time, 1.5-1.8% for skiing time, and 11-15% for shooting times. The SD of the proportion of hits was ∼10% in both shootings combined (meaning ±1 hit in 10 shots). The predictability in total race time was very high to extremely high for all athletes (ICC .78-.84) but trivial for top-10 athletes (ICC .05). Race times during World Championships and Olympics were ∼2-3% faster than in World Cups. Moreover, race time increased by ∼2% per 1000 m of altitude, by ∼5% per 1% of gradient, by 1-2% per 1 m/s of wind speed, and by ∼2-4% on soft vs hard tracks. Researchers and practitioners should focus on strategies that improve biathletes' performance by at least 0.8-0.9%, corresponding to the smallest worthwhile enhancement (0.3 × within-athlete variability).
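The within-athlete race-to-race variability reported above is a plain coefficient of variation of one athlete's performance times, i.e. the athlete's SD expressed as a percentage of the mean:

```python
import statistics

def within_athlete_cv(race_times):
    """Race-to-race variability as a coefficient of variation: the
    athlete's SD of performance times as a percentage of the mean."""
    return 100.0 * statistics.stdev(race_times) / statistics.mean(race_times)
```

Multiplying this by the 0.3 factor cited in the abstract yields the smallest worthwhile enhancement.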

  7. The use of repassivation potential in predicting the performance of high-level nuclear waste container materials

    International Nuclear Information System (INIS)

    Sridhar, N.; Dunn, D.; Cragnolino, G.

    1995-01-01

    Localized corrosion in aqueous environments forms an important bounding condition for the performance assessment of high-level waste (HLW) container materials. A predictive methodology using the repassivation potential is examined in this paper. It is shown, based on long-term (continuing for over 11 months) testing of alloy 825, that the repassivation potential of deep pits or crevices is a conservative and robust parameter for the prediction of localized corrosion. In contrast, initiation potentials measured by short-term tests are non-conservative and highly sensitive to several surface and environmental factors. Corrosion data from various field tests and plant equipment performance are analyzed in terms of the applicability of the repassivation potential. The applicability of the repassivation potential for predicting the occurrence of stress corrosion cracking (SCC) and intergranular corrosion in chloride-containing environments is also examined.

  8. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Directory of Open Access Journals (Sweden)

    Tao Ran

    2016-01-01

    Full Text Available As Hadoop has gained popularity in the big data era, it is widely used in various fields. Our self-designed and self-developed large-scale network traffic analysis cluster works well based on Hadoop, with off-line applications running on it to analyze massive network traffic data. To evaluate the performance of the analysis cluster scientifically and reasonably, we propose a performance evaluation system. Firstly, we set the execution times of three benchmark applications as the benchmark of performance and pick 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times by a statistical modeling approach composed of principal component analysis and multiple linear regression. After training the models on historical data, we can predict the execution times from current resource data. Finally, we evaluate the performance of the analysis cluster by the validated prediction of execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the evaluation results are accurate and reliable.
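The regression stage of the statistical modeling approach can be sketched via the normal equations (X^T X) b = X^T y with an intercept column; this stdlib-only least-squares fit is a generic sketch of multiple linear regression (feature values standing in for resource metrics, the target for execution time), not the paper's pipeline:

```python
def fit_linear(xs, ys):
    """Ordinary least squares via the normal equations with an
    intercept column, solved by Gaussian elimination with partial
    pivoting. Returns [intercept, coef_1, ..., coef_p]."""
    rows = [[1.0] + list(x) for x in xs]
    p = len(rows[0])
    # build the normal equations (X^T X) beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(p)]
    # forward elimination with partial pivoting
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # back substitution
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, p))) / xtx[r][r]
    return beta
```

In the paper's setting the inputs would be principal component scores of the 40 resource metrics rather than the raw metrics themselves.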

  9. Intrinsic motivation and extrinsic incentives jointly predict performance: a 40-year meta-analysis.

    Science.gov (United States)

    Cerasoli, Christopher P; Nicklin, Jessica M; Ford, Michael T

    2014-07-01

    More than 4 decades of research and 9 meta-analyses have focused on the undermining effect: namely, the debate over whether the provision of extrinsic incentives erodes intrinsic motivation. This review and meta-analysis builds on such previous reviews by focusing on the interrelationship among intrinsic motivation, extrinsic incentives, and performance, with reference to 2 moderators: performance type (quality vs. quantity) and incentive contingency (directly performance-salient vs. indirectly performance-salient), which have not been systematically reviewed to date. Based on random-effects meta-analytic methods, findings from school, work, and physical domains (k = 183, N = 212,468) indicate that intrinsic motivation is a medium to strong predictor of performance (ρ = .21-.45). The importance of intrinsic motivation to performance remained in place whether or not incentives were present. In addition, incentive salience influenced the predictive validity of intrinsic motivation for performance: in a "crowding out" fashion, intrinsic motivation was less important to performance when incentives were directly tied to performance and more important when incentives were indirectly tied to performance. Considered simultaneously through meta-analytic regression, intrinsic motivation predicted more unique variance in quality of performance, whereas incentives were a better predictor of quantity of performance. With respect to performance, incentives and intrinsic motivation are not necessarily antagonistic and are best considered simultaneously. Future research should consider using nonperformance criteria (e.g., well-being, job satisfaction) as well as applying the percent-of-maximum-possible (POMP) method in meta-analyses. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  10. EMD-Based Predictive Deep Belief Network for Time Series Prediction: An Application to Drought Forecasting

    Directory of Open Access Journals (Sweden)

    Norbert A. Agana

    2018-02-01

    Full Text Available Drought is a stochastic natural feature that arises due to intense and persistent shortage of precipitation. Its impact is mostly manifested as agricultural and hydrological droughts following an initial meteorological phenomenon. Drought prediction is essential because it can aid in preparedness and impact-related management of its effects. This study considers the drought forecasting problem by developing a hybrid predictive model using a denoised empirical mode decomposition (EMD) and a deep belief network (DBN). The proposed method first decomposes the data into several intrinsic mode functions (IMFs) using EMD, and a reconstruction of the original data is obtained by considering only relevant IMFs. Detrended fluctuation analysis (DFA) was applied to each IMF to determine the threshold for robust denoising performance. Based on their scaling exponents, irrelevant intrinsic mode functions are identified and suppressed. The proposed method was applied to predict drought indices at different time scales across the Colorado River basin using a standardized streamflow index (SSI) as the drought index. The results obtained using the proposed method were compared with standard methods such as the multilayer perceptron (MLP) and support vector regression (SVR). The proposed hybrid model showed improved prediction accuracy, especially for multi-step ahead predictions.
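Once EMD and DFA have been run, the denoising step reduces to a selection rule over the IMFs. The 0.5 cutoff below reflects the general fact that uncorrelated noise has a DFA scaling exponent near 0.5; it is an illustrative assumption, not the paper's calibrated threshold:

```python
def reconstruct_from_imfs(imfs, exponents, threshold=0.5):
    """Given EMD intrinsic mode functions and their DFA scaling
    exponents, suppress the noise-like IMFs (exponent below the
    threshold) and sum the rest into a denoised reconstruction."""
    kept = [imf for imf, alpha in zip(imfs, exponents) if alpha >= threshold]
    length = len(imfs[0])
    return [sum(imf[i] for imf in kept) for i in range(length)]
```

The reconstructed series would then be fed to the DBN forecaster in place of the raw signal.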

  11. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques, and clustering information plays an important role in solving it. In the previous literature, the node clustering coefficient appears frequently in many link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish a node's different clustering abilities with respect to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node clustering based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node clustering based methods, especially achieving remarkable improvements on the food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
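The node-clustering baseline that ALC-based methods refine scores a candidate link by the clustering coefficients of its common neighbors. A minimal sketch of that baseline on an undirected adjacency dict (the ALC coefficient itself is not reproduced here):

```python
from itertools import combinations

def clustering_coefficient(adj, node):
    """Fraction of closed triangles among a node's neighbor pairs."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2.0 * links / (k * (k - 1))

def node_clustering_score(adj, x, y):
    """Score a candidate link (x, y) by summing the clustering
    coefficients of the common neighbors of x and y."""
    return sum(clustering_coefficient(adj, z) for z in adj[x] & adj[y])
```

Ranking all non-edges by this score and keeping the top L gives the globalized top-L prediction task the abstract refers to.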

  12. Towards artificial intelligence based diesel engine performance control under varying operating conditions using support vector regression

    Directory of Open Access Journals (Sweden)

    Naradasu Kumar Ravi

    2013-01-01

    Full Text Available Diesel engine designers are constantly on the look-out for performance enhancement through efficient control of operating parameters. In this paper, the concept of an intelligent engine control system is proposed that seeks to ensure optimized performance under varying operating conditions. The concept is based on arriving at the optimum engine operating parameters to ensure the desired output in terms of efficiency. In addition, a support vector machine (SVM)-based prediction model has been developed to predict the engine performance under varying operating conditions. Experiments were carried out at varying loads, compression ratios and amounts of exhaust gas recirculation using a variable compression ratio diesel engine for data acquisition. It was observed that the SVM model was able to predict the engine performance accurately.

  13. Predicting Performance with Contextualized Inventories, No Frame-of-reference Effect?

    NARCIS (Netherlands)

    Holtrop, D.J.; Born, M.P.; de Vries, R.E.

    2014-01-01

    A recent meta-analysis showed that contextualized personality inventories have incremental predictive validity over generic personality inventories when predicting job performance. This study aimed to investigate the differences between two types of contextualization of items: Adding an 'at work'

  14. Predicting Protein-Protein Interaction Sites with a Novel Membership Based Fuzzy SVM Classifier.

    Science.gov (United States)

    Sriwastava, Brijesh K; Basu, Subhadip; Maulik, Ujjwal

    2015-01-01

    Predicting residues that participate in protein-protein interactions (PPI) helps to identify which amino acids are located at the interface. In this paper, we show that the performance of the classical support vector machine (SVM) algorithm can further be improved with the use of a custom-designed fuzzy membership function, for the partner-specific PPI interface prediction problem. We evaluated the performances of both classical SVM and fuzzy SVM (F-SVM) on the PPI databases of three different model proteomes of Homo sapiens, Escherichia coli and Saccharomyces cerevisiae and calculated the statistical significance of the developed F-SVM over the classical SVM algorithm. We also compared our performance with the available state-of-the-art fuzzy methods in this domain and observed significant performance improvements. To predict interaction sites in protein complexes, local composition of amino acids together with their physico-chemical characteristics are used, where the F-SVM based prediction method exploits the membership function for each pair of sequence fragments. The average F-SVM performance (area under ROC curve) on the test samples in a 10-fold cross validation experiment is measured as 77.07, 78.39, and 74.91 percent for the aforementioned organisms respectively. Performances on independent test sets are obtained as 72.09, 73.24 and 82.74 percent respectively. The software is available for free download from http://code.google.com/p/cmater-bioinfo.

  15. Sexual victimization history predicts academic performance in college women.

    Science.gov (United States)

    Baker, Majel R; Frazier, Patricia A; Greer, Christiaan; Paulsen, Jacob A; Howard, Kelli; Meredith, Liza N; Anders, Samantha L; Shallcross, Sandra L

    2016-11-01

    College women frequently report having experienced sexual victimization (SV) in their lifetime, including child sexual abuse and adolescent/adult sexual assault. Although the harmful mental health sequelae of SV have been extensively studied, recent research suggests that SV is also a risk factor for poorer college academic performance. The current studies examined whether exposure to SV uniquely predicted poorer college academic performance, even beyond contributions from three well-established predictors of academic performance: high school rank, composite standardized test scores (i.e., American College Testing [ACT]), and conscientiousness. Study 1 analyzed longitudinal data from a sample of female college students (N = 192) who were assessed at the beginning and end of one semester. SV predicted poorer cumulative end-of-semester grade point average (GPA) while controlling for well-established predictors of academic performance. Study 2 replicated these findings in a second longitudinal study of female college students (N = 390) and extended the analyses to include follow-up data on the freshmen and sophomore students (n = 206) 4 years later. SV predicted students' GPA in their final term at the university above the contributions of well-established academic predictors, and it was the only factor related to leaving college. These findings highlight the importance of expanding the scope of outcomes of SV to include academic performance, and they underscore the need to assess SV and other adverse experiences on college campuses to target students who may be at risk of poor performance or leaving college. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Comparison and Prediction of Preclinical Students' Performance in ...

    African Journals Online (AJOL)

    olayemitoyin

    The data support the hypothesis that students who performed well in one discipline were likely to .... predict success in the clinical curriculum (Baciewicz,. 1990). Similarly ... the International Association of Medical Science. Educators. 17-20.

  17. Comparing theories' performance in predicting violence.

    Science.gov (United States)

    Haas, Henriette; Cusson, Maurice

    2015-01-01

    The stakes of choosing the best theory as a basis for violence prevention and offender rehabilitation are high. However, no single theory of violence has ever been universally accepted by a majority of established researchers. Psychiatry, psychology and sociology are each subdivided into different schools relying upon different premises. All theories can produce empirical evidence for their validity, some of them stating the opposite of each other. Calculating different models with multivariate logistic regression on a dataset of N = 21,312 observations and ninety-two influences allowed a direct comparison of the performance of operationalizations of some of the most important schools. The psychopathology model ranked best at predicting violence, right after the comprehensive interdisciplinary model. Next came the rational choice and lifestyle model, and third the differential association and learning theory model. Other models, namely the control theory model, the childhood-trauma model, and the social conflict and reaction model, turned out to have low sensitivity for predicting violence. Nevertheless, all models produced acceptable results in predicting a non-violent outcome. Copyright © 2015. Published by Elsevier Ltd.
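The comparison described, fitting separate logistic models from different schools' variable sets and ranking them by sensitivity for violence, can be sketched on synthetic data (the variable groups and effect sizes below are invented for illustration, not the study's operationalizations):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 2000
# Hypothetical predictor blocks: two "schools" measure different variables.
psycho = rng.normal(size=(n, 3))     # psychopathology-style variables
lifestyle = rng.normal(size=(n, 3))  # rational-choice/lifestyle variables
# The outcome depends more strongly on the first block (by construction).
logit = 1.2 * psycho[:, 0] + 0.6 * lifestyle[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

def sensitivity(X, y):
    """Fit one school's logistic model, return its sensitivity for y = 1."""
    m = LogisticRegression().fit(X, y)
    return recall_score(y, m.predict(X))

# Direct comparison of the schools' models by sensitivity for violence.
print(sensitivity(psycho, y), sensitivity(lifestyle, y))
```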

  18. Base Oils Biodegradability Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Malika Trabelsi

    2010-02-01

    Full Text Available: In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oils biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification prediction accuracy. However, the Decision Trees technique helped uncover the most significant predictors. A simple classification rule derived from this predictor resulted in good classification accuracy. Applying this rule enables efficient classification of base oils into either low or high biodegradability classes with high accuracy. For the latter, a higher-precision biodegradability prediction can be obtained using continuous modeling techniques.
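A single-predictor classification rule of the kind a decision tree uncovers can be illustrated with a depth-1 tree (a stump) on synthetic descriptors; the dominant feature and its threshold here are invented stand-ins, not the paper's actual predictor:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Hypothetical descriptors for 200 base oils; column 0 plays the role of
# the single dominant predictor the tree is expected to find.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0.3).astype(int)  # 1 = high biodegradability, 0 = low

# A depth-1 tree yields exactly one "if feature > threshold" rule.
stump = DecisionTreeClassifier(max_depth=1).fit(X, y)
feature = stump.tree_.feature[0]     # root split feature
threshold = stump.tree_.threshold[0]  # root split threshold
print(f"rule: high biodegradability if x[{feature}] > {threshold:.2f}")
```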

  19. Improved prediction of residue flexibility by embedding optimized amino acid grouping into RSA-based linear models.

    Science.gov (United States)

    Zhang, Hua; Kurgan, Lukasz

    2014-12-01

    Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, which is expressed by B-factors, from protein chains; it uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA-pair-based space in the second stage as the inputs. The method has an easy-to-comprehend explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset are improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
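The first stage, a linear model over RSA values in a local sequence window, can be sketched on synthetic data as follows (the window size and the RSA-to-B-factor relation are assumptions for illustration; the AA-alphabet second stage is omitted):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Synthetic chain: per-residue RSA and B-factors (not real PDB data).
n, half = 300, 3                              # window of 2*half + 1 residues
rsa = rng.random(n)
bfac = 0.8 * rsa + 0.1 * rng.normal(size=n)   # flexibility tracks exposure

# Stage 1: predict each residue's B-factor from the RSA values in a
# local sequence window centered on it (edges padded by repetition).
pad = np.pad(rsa, half, mode="edge")
X = np.array([pad[i:i + 2 * half + 1] for i in range(n)])
model = LinearRegression().fit(X, bfac)

# The paper's evaluation metric: correlation of predicted vs. native B-factors.
r = np.corrcoef(model.predict(X), bfac)[0, 1]
print(f"correlation: {r:.2f}")
```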

  20. Are performance-based measures predictive of work participation in patients with musculoskeletal disorders? A systematic review

    NARCIS (Netherlands)

    Kuijer, P. P. F. M.; Gouttebarge, V.; Brouwer, S.; Reneman, M. F.; Frings-Dresen, M. H. W.

    Assessments of whether patients with musculoskeletal disorders (MSDs) can participate in work mainly consist of case history, physical examinations, and self-reports. Performance-based measures might add value in these assessments. This study answers the question: how well do performance-based

  1. A Copula Based Approach for Design of Multivariate Random Forests for Drug Sensitivity Prediction.

    Science.gov (United States)

    Haider, Saad; Rahman, Raziur; Ghosh, Souparno; Pal, Ranadip

    2015-01-01

    Modeling sensitivity to drugs based on genetic characterizations is a significant challenge in the area of systems medicine. Ensemble-based approaches such as Random Forests have been shown to perform well in both individual sensitivity prediction studies and team-science-based prediction challenges. However, Random Forests generate a deterministic predictive model for each drug based on the genetic characterization of the cell lines and ignore the relationships between different drug sensitivities during model generation. This application motivates the need for multivariate ensemble learning techniques that can increase prediction accuracy and improve variable importance ranking by incorporating the relationships between different output responses. In this article, we propose a novel cost criterion that captures the dissimilarity in the output response structure between the training data and node samples as the difference between the two empirical copulas. We illustrate that copulas are suitable for capturing the multivariate structure of output responses independent of the marginal distributions, and that the copula-based multivariate random forest framework can provide higher prediction accuracy and improved variable selection. The proposed framework has been validated on the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia databases.
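The node cost, a difference between two empirical copulas, can be sketched by rank-transforming the responses and comparing the joint CDFs of the pseudo-observations on a grid (the grid size and squared-difference norm are illustrative choices, not necessarily the authors'):

```python
import numpy as np

def empirical_copula(Y, grid):
    """Evaluate the empirical copula of responses Y (n x d) at grid points."""
    n = len(Y)
    # Rank-transform each output dimension to pseudo-observations in (0, 1].
    U = np.argsort(np.argsort(Y, axis=0), axis=0) / n + 1.0 / n
    return np.array([np.mean(np.all(U <= g, axis=1)) for g in grid])

def copula_distance(Y_parent, Y_node, k=10):
    """Sum of squared differences between the two empirical copulas on a
    uniform 2-D grid -- a sketch of the node-cost idea, not the exact one."""
    g = np.linspace(0.1, 1.0, k)
    grid = np.array(np.meshgrid(g, g)).T.reshape(-1, 2)
    return np.sum((empirical_copula(Y_parent, grid)
                   - empirical_copula(Y_node, grid)) ** 2)

rng = np.random.default_rng(4)
z = rng.normal(size=500)
Y_corr = np.column_stack([z, z + 0.1 * rng.normal(size=500)])  # dependent drugs
Y_indep = rng.normal(size=(500, 2))                            # independent
# Dissimilar dependence structures give a large cost; a subsample of the
# same structure gives a small one.
print(copula_distance(Y_corr, Y_indep), copula_distance(Y_corr, Y_corr[:250]))
```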

  2. Entity versus incremental theories predict older adults' memory performance.

    Science.gov (United States)

    Plaks, Jason E; Chasteen, Alison L

    2013-12-01

    The authors examined whether older adults' implicit theories regarding the modifiability of memory in particular (Studies 1 and 3) and abilities in general (Study 2) would predict memory performance. In Study 1, individual differences in older adults' endorsement of the "entity theory" (a belief that one's ability is fixed) or "incremental theory" (a belief that one's ability is malleable) of memory were measured using a version of the Implicit Theories Measure (Dweck, 1999). Memory performance was assessed with a free-recall task. Results indicated that the higher the endorsement of the incremental theory, the better the free recall. In Study 2, older and younger adults' theories were measured using a more general version of the Implicit Theories Measure that focused on the modifiability of abilities in general. Again, for older adults, the higher the incremental endorsement, the better the free recall. Moreover, as predicted, implicit theories did not predict younger adults' memory performance. In Study 3, participants read mock news articles reporting evidence in favor of either the entity or incremental theory. Those in the incremental condition outperformed those in the entity condition on reading span and free-recall tasks. These effects were mediated by pretask worry such that, for those in the entity condition, higher worry was associated with lower performance. Taken together, these studies suggest that variation in entity versus incremental endorsement represents a key predictor of older adults' memory performance. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. Advanced management of pipe wall thinning based on prediction-monitor fusion

    International Nuclear Information System (INIS)

    Kojima, Fumio; Uchida, Shunsuke

    2012-01-01

    This article is concerned with a pipe wall thinning management system based on the hybrid use of simulation and monitoring. First, a computer-aided simulation for predicting the wear rate of the piping system is developed based on elucidation of thinning mechanisms such as flow-accelerated corrosion (FAC). Accurate prediction of the wear rate provides useful information on the regions of interest for inspection. Secondly, several monitoring methods are considered in accordance with the interest of inspection. Thirdly, probability of detection (POD) is considered for the reliability of inspection data. The final part of this article is devoted to how safety performance can be improved through the hybrid use of prediction and monitoring in the proposed pipe wall management system. (author)

  4. Predicting microRNA-disease associations using label propagation based on linear neighborhood similarity.

    Science.gov (United States)

    Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian

    2018-05-12

    Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
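The label-propagation core of such a model can be sketched with the generic iteration F ← αSF + (1 − α)Y over a normalized similarity matrix; note the toy similarities below are random, not the paper's linear neighborhood weights, and LPLNS details such as the interaction likelihood profiles are omitted:

```python
import numpy as np

def label_propagation(S, Y, alpha=0.6, iters=100):
    """Propagate association labels Y over a row-normalized similarity
    matrix S via F <- alpha * S @ F + (1 - alpha) * Y."""
    S = S / S.sum(axis=1, keepdims=True)
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

rng = np.random.default_rng(5)
# Toy symmetric miRNA-miRNA similarity and known miRNA-disease labels.
n_mirna, n_disease = 6, 2
S = rng.random((n_mirna, n_mirna)); S = (S + S.T) / 2
Y = np.zeros((n_mirna, n_disease))
Y[0, 0] = Y[1, 0] = Y[2, 1] = 1        # known associations
F = label_propagation(S, Y)            # scores for unobserved pairs
print(np.round(F, 3))
```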

  5. Performance of Firth-and logF-type penalized methods in risk prediction for small or sparse binary data.

    Science.gov (United States)

    Rahman, M Shafiqur; Sultana, Mahbuba

    2017-02-23

    When developing risk models for binary data with small or sparse data sets, standard maximum likelihood estimation (MLE) based logistic regression faces several problems, including biased or infinite estimates of the regression coefficients and frequent convergence failure of the likelihood due to separation. The problem of separation occurs commonly even if the sample size is large but there is a sufficient number of strong predictors. In the presence of separation, even if one develops the model, it produces an overfitted model with poor predictive performance. Firth- and logF-type penalized regression methods are popular alternatives to MLE, particularly for solving the separation problem. Despite their attractive advantages, their use in risk prediction is very limited. This paper evaluated these methods for risk prediction in comparison with MLE and other commonly used penalized methods such as ridge. The predictive performance of the methods was evaluated by assessing calibration, discrimination and overall predictive performance in an extensive simulation study. Further, an illustration of the methods was provided using a real data example with low prevalence of the outcome. The MLE showed poor performance in risk prediction in small or sparse data sets. All penalized methods offered some improvements in calibration, discrimination and overall predictive performance. Although the Firth- and logF-type methods showed almost equal amounts of improvement, Firth-type penalization produces some bias in the average predicted probability, and the amount of bias is even larger than that produced by MLE. Of the logF(1,1) and logF(2,2) penalizations, logF(2,2) provides a slight bias in the estimate of the regression coefficient of a binary predictor, and logF(1,1) performed better in all aspects. 
Similarly, ridge performed well in discrimination and overall predictive performance but it often produces underfitted model and has high rate of convergence failure (even the rate is higher than that

  6. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations

    Directory of Open Access Journals (Sweden)

    Jian Ou

    2017-03-01

    Full Text Available: The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches have the drawbacks of inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representations (PSRs). With the proposed approach, operating modes of an MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable on the battlefield. This helps to reduce the dependence of MFR recognition on prior information. And MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation brought by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity.

  7. Psychomotor testing predicts rate of skill acquisition for proficiency-based laparoscopic skills training.

    Science.gov (United States)

    Stefanidis, Dimitrios; Korndorffer, James R; Black, F William; Dunne, J Bruce; Sierra, Rafael; Touchard, Cheri L; Rice, David A; Markert, Ronald J; Kastl, Peter R; Scott, Daniel J

    2006-08-01

    Laparoscopic simulator training translates into improved operative performance. Proficiency-based curricula maximize efficiency by tailoring training to meet the needs of each individual; however, because rates of skill acquisition vary widely, such curricula may be difficult to implement. We hypothesized that psychomotor testing would predict baseline performance and training duration in a proficiency-based laparoscopic simulator curriculum. Residents (R1, n = 20) were enrolled in an IRB-approved prospective study at the beginning of the academic year. All completed the following: a background information survey, a battery of 12 innate ability measures (5 motor, and 7 visual-spatial), and baseline testing on 3 validated simulators (5 videotrainer [VT] tasks, 12 virtual reality [minimally invasive surgical trainer-virtual reality, MIST-VR] tasks, and 2 laparoscopic camera navigation [LCN] tasks). Participants trained to proficiency, and training duration and number of repetitions were recorded. Baseline test scores were correlated to skill acquisition rate. Cutoff scores for each predictive test were calculated based on a receiver operator curve, and their sensitivity and specificity were determined in identifying slow learners. Only the Cards Rotation test correlated with baseline simulator ability on VT and LCN. Curriculum implementation required 347 man-hours (6-person team) and 795,000 dollars of capital equipment. With an attendance rate of 75%, 19 of 20 residents (95%) completed the curriculum by the end of the academic year. To complete training, a median of 12 hours (range, 5.5-21), and 325 repetitions (range, 171-782) were required. Simulator score improvement was 50%. Training duration and repetitions correlated with prior video game and billiard exposure, grooved pegboard, finger tap, map planning, Rey Figure Immediate Recall score, and baseline performance on VT and LCN. The map planning cutoff score proved most specific in identifying slow learners

  8. Hybrid ANN–PLS approach to scroll compressor thermodynamic performance prediction

    International Nuclear Information System (INIS)

    Tian, Z.; Gu, B.; Yang, L.; Lu, Y.

    2015-01-01

    In this paper, a scroll compressor thermodynamic performance prediction was carried out by applying a hybrid ANN–PLS model. Firstly, an experimental platform with a secondary-refrigerant calorimeter was set up and steady-state scroll compressor data sets were collected from experiments. Then 148 data sets in total were introduced to train and verify the validity of the ANN–PLS model for predicting scroll compressor parameters such as volumetric efficiency, refrigerant mass flow rate, discharge temperature and power consumption. The ANN–PLS model was determined with 5 hidden neurons and 7 latent variables through the training process. Ultimately, the ANN–PLS model showed better performance than the ANN model and the PLS model working separately. ANN–PLS predictions agree well with the experimental values, with mean relative errors (MREs) in the range of 0.34–1.96%, correlation coefficients (R²) in the range of 0.9703–0.9999 and very low root mean square errors (RMSEs). - Highlights: • Hybrid ANN–PLS is utilized to predict the thermodynamic performance of a scroll compressor. • The ANN–PLS model is determined with 5 hidden neurons and 7 latent variables. • The ANN–PLS model demonstrates better performance than ANN and PLS working separately. • The values of MRE and R² are in the range of 0.34–1.96% and 0.9703–0.9999, respectively

  9. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  10. Deep Belief Network Based Hybrid Model for Building Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Chengdong Li

    2018-01-01

    Full Text Available: To enhance the prediction performance for building energy consumption, this paper presents a modified deep belief network (DBN) based hybrid model. The proposed hybrid model combines the outputs from the DBN model with the energy-consuming pattern to yield the final prediction results. The energy-consuming pattern in this study represents the periodicity property of building energy consumption and can be extracted from the observed historical energy consumption data. The residual data generated by removing the energy-consuming pattern from the original data are utilized to train the modified DBN model. The training of the modified DBN includes two steps: the first adopts the contrastive divergence (CD) algorithm to optimize the hidden parameters in a pre-training way, while the second determines the output weighting vector by the least squares method. The proposed hybrid model is applied to two kinds of building energy consumption data sets that have different energy-consuming patterns (daily periodicity and weekly periodicity). In order to examine the advantages of the proposed model, four popular artificial intelligence methods, the backward propagation neural network (BPNN), the generalized radial basis function neural network (GRBFNN), the extreme learning machine (ELM), and the support vector regressor (SVR), are chosen as the comparative approaches. Experimental results demonstrate that the proposed DBN based hybrid model has the best performance compared with the comparative techniques. Notably, all the predictors constructed by utilizing the energy-consuming patterns perform better than those designed only on the original data. This verifies the usefulness of incorporating the energy-consuming patterns. 
The proposed approach can also be extended and applied to some other similar prediction problems that have periodicity patterns, e.g., the traffic flow forecasting and the electricity consumption
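The decomposition step, extracting a periodic energy-consuming pattern and leaving a residual for the DBN to model, can be sketched for a daily-periodicity series (synthetic load data; the DBN itself is replaced here by a trivial zero-residual predictor to show the recombination):

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic hourly building load over 30 days with a daily periodicity.
hours = np.arange(30 * 24)
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, len(hours))

# Energy-consuming pattern: the average profile per hour-of-day.
pattern = load.reshape(-1, 24).mean(axis=0)   # length-24 template
residual = load - np.tile(pattern, 30)        # what the DBN would be trained on

# Any residual model's output is added back to the pattern; a zero-residual
# stand-in already captures most of the variance of this periodic series.
prediction = np.tile(pattern, 30) + 0.0
rmse = np.sqrt(np.mean((prediction - load) ** 2))
print(f"pattern-only RMSE: {rmse:.2f}")
```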

  11. Burst muscle performance predicts the speed, acceleration, and turning performance of Anna’s hummingbirds

    Science.gov (United States)

    Segre, Paolo S; Dakin, Roslyn; Zordan, Victor B; Dickinson, Michael H; Straw, Andrew D; Altshuler, Douglas L

    2015-01-01

    Despite recent advances in the study of animal flight, the biomechanical determinants of maneuverability are poorly understood. It is thought that maneuverability may be influenced by intrinsic body mass and wing morphology, and by physiological muscle capacity, but this hypothesis has not yet been evaluated because it requires tracking a large number of free flight maneuvers from known individuals. We used an automated tracking system to record flight sequences from 20 Anna's hummingbirds flying solo and in competition in a large chamber. We found that burst muscle capacity predicted most performance metrics. Hummingbirds with higher burst capacity flew with faster velocities, accelerations, and rotations, and they used more demanding complex turns. In contrast, body mass did not predict variation in maneuvering performance, and wing morphology predicted only the use of arcing turns and high centripetal accelerations. Collectively, our results indicate that burst muscle capacity is a key predictor of maneuverability. DOI: http://dx.doi.org/10.7554/eLife.11159.001 PMID:26583753

  12. Free Recall Episodic Memory Performance Predicts Dementia Ten Years prior to Clinical Diagnosis: Findings from the Betula Longitudinal Study

    Directory of Open Access Journals (Sweden)

    Carl-Johan Boraxbekk

    2015-05-01

    Full Text Available: Background/Aims: Early dementia diagnosis is a considerable challenge. The present study examined the predictive value of cognitive performance for a future clinical diagnosis of late-onset Alzheimer's disease or vascular dementia in a random population sample. Methods: Cognitive performance was retrospectively compared between three groups of participants from the Betula longitudinal cohort. Group 1 developed dementia 11-22 years after baseline testing (n = 111) and group 2 after 1-10 years (n = 280); group 3 showed no deterioration towards dementia during the study period (n = 2,855). Multinomial logistic regression analysis was used to investigate the predictive value of tests reflecting episodic memory performance, semantic memory performance, visuospatial ability, and prospective memory performance. Results: Age- and education-corrected performance on two free recall episodic memory tests significantly predicted dementia 10 years prior to clinical diagnosis. Free recall performance also predicted dementia 11-22 years prior to diagnosis when controlling for education, but not when age was added to the model. Conclusion: The present results support the suggestion that two free recall-based tests of episodic memory function may be useful for detecting individuals at risk of developing dementia 10 years prior to clinical diagnosis.

  13. Visuo-motor coordination ability predicts performance with brain-computer interfaces controlled by modulation of sensorimotor rhythms (SMR

    Directory of Open Access Journals (Sweden)

    Eva Maria Hammer

    2014-08-01

    Full Text Available: Modulation of sensorimotor rhythms (SMR) was suggested as a control signal for brain-computer interfaces (BCIs). Yet, there is a population of users, estimated between 10 and 50%, not able to achieve reliable control, and only about 20% of users achieve high (80-100%) performance. Predicting performance prior to BCI use would facilitate selection of the most feasible system for an individual, thus constituting a practical benefit for the user, and increase our knowledge about the correlates of BCI control. In a recent study, we predicted SMR-BCI performance from psychological variables that were assessed prior to the BCI sessions, and BCI control was supported with machine-learning techniques. We described two significant psychological predictors, namely visuo-motor coordination ability and the ability to concentrate on the task. The purpose of the current study was to replicate these results, thereby validating these predictors within a neurofeedback-based SMR-BCI that involved no machine learning. Thirty-three healthy BCI novices participated in a calibration session and three further neurofeedback training sessions. Two variables were related to mean SMR-BCI performance: (1) a measure of the accuracy of fine motor skills, i.e., an indicator of a person's visuo-motor control ability, and (2) the subject's attentional impulsivity. In a linear regression they accounted for almost 20% of the variance in SMR-BCI performance, but predictor (1) failed to reach significance. Nevertheless, on the basis of our prior regression model for sensorimotor control ability we could predict current SMR-BCI performance with an average prediction error of M = 12.07%. In more than 50% of the participants, the prediction error was smaller than 10%. Hence, psychological variables played a moderate role in predicting SMR-BCI performance in a neurofeedback approach that involved no machine learning. Future studies are needed to further consolidate (or reject) the present predictors.

  14. Dst Prediction Based on Solar Wind Parameters

    Directory of Open Access Journals (Sweden)

    Yoon-Kyung Park

    2009-12-01

    Full Text Available: We reevaluate the Burton equation (Burton et al. 1975) for predicting the Dst index using high-quality hourly solar wind data supplied by the ACE satellite for the period from 1998 to 2006. Sixty magnetic storms with a monotonically decreasing main phase are selected. In order to determine the injection term (Q) and the decay time (tau) of the equation, we examine the relationships between Dst* and VBs, ΔDst* and VBs, and ΔDst* and Dst* during the magnetic storms. For this analysis, we take into account one hour of propagation time from the ACE satellite to the magnetopause, and a half hour of response time of the magnetosphere/ring current to the solar wind forcing. The injection term is found to be Q (nT/h) = -3.56 VBs for VBs > 0.5 mV/m and Q (nT/h) = 0 for VBs ≤ 0.5 mV/m. The decay time tau (hours) is estimated as 0.060 Dst* + 16.65 for Dst* > -175 nT and 6.15 hours for Dst* ≤ -175 nT. Based on these empirical relationships, we predict the 60 magnetic storms and find that the correlation coefficient between the observed and predicted Dst* is 0.88. To evaluate the performance of our prediction scheme, the 60 magnetic storms are predicted again using the models by Burton et al. (1975) and O'Brien & McPherron (2000a). The correlation coefficients thus obtained are 0.85, the same value for both of the two models. In this respect, our model is slightly improved over the other two models as far as the correlation coefficient is concerned. In particular, our model does a better job than the other two models in predicting intense magnetic storms (Dst* ≲ -200 nT).
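The stated empirical relationships fully specify the Burton model dDst*/dt = Q − Dst*/tau, so a storm can be integrated numerically. The hourly Euler sketch below uses the record's Q and tau fits with an invented VBs driving profile:

```python
def Q(vbs):
    """Injection term [nT/h] from the record's empirical fit."""
    return -3.56 * vbs if vbs > 0.5 else 0.0

def tau(dst):
    """Ring-current decay time [h] from the record's empirical fit."""
    return 0.060 * dst + 16.65 if dst > -175 else 6.15

# Hourly Euler integration of dDst*/dt = Q - Dst*/tau for a storm with
# 12 hours of strong driving (VBs = 5 mV/m) followed by 48 h of recovery.
vbs_series = [5.0] * 12 + [0.0] * 48
dst = 0.0
history = []
for vbs in vbs_series:
    dst += Q(vbs) - dst / tau(dst)   # dt = 1 h
    history.append(dst)

print(f"minimum Dst*: {min(history):.1f} nT")
```

The main phase drives Dst* toward its quasi-equilibrium Q·tau, and the recovery phase decays exponentially with the Dst*-dependent time constant.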

  15. Parametric investigation to enhance the performance of a PBI-based high-temperature PEMFC

    International Nuclear Information System (INIS)

    Ferng, Y.M.; Su, A.; Hou, J.

    2014-01-01

    Highlights: • An in-house PBI PEMFC is prepared by the Fuel Cell Center of Yuan Ze University. • Parametric effects to enhance the PBI-based PEMFC performance are investigated. • Experiments and simulations are performed to study these parametric effects. • Cell performance is enhanced with lower PBI loading and higher temperature. • Thinner CL thickness and higher acid doping also benefit the cell performance. - Abstract: With the advantages of simpler heat and water management, lower CO poisoning, and higher reaction kinetics, the high-temperature polybenzimidazole (PBI)-based proton exchange membrane fuel cell (PEMFC) can be considered one of the candidate commercial energy generators of the near future. This paper experimentally and analytically investigates different design and operating parameters to enhance the performance of a PBI-based PEMFC, an in-house cell prepared in the Fuel Cell Center of Yuan Ze University. The parameters studied include PBI loading, operating temperature, gas flowrate, electrode thickness and porosity, and acid doping level. Experiments are performed to study the effects of PBI loading, operating temperature, and gas flowrate on the cell performance. Validated against the measured polarization and power curves, a simplified two-dimensional model of this PBI-based PEMFC is also developed to complement the experiments in investigating the other parameters. Based on the experimental data and the model predictions, the cell performance is enhanced as the PBI loading is reduced and the operating temperature is elevated. A thinner electrode, smaller porosity, and higher acid doping level are also predicted to benefit the performance of the PBI-based PEMFC

  16. Prediction of shallow landslide occurrence: Validation of a physically-based approach through a real case study.

    Science.gov (United States)

    Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele

    2016-11-01

    In recent years, physically-based numerical models have frequently been used in the framework of early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. For this reason, in this work we describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. In order to test the reliability of this model, a back analysis of recent landslide events that occurred in the study area (located SW of Messina, northeastern Sicily, Italy) on October 1st, 2009 was performed. The simulation results have been compared with those obtained for the same event by using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation over a 2-year period was performed for the same area, with the aim of evaluating the performance of SLIP as an early-warning tool. The results confirm the good predictive capability of the model, in terms of both spatial and temporal prediction of the instability phenomena. For this reason, we recommend an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, based on the use of SLIP calibrated through a specific multi-methodological approach. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Predicting law enforcement officer job performance with the Personality Assessment Inventory.

    Science.gov (United States)

    Lowmaster, Sara E; Morey, Leslie C

    2012-01-01

    This study examined the descriptive and predictive characteristics of the Personality Assessment Inventory (PAI; Morey, 1991) in a sample of 85 law enforcement officer candidates. Descriptive results indicate that mean PAI full-scale and subscale scores are consistently lower than normative community sample scores, with some exceptions noted typically associated with defensive responding. Predictive validity was examined by relating PAI full-scale and subscale scores to supervisor ratings in the areas of job performance, integrity problems, and abuse of disability status. Modest correlations were observed for all domains; however, predictive validity was moderated by defensive response style, with greater predictive validity observed among less defensive responders. These results suggest that the PAI's full scales and subscales are able to predict law enforcement officers' performance, but their utility is appreciably improved when taken in the context of indicators of defensive responding.

  18. Predicting Document Retrieval System Performance: An Expected Precision Measure.

    Science.gov (United States)

    Losee, Robert M., Jr.

    1987-01-01

    Describes an expected precision (EP) measure designed to predict document retrieval performance. Highlights include decision theoretic models; precision and recall as measures of system performance; EP graphs; relevance feedback; and computing the retrieval status value of a document for two models, the Binary Independent Model and the Two Poisson…

  19. Long‐Term Post‐CABG Survival: Performance of Clinical Risk Models Versus Actuarial Predictions

    Science.gov (United States)

    Carr, Brendan M.; Romeiser, Jamie; Ruan, Joyce; Gupta, Sandeep; Seifert, Frank C.; Zhu, Wei

    2015-01-01

    Abstract Background/aim Clinical risk models are commonly used to predict short‐term coronary artery bypass grafting (CABG) mortality but are less commonly used to predict long‐term mortality. The added value of long‐term mortality clinical risk models over traditional actuarial models has not been evaluated. To address this, the predictive performance of a long‐term clinical risk model was compared with that of an actuarial model to identify the clinical variable(s) most responsible for any differences observed. Methods Long‐term mortality for 1028 CABG patients was estimated using the Hannan New York State clinical risk model and an actuarial model (based on age, gender, and race/ethnicity). Vital status was assessed using the Social Security Death Index. Observed/expected (O/E) ratios were calculated, and the models' predictive performances were compared using a nested c‐index approach. Linear regression analyses identified the subgroup of risk factors driving the differences observed. Results Mortality rates were 3%, 9%, and 17% at one‐, three‐, and five years, respectively (median follow‐up: five years). The clinical risk model provided more accurate predictions. Greater divergence between model estimates occurred with increasing long‐term mortality risk, with baseline renal dysfunction identified as a particularly important driver of these differences. Conclusions Long‐term mortality clinical risk models provide enhanced predictive power compared to actuarial models. Using the Hannan risk model, a patient's long‐term mortality risk can be accurately assessed and subgroups of higher‐risk patients can be identified for enhanced follow‐up care. More research appears warranted to refine long‐term CABG clinical risk models. doi: 10.1111/jocs.12665 (J Card Surg 2016;31:23–30) PMID:26543019

  20. A New Hybrid Method for Improving the Performance of Myocardial Infarction Prediction

    Directory of Open Access Journals (Sweden)

    Hojatollah Hamidi

    2016-06-01

    Full Text Available Abstract Introduction: Myocardial Infarction, also known as heart attack, normally occurs due to causes such as smoking, family history, diabetes, and so on. It is recognized as one of the leading causes of death in the world. Therefore, the present study aimed to evaluate the performance of classification models in predicting Myocardial Infarction, using a feature selection method that combines Forward Selection and a Genetic Algorithm. Materials & Methods: The Myocardial Infarction data set used in this study contains information on 519 visitors to Shahid Madani Specialized Hospital of Khorramabad, Iran, and includes 33 features. The proposed method uses a hybrid feature selection approach to enhance the performance of classification algorithms. The first step selects features using Forward Selection; in the second step, the selected features are passed to a Genetic Algorithm, which selects the best subset. The classification algorithms AdaBoost, Naïve Bayes, J48 decision tree, and SimpleCART are then applied to the data set with the selected features to predict Myocardial Infarction. Results: The best results were achieved after applying the proposed feature selection method, obtained via the SimpleCART and J48 algorithms with accuracies of 96.53% and 96.34%, respectively. Conclusion: Based on the results, the performance of the classification algorithms is improved, so applying the proposed feature selection method along with classification algorithms appears to be a reliable approach to predicting Myocardial Infarction.
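    The first (Forward Selection) stage of such a hybrid pipeline can be sketched with scikit-learn; this is a schematic on synthetic data of the same shape as the study's set (519 records, 33 features), not the hospital data, and the Genetic Algorithm refinement stage is omitted:

```python
# Stage 1 (Forward Selection) of the hybrid feature-selection pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier  # CART-style stand-in for SimpleCART

X, y = make_classification(n_samples=519, n_features=33, n_informative=8,
                           random_state=0)
cart = DecisionTreeClassifier(random_state=0)
selector = SequentialFeatureSelector(cart, n_features_to_select=10,
                                     direction="forward", cv=5)
X_sel = selector.fit_transform(X, y)               # keep the 10 best features
acc = cross_val_score(cart, X_sel, y, cv=5).mean() # accuracy on the reduced set
```

In the full method the surviving features would then be re-scored by a genetic algorithm before the final classifiers are trained.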

  1. EP-DNN: A Deep Neural Network-Based Global Enhancer Prediction Algorithm.

    Science.gov (United States)

    Kim, Seong Gon; Harwani, Mrudul; Grama, Ananth; Chaterji, Somali

    2016-12-08

    We present EP-DNN, a protocol for predicting enhancers based on chromatin features, in different cell types. Specifically, we use a deep neural network (DNN)-based architecture to extract enhancer signatures in a representative human embryonic stem cell type (H1) and a differentiated lung cell type (IMR90). We train EP-DNN using p300 binding sites, as enhancers, and TSS and random non-DHS sites, as non-enhancers. We perform same-cell and cross-cell predictions to quantify the validation rate and compare against two state-of-the-art methods, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy with a validation rate of 91.6%, relative to 85.3% for DEEP-ENCODE and 85.5% for RFECS, for a given number of enhancer predictions and also scales better for a larger number of enhancer predictions. Moreover, our H1 → IMR90 predictions turn out to be more accurate than IMR90 → IMR90, potentially because H1 exhibits a richer signature set and our EP-DNN model is expressive enough to extract these subtleties. Our work shows how to leverage the full expressivity of deep learning models, using multiple hidden layers, while avoiding overfitting on the training data. We also lay the foundation for exploration of cross-cell enhancer predictions, potentially reducing the need for expensive experimentation.
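    The classification setup can be sketched schematically: a network with multiple hidden layers separating enhancers from non-enhancers given chromatin-feature vectors. Feature values and dimensions below are synthetic stand-ins, not the H1/IMR90 data, and scikit-learn's MLP stands in for the paper's architecture:

```python
# Toy enhancer/non-enhancer classifier with multiple hidden layers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_sites, n_features = 2000, 24                   # assumed sizes
X = rng.normal(size=(n_sites, n_features))       # stand-in chromatin signals
y = (X[:, :4].sum(axis=1) > 0).astype(int)       # toy enhancer label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
dnn = MLPClassifier(hidden_layer_sizes=(64, 32, 16),  # several hidden layers
                    alpha=1e-3,                       # L2 penalty against overfitting
                    max_iter=500, random_state=0)
dnn.fit(X_tr, y_tr)
acc = dnn.score(X_te, y_te)                      # held-out accuracy
```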

  3. Evaluation of Design & Analysis Code, CACTUS, for Predicting Crossflow Hydrokinetic Turbine Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wosnik, Martin [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Bachant, Pete [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murphy, Andrew W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical-axis and horizontal-axis wind turbines, and has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines had yet to be tested. The present study addresses this gap by comparing the performance curves predicted by CACTUS simulations of the U.S. Department of Energy’s 1:6 scale reference model crossflow turbine to those derived from experimental tow-tank measurements of the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of the poor model prediction. A comparison of several NACA 0021 foil data sources, derived from both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are therefore advised to limit its application to higher tip speed ratios (lower AoA) and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.

  4. Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis.

    Science.gov (United States)

    van der Burgh, Hannelore K; Schmidt, Ruben; Westeneng, Henk-Jan; de Reus, Marcel A; van den Berg, Leonard H; van den Heuvel, Martijn P

    2017-01-01

    Amyotrophic lateral sclerosis (ALS) is a progressive neuromuscular disease, with large variation in survival between patients. Currently, it remains rather difficult to predict survival based on clinical parameters alone. Here, we set out to use clinical characteristics in combination with MRI data to predict survival of ALS patients using deep learning, a machine learning technique highly effective in a broad range of big-data analyses. A group of 135 ALS patients was included from whom high-resolution diffusion-weighted and T1-weighted images were acquired at the first visit to the outpatient clinic. Next, each of the patients was monitored carefully and survival time to death was recorded. Patients were labeled as short, medium or long survivors, based on their recorded time to death as measured from the time of disease onset. In the deep learning procedure, the total group of 135 patients was split into a training set for deep learning (n = 83 patients), a validation set (n = 20) and an independent evaluation set (n = 32) to evaluate the performance of the obtained deep learning networks. Deep learning based on clinical characteristics predicted survival category correctly in 68.8% of the cases. Deep learning based on MRI predicted 62.5% correctly using structural connectivity and 62.5% using brain morphology data. Notably, when we combined the three sources of information, deep learning prediction accuracy increased to 84.4%. Taken together, our findings show the added value of MRI with respect to predicting survival in ALS, demonstrating the advantage of deep learning in disease prognostication.

  5. SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal

    Science.gov (United States)

    Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.

    2017-01-01

    Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
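    The classification core of such a system can be sketched with scikit-learn: an RBF-kernel SVM separating preictal from interictal segments under unbalanced training data, as in the paper's setting. The 12-dimensional feature vectors below are synthetic, not iEEG-derived:

```python
# SVM on unbalanced preictal/interictal toy data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
interictal = rng.normal(0.0, 1.0, size=(900, 12))   # abundant interictal segments
preictal = rng.normal(1.5, 1.0, size=(100, 12))     # rare preictal segments
X = np.vstack([interictal, preictal])
y = np.array([0] * 900 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" re-weights the loss to compensate for class imbalance
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
sensitivity = clf.score(X_te[y_te == 1], y_te[y_te == 1])  # preictal recall
```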

  6. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM model was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct prediction against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting.

  7. Firm Culture and Leadership as Firm Performance Predictors : a Resource-Based Perspective

    NARCIS (Netherlands)

    Wilderom, C.P.M.; van den Berg, P.

    2000-01-01

    In this study, we tested part of the resource-based view of the firm by examining two 'soft' resources, firm culture and top leadership, as predictors of 'hard' or bottom-line firm performance. Transformational top leadership was found to predict firm performance directly while the link between firm

  8. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    Science.gov (United States)

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility to three methods for predicting protein interactions based on similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it neither improves these methods, nor the original mirrortree method, in predicting other types of interactions. That improvement comes at no cost in terms of applicability since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.
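    A minimal mirrortree-style score, a sketch of the method family rather than the authors' code, quantifies co-evolution as the Pearson correlation between the inter-ortholog distance matrices of two proteins:

```python
# Mirrortree-style co-evolution score on toy distance matrices.
import numpy as np

def mirrortree_score(dist_a, dist_b):
    """Pearson correlation of the upper triangles of two distance matrices."""
    iu = np.triu_indices_from(dist_a, k=1)
    return np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]

rng = np.random.default_rng(0)
n_species = 10
shared = rng.random((n_species, n_species))
shared = (shared + shared.T) / 2                 # shared phylogenetic signal
noise = rng.random((n_species, n_species))
noise = (noise + noise.T) / 2
dist_a = shared
dist_b = 0.8 * shared + 0.2 * noise              # partner evolving in concert
score = mirrortree_score(dist_a, dist_b)         # high score suggests interaction
```

The accessibility-aware variants discussed above would restrict the alignment columns that feed these distance matrices to surface or near-surface positions.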

  9. Transcription factor binding sites prediction based on modified nucleosomes.

    Directory of Open Access Journals (Sweden)

    Mohammad Talebzadeh

    Full Text Available In computational methods, position weight matrices (PWMs) are commonly applied for transcription factor binding site (TFBS) prediction. Although these matrices are more accurate than simple consensus sequences for predicting actual binding sites, they usually produce a large number of false positive (FP) predictions and so are impoverished sources of information. Several studies have employed additional sources of information, such as sequence conservation or the vicinity to transcription start sites, to distinguish true binding regions from random ones. Recently, the spatial distribution of modified nucleosomes has been shown to be associated with different promoter architectures. These aligned patterns can facilitate DNA accessibility for transcription factors. We hypothesize that using data from these aligned and periodic patterns can improve the performance of binding region prediction. In this study, we propose two effective features, "modified nucleosomes neighboring" and "modified nucleosomes occupancy", to decrease FP in binding site discovery. Based on these features, we designed a logistic regression classifier which estimates the probability that a region is a TFBS. Our model learned each feature based on Sp1 binding sites on chromosome 1 and was tested on the other chromosomes in human CD4+ T cells. In this work, we investigated 21 histone modifications and found that only 8 of the 21 marks are strongly correlated with transcription factor binding regions. To prove that these features are not specific to Sp1, we combined the logistic regression classifier with the PWM and created a new model to search for TFBSs on the genome. We tested the model using the transcription factors MAZ, PU.1 and ELF1 and compared the results to those using only the PWM. The results show that our model can predict transcription factor binding regions more successfully. The relative simplicity of the model and capability of integrating other features make it a superior method
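    The classifier described above can be sketched as a logistic regression combining a PWM score with the two nucleosome features. All values below are simulated; the feature names only mirror the ones proposed in the abstract:

```python
# Logistic-regression TFBS classifier on simulated features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
pwm_score = rng.normal(size=n)
nuc_neighboring = rng.normal(size=n)   # "modified nucleosomes neighboring"
nuc_occupancy = rng.normal(size=n)     # "modified nucleosomes occupancy"
logit = 1.5 * pwm_score + nuc_neighboring + nuc_occupancy
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # simulated labels

X = np.column_stack([pwm_score, nuc_neighboring, nuc_occupancy])
model = LogisticRegression().fit(X, y)
prob_tfbs = model.predict_proba(X)[:, 1]   # estimated probability a region is a TFBS
```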

  10. Rotary engine performance limits predicted by a zero-dimensional model

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1992-01-01

    A parametric study was performed to determine the performance limits of a rotary combustion engine. This study shows how well increasing the combustion rate, insulating, and turbocharging increase brake power and decrease fuel consumption. Several generalizations can be made from the findings. First, it was shown that the fastest combustion rate is not necessarily the best combustion rate. Second, several engine insulation schemes were employed for a turbocharged engine. Performance improved only for a highly insulated engine. Finally, the variability of turbocompounding and the influence of exhaust port shape were calculated. Rotary engines performance was predicted by an improved zero-dimensional computer model based on a model developed at the Massachusetts Institute of Technology in the 1980's. Independent variables in the study include turbocharging, manifold pressures, wall thermal properties, leakage area, and exhaust port geometry. Additions to the computer programs since its results were last published include turbocharging, manifold modeling, and improved friction power loss calculation. The baseline engine for this study is a single rotor 650 cc direct-injection stratified-charge engine with aluminum housings and a stainless steel rotor. Engine maps are provided for the baseline and turbocharged versions of the engine.

  11. The prediction of swimming performance in competition from behavioral information.

    Science.gov (United States)

    Rushall, B S; Leet, D

    1979-06-01

    The swimming performances of the Canadian Team at the 1976 Olympic Games were categorized as improved or worse than previous best times in the events contested. The two groups had previously been assessed on the Psychological Inventories for Competitive Swimmers. A stepwise multiple-discriminant analysis of the inventory responses revealed that 13 test questions produced a perfect discrimination of group membership. The resultant discriminant functions for predicting performance classification were applied to the test responses of 157 swimmers at the 1977 Canadian Winter National Swimming Championships. Using the same performance classification criteria, the accuracy of prediction was no better than chance in three of four sex-by-performance classifications. The study thus failed to locate a set of behavioral factors that determine swimming performance improvements in elite competitive circumstances. The possibility of sets of factors that do not discriminate between performances in similar environments or between similar groups of swimmers was raised.

  12. Playing off the curve - testing quantitative predictions of skill acquisition theories in development of chess performance.

    Science.gov (United States)

    Gaschler, Robert; Progscha, Johanna; Smallbone, Kieran; Ram, Nilam; Bilalić, Merim

    2014-01-01

    Learning curves have been proposed as an adequate description of learning processes, no matter whether the processes manifest within minutes or across years. Different mechanisms underlying skill acquisition can lead to differences in the shape of learning curves. In the current study, we analyze the tournament performance data of 1383 chess players who begin competing at young age and play tournaments for at least 10 years. We analyze the performance development with the goal to test the adequacy of learning curves, and the skill acquisition theories they are based on, for describing and predicting expertise acquisition. On the one hand, we show that the skill acquisition theories implying a negative exponential learning curve do a better job in both describing early performance gains and predicting later trajectories of chess performance than those theories implying a power function learning curve. On the other hand, the learning curves of a large proportion of players show systematic qualitative deviations from the predictions of either type of skill acquisition theory. While skill acquisition theories predict larger performance gains in early years and smaller gains in later years, a substantial number of players begin to show substantial improvements with a delay of several years (and no improvement in the first years), deviations not fully accounted for by quantity of practice. The current work adds to the debate on how learning processes on a small time scale combine to large-scale changes.
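    The two competing curve families from the study can be fit and compared directly. The sketch below generates a synthetic performance trajectory from the exponential form (parameter values are illustrative) and fits both candidates:

```python
# Exponential vs. power-law learning curves fit with least squares.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, b, r):           # asymptote a, initial gap b, rate r
    return a - b * np.exp(-r * t)

def power_law(t, a, b, r):
    return a - b * (t + 1.0) ** (-r)

t = np.arange(1, 11, dtype=float)      # ten years of tournament play
rng = np.random.default_rng(0)
ratings = exponential(t, 2000.0, 600.0, 0.5) + rng.normal(0.0, 10.0, t.size)

pe, _ = curve_fit(exponential, t, ratings, p0=(2000.0, 600.0, 0.5))
pp, _ = curve_fit(power_law, t, ratings, p0=(2000.0, 600.0, 1.0))
sse_exp = float(np.sum((ratings - exponential(t, *pe)) ** 2))
sse_pow = float(np.sum((ratings - power_law(t, *pp)) ** 2))
```

Comparing the residual sums of squares of the two fits is one way to decide which skill-acquisition theory better describes a player's trajectory.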

  13. Predictive control strategies for wind turbine system based on permanent magnet synchronous generator.

    Science.gov (United States)

    Maaoui-Ben Hassine, Ikram; Naouar, Mohamed Wissem; Mrabet-Bellaaj, Najiba

    2016-05-01

    In this paper, Model Predictive Control (MPC) and dead-beat predictive control strategies are proposed for the control of a PMSG-based wind energy system. The proposed MPC considers the model of the converter-based system to forecast the possible future behavior of the controlled variables, selecting the voltage vector to be applied that leads to a minimum error by minimizing a predefined cost function. The main features of the MPC are low current THD and robustness against parameter variations. The dead-beat predictive control is based on the system model to compute the optimum voltage vector that ensures zero steady-state error; the optimum voltage vector is then applied through the Space Vector Modulation (SVM) technique. The main advantages of the dead-beat predictive control are low current THD and constant switching frequency. The proposed control techniques are presented and detailed for the control of the back-to-back converter in a PMSG-based wind turbine system. Simulation results (under the Matlab-Simulink environment) and experimental results (on a developed prototyping platform) are presented to show the performance of the considered control strategies. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
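    The cost-minimizing vector selection can be shown in miniature as finite-control-set MPC: enumerate the eight two-level inverter voltage vectors, predict the next stator current with a one-step model, and apply the vector minimizing the cost. Machine parameters and the simple RL + back-EMF model are illustrative, not the paper's plant:

```python
# Finite-control-set MPC: pick the voltage vector minimizing predicted current error.
import numpy as np

R, L, Ts = 0.5, 1e-2, 1e-4                  # resistance (ohm), inductance (H), sample time (s)
vdc = 100.0                                 # DC-link voltage (V)
i_now, i_ref = 2.0 + 0.5j, 3.0 + 0.0j       # measured / reference current (complex alpha-beta)
e_back = 1.0 + 0.0j                         # back-EMF term (V)

# six active vectors plus two zero vectors of a two-level inverter
vectors = [0j] + [2.0 / 3.0 * vdc * np.exp(1j * k * np.pi / 3.0) for k in range(6)] + [0j]

def predict(i, v):
    """One-step Euler prediction of the current from the RL + back-EMF model."""
    return i + Ts / L * (v - R * i - e_back)

costs = [abs(i_ref - predict(i_now, v)) for v in vectors]
best = vectors[int(np.argmin(costs))]       # voltage vector applied in the next period
```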

  14. Essays on predictability of emerging markets growth and financial performance

    OpenAIRE

    Banegas, Maria Ayelen

    2011-01-01

    This dissertation seeks to better understand the underlying factors driving financial performance and economic activity in international markets. The first chapter "Predictability of Growth in Emerging Markets: Information in Financial Aggregates" tests for predictability of output growth in a panel of twenty-two emerging market economies. I use pooled panel data methods that control for endogeneity and persistence in the predictor variables to test the predictive power of a large set of fina...

  15. The joint effects of personality and workplace social exchange relationships in predicting task performance and citizenship performance.

    Science.gov (United States)

    Kamdar, Dishan; Van Dyne, Linn

    2007-09-01

    This field study examines the joint effects of social exchange relationships at work (leader-member exchange and team-member exchange) and employee personality (conscientiousness and agreeableness) in predicting task performance and citizenship performance. Consistent with trait activation theory, matched data on 230 employees, their coworkers, and their supervisors demonstrated interactions in which high quality social exchange relationships weakened the positive relationships between personality and performance. Results demonstrate the benefits of consonant predictions in which predictors and outcomes are matched on the basis of specific targets. We discuss theoretical and practical implications. (c) 2007 APA.

  16. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
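    The statistical idea above can be sketched in a few lines: estimate the joint pdf of (covariate, response) with a Gaussian KDE, then predict the future position as the conditional mean. A toy sinusoid stands in for the breathing trace, and the single-lag covariate is a deliberate simplification of the historical sample vector:

```python
# KDE-based conditional-mean predictor on a toy respiratory trace.
import numpy as np
from scipy.stats import gaussian_kde

t = np.arange(0.0, 60.0, 0.2)
trace = np.sin(2.0 * np.pi * t / 4.0)        # ~4 s respiratory period (toy)
lag = 3                                      # predict 0.6 s ahead
cov, resp = trace[:-lag], trace[lag:]        # historical sample -> future position

kde = gaussian_kde(np.vstack([cov, resp]))   # joint pdf estimate p(x, y)

def predict(x_obs, grid=np.linspace(-1.2, 1.2, 241)):
    """Conditional-mean estimator E[y | x = x_obs] under the KDE."""
    density = kde(np.vstack([np.full_like(grid, x_obs), grid]))
    return float(np.sum(grid * density) / np.sum(density))

pred = predict(trace[-1])                    # predicted future target position
```

Because the whole conditional distribution is available, other estimators (median, mode) can be derived from the same KDE without refitting.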

  17. Post processing of protein-compound docking for fragment-based drug discovery (FBDD): in-silico structure-based drug screening and ligand-binding pose prediction.

    Science.gov (United States)

    Fukunishi, Yoshifumi

    2010-01-01

    For fragment-based drug development, both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.

  18. Evaluation of a Nutrition Model in Predicting Performance of Vietnamese Cattle

    Directory of Open Access Journals (Sweden)

    David Parsons

    2012-09-01

    level 1 solution can predict DMI reasonably well for this type of animal, but it was not entirely clear if animals consumed at their voluntary intake and/or if the roughness of the diet decreased DMI. A deficit of ruminally-undegradable protein and/or a lack of microbial protein may have limited the performance of these animals. Based on these evaluations, the LRNS level 1 solution may be an alternative to predict animal performance when, under specific circumstances, the fractional degradation rates of the carbohydrate and protein fractions are not known.

  19. STUDENT ACADEMIC PERFORMANCE PREDICTION USING SUPPORT VECTOR MACHINE

    OpenAIRE

    S.A. Oloruntoba; J.L. Akinode

    2017-01-01

    This paper investigates the relationship between students' preadmission academic profile and final academic performance. A data sample of students at one of the Federal Polytechnics in the south-west part of Nigeria was used. The preadmission academic profile used for this study is the 'O' level grades (terminal high school results). The academic performance is defined using the student's Grade Point Average (GPA). This research focused on using a data mining technique to develop a model for predicting stude...

  20. Prediction of Rowing Ergometer Performance from Functional Anaerobic Power, Strength and Anthropometric Components

    Directory of Open Access Journals (Sweden)

    Akça Firat

    2014-07-01

    Full Text Available The aim of this research was to develop different regression models to predict 2000 m rowing ergometer performance with the use of anthropometric, anaerobic and strength variables and to determine how precisely the prediction models constituted by different variables predict performance, when conducted together in the same equation or individually. 38 male collegiate rowers (20.17 ± 1.22 years) participated in this study. Anthropometric, strength, 2000 m maximal rowing ergometer and rowing anaerobic power tests were applied. Multiple linear regression procedures were employed in SPSS 16 to constitute five different regression formulas using a different group of variables. The reliability of the regression models was expressed by R2 and the standard error of estimate (SEE). Relationships of all parameters with performance were investigated through Pearson correlation coefficients. The prediction model using a combination of anaerobic, strength and anthropometric variables was found to be the most reliable equation to predict 2000 m rowing ergometer performance (R2 = 0.92, SEE = 3.11 s). Besides, the equation that used rowing anaerobic and strength test results also provided a reliable prediction (R2 = 0.85, SEE = 4.27 s). As a conclusion, it seems clear that physiological determinants which are affected by anaerobic energy pathways should also get involved in the processes and models used for performance prediction and talent identification in rowing.
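The R2 and SEE statistics reported in this record come from ordinary least squares. A minimal sketch of that computation on synthetic stand-in data (the study's actual predictor values are not reproduced here, so every number below is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for three predictors (e.g. anthropometric, strength,
# anaerobic); hypothetical values, not the study's measurements.
n = 38
X = rng.normal(size=(n, 3))
y = 420 + X @ np.array([-6.0, -4.0, -8.0]) + rng.normal(scale=3.0, size=n)  # 2000 m time, s

# Ordinary least squares with an intercept column, as in a standard
# multiple linear regression procedure.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                  # coefficient of determination
see = np.sqrt(ss_res / (n - A.shape[1]))  # standard error of estimate

print(f"R^2 = {r2:.2f}, SEE = {see:.2f} s")
```

Note that the SEE divides by the residual degrees of freedom (n minus the number of fitted coefficients), which is why adding predictors can raise R2 while barely moving the SEE.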

  1. A link prediction method for heterogeneous networks based on BP neural network

    Science.gov (United States)

    Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu

    2018-04-01

    Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on the dataset of examples of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. It shows that the MPBP with very good performance is superior to the baseline methods.
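A three-layer BP (backpropagation) network of the kind MPBP builds on can be sketched in plain numpy. The feature vectors below are hypothetical stand-ins for meta-path features of node pairs, not the paper's data, and the architecture details are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy link-prediction data: positive pairs cluster around +1 in feature
# space, negative pairs around -1 (hypothetical meta-path features).
n, d, h = 200, 4, 8
y = rng.integers(0, 2, size=n).astype(float)
X = rng.normal(size=(n, d)) + (2 * y[:, None] - 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three-layer network (input -> hidden -> output) trained by plain
# gradient descent on the cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0

lr = 0.5
for _ in range(300):
    H = sigmoid(X @ W1 + b1)          # forward pass
    p = sigmoid(H @ W2 + b2)
    g = (p - y) / n                   # dLoss/dlogit for sigmoid + cross-entropy
    dW2 = H.T @ g; db2 = g.sum()      # backpropagate through each layer
    dH = np.outer(g, W2) * H * (1 - H)
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = float(np.mean((p > 0.5) == (y == 1)))
print(f"training accuracy: {acc:.2f}")
```

The iterative training loop mirrors the abstract's description of obtaining predictions "by iteratively training the network"; extracting real meta-path features from a heterogeneous network is the part this sketch leaves out.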

  2. Quantitative Prediction of Coalbed Gas Content Based on Seismic Multiple-Attribute Analyses

    Directory of Open Access Journals (Sweden)

    Renfang Pan

    2015-09-01

    Full Text Available Accurate prediction of gas planar distribution is crucial to selection and development of new CBM exploration areas. Based on seismic attributes, well logging and testing data we found that seismic absorption attenuation, after eliminating the effects of burial depth, shows an evident correlation with CBM gas content; positive structure curvature has a negative correlation with gas content; and density has a negative correlation with gas content. It is feasible to use the hydrocarbon index (P*G) and pseudo-Poisson ratio attributes for detection of gas enrichment zones. Based on seismic multiple-attribute analyses, a multiple linear regression equation was established between the seismic attributes and gas content at the drilling wells. Application of this equation to the seismic attributes at locations other than the drilling wells yielded a quantitative prediction of planar gas distribution. Prediction calculations were performed for two different models, one using pre-stack inversion and the other one disregarding pre-stack inversion. A comparison of the results indicates that both models predicted a similar trend for gas content distribution, except that the model using pre-stack inversion yielded a prediction result with considerably higher precision than the other model.

  3. Operational Numerical Weather Prediction systems based on Linux cluster architectures

    International Nuclear Information System (INIS)

    Pasqui, M.; Baldi, M.; Gozzini, B.; Maracchi, G.; Giuliani, G.; Montagnani, S.

    2005-01-01

    The progress in weather forecast and atmospheric science has been always closely linked to the improvement of computing technology. In order to have more accurate weather forecasts and climate predictions, more powerful computing resources are needed, in addition to more complex and better-performing numerical models. To overcome such a large computing request, powerful workstations or massive parallel systems have been used. In the last few years, parallel architectures, based on the Linux operating system, have been introduced and became popular, representing real high performance-low cost systems. In this work the Linux cluster experience achieved at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and tips and performances are analysed

  4. A control method for agricultural greenhouses heating based on computational fluid dynamics and energy prediction model

    International Nuclear Information System (INIS)

    Chen, Jiaoliao; Xu, Fang; Tan, Dapeng; Shen, Zheng; Zhang, Libin; Ai, Qinglin

    2015-01-01

    Highlights: • A novel control method for the heating greenhouse with SWSHPS is proposed. • CFD is employed to predict the priorities of FCU loops for thermal performance. • EPM acts as an on-line tool to predict the total energy demand of the greenhouse. • The CFD–EPM-based method can save energy and improve control accuracy. • The energy savings potential is between 8.7% and 15.1%. - Abstract: As energy heating is one of the main production costs, many efforts have been made to reduce the energy consumption of agricultural greenhouses. Herein, a novel control method of greenhouse heating using computational fluid dynamics (CFD) and an energy prediction model (EPM) is proposed for energy savings and system performance. Based on the low-Reynolds number k–ε turbulence principle, a CFD model of the heating greenhouse is developed, applying the discrete ordinates model for the radiative heat transfers and a porous medium approach for plants, considering their sensible and latent heat exchanges. The CFD simulations have been validated, and used to analyze the greenhouse thermal performance and the priority of fan coil unit (FCU) loops under the various heating conditions. According to the heating efficiency and temperature uniformity, the priorities of each FCU loop can be predicted to generate a database with priorities for the control system. EPM is built up based on the thermal balance, and used to predict and optimize the energy demand of the greenhouse online. Combined with the priorities of FCU loops from CFD simulations offline, we have developed the CFD–EPM-based heating control system of greenhouse with surface water source heat pump system (SWSHPS). Compared with the conventional multi-zone independent control (CMIC) method, the energy savings potential is between 8.7% and 15.1%, and the control temperature deviation is decreased to between 0.1 °C and 0.6 °C in the investigated greenhouse. These results show the CFD–EPM-based method can improve system

  5. Mastery and Performance Goals Predict Epistemic and Relational Conflict Regulation

    Science.gov (United States)

    Darnon, Celine; Muller, Dominique; Schrager, Sheree M.; Pannuzzo, Nelly; Butera, Fabrizio

    2006-01-01

    The present research examines whether mastery and performance goals predict different ways of reacting to a sociocognitive conflict with another person over materials to be learned, an issue not yet addressed by the achievement goal literature. Results from 2 studies showed that mastery goals predicted epistemic conflict regulation (a conflict…

  6. Human Posture and Movement Prediction based on Musculoskeletal Modeling

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi

    2014-01-01

    Abstract This thesis explores an optimization-based formulation, so-called inverse-inverse dynamics, for the prediction of human posture and motion dynamics performing various tasks. It is explained how this technique enables us to predict natural kinematic and kinetic patterns for human posture...... and motion using AnyBody Modeling System (AMS). AMS uses inverse dynamics to analyze musculoskeletal systems and is, therefore, limited by its dependency on input kinematics. We propose to alleviate this dependency by assuming that voluntary postures and movement strategies in humans are guided by a desire...... expenditure, joint forces and other physiological properties derived from the detailed musculoskeletal analysis. Several attempts have been made to uncover the principles underlying motion control strategies in the literature. In case of some movements, like human squat jumping, there is almost no doubt...

  7. PatchSurfers: Two methods for local molecular property-based binding ligand prediction.

    Science.gov (United States)

    Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke

    2016-01-15

    Protein function prediction is an active area of research in computational biology. Function prediction can help biologists make hypotheses for characterization of genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting binding ligand molecules for a target protein is an important class because ligand binding events for a protein are usually closely intertwined with the proteins' biological function, and also because predicted binding ligands can often be directly tested by biochemical assays. Binding ligand prediction methods can be classified into two types: those which are based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods, Patch-Surfer, which is a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potentials. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a 3 dimensional function. Utilizing 3DZD for describing the physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which is more recently developed. Illustrative examples of PL-PatchSurfer performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Prediction of Protein Structural Class Based on Gapped-Dipeptides and a Recursive Feature Selection Approach

    Directory of Open Access Journals (Sweden)

    Taigang Liu

    2015-12-01

    Full Text Available The prior knowledge of protein structural class may offer useful clues on understanding its functionality as well as its tertiary structure. Though various significant efforts have been made to find a fast and effective computational approach to address this problem, it is still a challenging topic in the field of bioinformatics. The position-specific score matrix (PSSM profile has been shown to provide a useful source of information for improving the prediction performance of protein structural class. However, this information has not been adequately explored. To this end, in this study, we present a feature extraction technique which is based on gapped-dipeptides composition computed directly from PSSM. Then, a careful feature selection technique is performed based on support vector machine-recursive feature elimination (SVM-RFE. These optimal features are selected to construct a final predictor. The results of jackknife tests on four working datasets show that our method obtains satisfactory prediction accuracies by extracting features solely based on PSSM and could serve as a very promising tool to predict protein structural class.
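SVM-RFE ranks features by the magnitude of the classifier's weight vector and recursively drops the weakest. The sketch below substitutes a ridge-regularised linear fit for the SVM to stay dependency-free, and uses synthetic data rather than PSSM-derived gapped-dipeptide features; only the recursive-elimination pattern is the point:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 20 features, of which only features 0-2 carry signal.
n, d = 300, 20
y = rng.integers(0, 2, size=n) * 2 - 1      # labels in {-1, +1}
X = rng.normal(size=(n, d))
X[:, :3] += 1.5 * y[:, None]                # informative features

def linear_weights(X, y):
    """Fit a ridge-regularised linear classifier and return its weights."""
    A = X.T @ X + 1e-2 * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Recursive feature elimination: repeatedly drop the feature whose weight
# magnitude is smallest, as SVM-RFE does with the SVM weight vector.
active = list(range(d))
while len(active) > 3:
    w = linear_weights(X[:, active], y)
    worst = int(np.argmin(np.abs(w)))
    del active[worst]

print("selected features:", sorted(active))
```

Because the classifier is refit after every elimination, features that are only jointly informative can survive rounds that a one-shot filter ranking would discard them in; that refitting is what makes RFE "recursive".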

  9. Adaptive Granulation-Based Prediction for Energy System of Steel Industry.

    Science.gov (United States)

    Wang, Tianyu; Han, Zhongyang; Zhao, Jun; Wang, Wei

    2018-01-01

    The flow variation tendency of byproduct gas plays a crucial role for energy scheduling in the steel industry. An accurate prediction of its future trends will be significantly beneficial for the economic profits of a steel enterprise. In this paper, a long-term prediction model for the energy system is proposed by providing an adaptive granulation-based method that considers the production semantics involved in the fluctuation tendency of the energy data, and partitions them into a series of information granules. To fully reflect the corresponding data characteristics of the formed unequal-length temporal granules, a 3-D feature space consisting of the timespan, the amplitude and the linetype is designed as linguistic descriptors. In particular, a collaborative-conditional fuzzy clustering method is proposed to granularize the tendency-based feature descriptors and specifically measure the amplitude variation of industrial data, which plays a dominant role in the feature space. To quantify the performance of the proposed method, a series of real-world industrial data coming from the energy data center of a steel plant is employed to conduct the comparative experiments. The experimental results demonstrate that the proposed method successfully satisfies the requirements of practically viable prediction.
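The granulation step rests on fuzzy clustering. A plain fuzzy c-means loop on synthetic 1-D amplitude data gives the flavour; the paper's collaborative-conditional variant adds coupling and conditioning terms that are omitted in this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic "granules" of 1-D amplitude features (hypothetical data).
x = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(5.0, 0.3, 100)])
X = x[:, None]

def fuzzy_c_means(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means with fuzzifier m; memberships in U sum to 1."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)             # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, U

centers, U = fuzzy_c_means(X)
print("cluster centers:", np.sort(centers.ravel()))
```

The soft memberships, rather than hard assignments, are what let unequal-length temporal granules contribute partially to several prototypes at once.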

  10. Reliable Prediction of Insulin Resistance by a School-Based Fitness Test in Middle-School Children

    Directory of Open Access Journals (Sweden)

    Todd Varness

    2009-01-01

    Full Text Available Objectives. (1) Determine the predictive value of a school-based test of cardiovascular fitness (CVF) for insulin resistance (IR); (2) compare a “school-based” prediction of IR to a “laboratory-based” prediction, using various measures of fitness and body composition. Methods. Middle school children (n=82) performed the Progressive Aerobic Cardiovascular Endurance Run (PACER), a school-based CVF test, and underwent evaluation of maximal oxygen consumption by treadmill testing (VO2 max), body composition (percent body fat and BMI z score), and IR (derived homeostasis model assessment index [HOMAIR]). Results. PACER showed a strong correlation with VO2 max/kg (rs = 0.83, P<.001) and with HOMAIR (rs = −0.60, P<.001). Multivariate regression analysis revealed that a school-based model (using PACER and BMI z score) predicted IR similar to a laboratory-based model (using VO2 max/kg of lean body mass and percent body fat). Conclusions. The PACER is a valid school-based test of CVF, is predictive of IR, and has a similar relationship to IR when compared to complex laboratory-based testing. Simple school-based measures of childhood fitness (PACER) and fatness (BMI z score) could be used to identify childhood risk for IR and evaluate interventions.
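The HOMA-IR index used as the outcome above has a standard closed form in conventional units (fasting glucose in mg/dL times fasting insulin in µU/mL, divided by 405):

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (HOMA-IR),
    conventional-unit form: glucose (mg/dL) x insulin (uU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Example: fasting glucose 90 mg/dL and insulin 10 uU/mL
print(round(homa_ir(90, 10), 2))  # -> 2.22
```

If glucose is measured in mmol/L instead, the denominator becomes 22.5; the two forms are equivalent up to the unit conversion.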

  11. Axisymmetric thrust-vectoring nozzle performance prediction

    International Nuclear Information System (INIS)

    Wilson, E. A.; Adler, D.; Bar-Yoseph, P.Z

    1998-01-01

    Throat-hinged geometrically variable converging-diverging thrust-vectoring nozzles directly affect the jet flow geometry and rotation angle at the nozzle exit as a function of the nozzle geometry, the nozzle pressure ratio and flight velocity. The consideration of nozzle divergence in the effective-geometric nozzle relation is theoretically considered here for the first time. In this study, an explicit calculation procedure is presented as a function of nozzle geometry at constant nozzle pressure ratio, zero velocity and altitude, and compared with experimental results in a civil thrust-vectoring scenario. This procedure may be used in dynamic thrust-vectoring nozzle design performance predictions or analysis for civil and military nozzles as well as in the definition of initial jet flow conditions in future numerical VSTOL/TV jet performance studies
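The abstract does not reproduce its calculation procedure, but the quasi-one-dimensional backbone of converging-diverging nozzle analysis is the isentropic area-Mach relation. A sketch that evaluates it and inverts it for the supersonic (diverging) branch by bisection; treat this as a textbook relation, not the authors' explicit procedure:

```python
import math

def area_ratio(M: float, gamma: float = 1.4) -> float:
    """Isentropic A/A* as a function of Mach number."""
    t = (2 / (gamma + 1)) * (1 + 0.5 * (gamma - 1) * M * M)
    return t ** ((gamma + 1) / (2 * (gamma - 1))) / M

def exit_mach(ar: float, gamma: float = 1.4) -> float:
    """Supersonic-branch Mach number for a given A_exit/A_throat (bisection)."""
    lo, hi = 1.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ar:   # A/A* increases with M for M > 1
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Known check case: A/A* = 1.6875 corresponds to M = 2 for gamma = 1.4.
print(round(exit_mach(1.6875), 3))  # -> 2.0
```

The same relation has a second, subsonic root for every area ratio above 1, which is why the solver must commit to one branch.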

  12. Power capability prediction for lithium-ion batteries based on multiple constraints analysis

    International Nuclear Information System (INIS)

    Pan, Rui; Wang, Yujie; Zhang, Xu; Yang, Duo; Chen, Zonghai

    2017-01-01

    Highlights: • Multiple constraints for peak power capability prediction are deeply analyzed. • Multi-limited method is proposed for the peak power capability prediction of LIBs. • The EKF is used for the model-based peak power capability prediction. • The FUDS and UDDS profiles are executed to evaluate the proposed method. - Abstract: The power capability of the lithium-ion battery is a key performance indicator for electric vehicles, and it is intimately correlated with the acceleration, regenerative braking and gradient climbing power requirements. Therefore, an accurate power capability or state-of-power prediction is critical to a battery management system, which can help the battery to work in a suitable area and prevent the battery from over-charging and over-discharging. However, the power capability is easily affected by dynamic load, voltage variation and temperature. In this paper, three different constraints in power capability prediction are introduced, and the advantages and disadvantages of the three methods are deeply analyzed. Furthermore, a multi-limited approach for the power capability prediction is proposed, which can overcome the drawbacks of the three methods. Subsequently, the extended Kalman filter algorithm is employed for model-based state-of-power prediction. In order to verify the proposed method, diverse experiments are executed to explore the efficiency, robustness, and precision. The results indicate that the proposed method can improve the precision and robustness obviously.
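The multi-limited idea, taking the most restrictive of the current-, voltage- and SOC-based limits, can be sketched with a simple ohmic battery model. All parameter values are illustrative, and the paper's EKF-based state estimation (which would supply v_now, r_int and soc online) is not reproduced:

```python
def peak_discharge_power(v_now, v_min, i_max, r_int, soc, soc_min,
                         capacity_wh, dt_h):
    """Multi-limited state-of-power sketch: return the most restrictive of
    three discharge limits for a simple ohmic model v_term = v_now - i * r_int."""
    # Current-limited: discharge at the maximum allowed current.
    p_current = v_now * i_max
    # Voltage-limited: largest current that keeps the terminal voltage at v_min.
    i_volt = (v_now - v_min) / r_int
    p_voltage = v_min * i_volt
    # SOC-limited: energy available above soc_min, spread over the horizon dt_h.
    p_soc = max(soc - soc_min, 0.0) * capacity_wh / dt_h
    return min(p_current, p_voltage, p_soc)

p = peak_discharge_power(v_now=3.7, v_min=3.0, i_max=100.0, r_int=0.01,
                         soc=0.5, soc_min=0.1, capacity_wh=5000.0, dt_h=0.5)
print(f"peak discharge power: {p:.0f} W")
```

Taking the minimum is exactly what protects against the failure modes the abstract lists: any single-constraint estimate can overstate the available power whenever a different constraint happens to bind.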

  13. Churn prediction on huge telecom data using hybrid firefly based classification

    Directory of Open Access Journals (Sweden)

    Ammar A.Q. Ahmed

    2017-11-01

    Full Text Available Churn prediction in telecom has become a major requirement due to the increase in the number of telecom providers. However due to the hugeness, sparsity and imbalanced nature of the data, churn prediction in telecom has always been a complex task. This paper presents a metaheuristic based churn prediction technique that performs churn prediction on huge telecom data. A hybridized form of Firefly algorithm is used as the classifier. It has been identified that the compute intensive component of the Firefly algorithm is the comparison block, where every firefly is compared with every other firefly to identify the one with the highest light intensity. This component is replaced by Simulated Annealing and the classification process is carried out. Experiments were conducted on the Orange dataset. It was observed that Firefly algorithm works best on churn data and the hybridized Firefly algorithm provides effective and faster results.

  14. Comparison of nuclear safety research reactor (TRIGA-ACPR) performance with analytical prediction

    International Nuclear Information System (INIS)

    West, G.B.; Whittemore, W.L.

    1976-01-01

    The NSRR was taken critical on June 30, 1975 at the Japan Atomic Energy Research Institute - Tokai-mura, Japan. Following initial core loading and control rod calibration, a series of pulsing tests was performed to characterize the performance of the reactor. A comparison has been made of performance parameters actually measured in the 157 element core versus predicted values based upon design analyses. The nuclear parameters measured were quite close to prediction. A $4.70 pulse produced a minimum period of 1.12 msec, a peak power of 20,500 MW and yielded a prompt energy release of 103 MW-sec. Pulse tests with experimental UO2 fuel pins in the central irradiation cavity have produced 320 cal/gm, averaged at the axial center of 10% enriched UO2, for a 100 MW-sec pulse. The pulse rods for the NSRR contain B4C enriched to about 93 percent in Boron-10 in order to achieve maximum design performance with only three pulse rods. The total worth for the three transient rods was measured to be about $5.05 (vs $5.07 calculated for the 165 element core), thus verifying the effectiveness of the Boron-10 enrichment to achieve the desired result. Analysis of fuel temperature measurements made in the NSRR show that, for fuel temperatures produced during pulsing greater than 900 deg. C, heat transfer in the 0.010-inch gap between fuel and clad is enhanced by the minor outgassing of hydrogen which is characteristic of that temperature region. The hydrogen is normally all reabsorbed within about 100 sec of maximum temperature, at which time the heat transfer is characteristic of air (or argon) in the gap. In some of the temperature-instrumented elements, however, all of the hydrogen was not reabsorbed and as a result these elements gave significantly lower temperatures for high power steady state operation than were recorded prior to pulsing. In general, the NSRR parameters measured during startup were quite close to analytical prediction and the overall performance of the

  15. Predicting the Resiliency in Parents with Exceptional Children Based on Their Mindfulness

    Science.gov (United States)

    Jabbari, Sosan; Firoozabadi, Somayeh Sadati; Rostami, Sedighe

    2016-01-01

    The purpose of the present study was to predict the resiliency in parents with exceptional children based on their mindfulness. This descriptive correlational study was performed on 260 parents of students (105 male and 159 female) who were selected by the cluster sampling method. The family resiliency questionnaire (Sickby, 2005) and five aspect…

  16. Genome-Wide Polygenic Scores Predict Reading Performance Throughout the School Years.

    Science.gov (United States)

    Selzam, Saskia; Dale, Philip S; Wagner, Richard K; DeFries, John C; Cederlöf, Martin; O'Reilly, Paul F; Krapohl, Eva; Plomin, Robert

    2017-07-04

    It is now possible to create individual-specific genetic scores, called genome-wide polygenic scores (GPS). We used a GPS for years of education (EduYears) to predict reading performance assessed at UK National Curriculum Key Stages 1 (age 7), 2 (age 12) and 3 (age 14) and on reading tests administered at ages 7 and 12 in a UK sample of 5,825 unrelated individuals. EduYears GPS accounts for up to 5% of the variance in reading performance at age 14. GPS predictions remained significant after accounting for general cognitive ability and family socioeconomic status. Reading performance of children in the lowest and highest 12.5% of the EduYears GPS distribution differed by a mean growth in reading ability of approximately two school years. It seems certain that polygenic scores will be used to predict strengths and weaknesses in education.
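A genome-wide polygenic score itself is just a weighted sum of allele counts: each variant's GWAS effect size times the number of effect alleles (0, 1 or 2) the individual carries. With purely illustrative weights (not the EduYears values):

```python
import numpy as np

# Per-allele effect sizes from a GWAS (illustrative numbers only).
effect_sizes = np.array([0.020, -0.011, 0.005, 0.014])
# One individual's effect-allele counts at the same four variants.
allele_counts = np.array([2, 0, 1, 1])

# The polygenic score is the dot product of counts and effect sizes.
gps = float(allele_counts @ effect_sizes)
print(round(gps, 3))  # -> 0.059
```

Real GPS pipelines sum over hundreds of thousands of variants and typically standardise the resulting score within the sample before using it as a predictor.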

  17. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  18. RANS Based Methodology for Predicting the Influence of Leading Edge Erosion on Airfoil Performance

    Energy Technology Data Exchange (ETDEWEB)

    Langel, Christopher M. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; Chow, Raymond C. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; van Dam, C. P. [Univ. of California, Davis, CA (United States). Dept. of Mechanical and Aerospace Engineering; Maniaci, David Charles [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technologies Dept.

    2017-10-01

    The impact of surface roughness on flows over aerodynamically designed surfaces is of interest in a number of different fields. It has long been known that surface roughness will likely accelerate the laminar-turbulent transition process by creating additional disturbances in the boundary layer. However, there are very few tools available to predict the effects surface roughness will have on boundary layer flow. There are numerous implications of the premature appearance of a turbulent boundary layer. Increases in local skin friction, boundary layer thickness, and turbulent mixing can impact global flow properties, compounding the effects of surface roughness. With this motivation, an investigation into the effects of surface roughness on boundary layer transition has been conducted. The effort involved both an extensive experimental campaign, and the development of a high fidelity roughness model implemented in a RANS solver. Vast amounts of experimental data were generated at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel for the calibration and validation of the roughness model described in this work, as well as future efforts. The present work focuses on the development of the computational model including a description of the calibration process. The primary methodology presented introduces a scalar field variable and associated transport equation that interacts with a correlation based transition model. The additional equation allows for non-local effects of surface roughness to be accounted for downstream of rough wall sections while maintaining a "local" formulation. The scalar field is determined through a boundary condition function that has been calibrated to flat plate cases with sand grain roughness. The model was initially tested on a NACA 0012 airfoil with roughness strips applied to the leading edge. Further calibration of the roughness model was performed using results from the companion experimental study on a NACA 633-418 airfoil

  19. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    Science.gov (United States)

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%.
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data using sensing
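The Kalman-filter denoising stage of such a pipeline can be illustrated with a scalar random-walk filter; the process and measurement variances q and r below are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def kalman_denoise(z, q=1e-4, r=0.25):
    """Scalar random-walk Kalman filter: state x_k = x_{k-1} + w_k,
    measurement z_k = x_k + v_k, with process/measurement variances q, r."""
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p = p + q              # predict: uncertainty grows by the process noise
        k = p / (p + r)        # Kalman gain balances model vs. measurement
        x = x + k * (zk - x)   # update with the innovation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Sanity check: a noiseless constant 10 mm deformation reading passes through unchanged.
est = kalman_denoise(np.full(50, 10.0))
print(round(float(est[-1]), 3))  # -> 10.0
```

The filtered series would then feed the ARIMA stage for the linear trend and the GARCH stage for the remaining conditional-variance structure, in that order.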

  20. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    Directory of Open Access Journals (Sweden)

    Jingzhou Xin

    2018-01-01

    Full Text Available Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%.
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring system based on sensor data
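
The Kalman-then-ARIMA pipeline described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's implementation: the Kalman filter uses a simple random-walk state model, the ARIMA stage is reduced to an AR(2) least-squares fit, and the GARCH variance stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "deformation" series: slow drift plus a seasonal swing,
# with additive noise standing in for GNSS measurement error.
t = np.arange(500)
truth = 0.02 * t + 2.0 * np.sin(2 * np.pi * t / 120)
raw = truth + rng.normal(0.0, 0.8, t.size)

def kalman_denoise(z, q=0.05, r=0.64):
    """Scalar Kalman filter with a random-walk state model."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                 # predict: state variance grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (zi - x)      # update with the new measurement
        p = (1.0 - k) * p
        out[i] = x
    return out

smooth = kalman_denoise(raw)

# AR(2) fitted by least squares on the denoised series: a stand-in
# for the ARIMA stage of the pipeline.
X = np.column_stack([smooth[1:-1], smooth[:-2], np.ones(len(smooth) - 2)])
coef, *_ = np.linalg.lstsq(X, smooth[2:], rcond=None)
one_step = coef[0] * smooth[-1] + coef[1] * smooth[-2] + coef[2]

mae_raw = np.mean(np.abs(raw - truth))
mae_smooth = np.mean(np.abs(smooth - truth))
```

On this toy series the denoised track is closer to the true deformation than the raw measurements, and the AR(2) fit yields a one-step-ahead forecast; all parameter values are illustrative.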

  1. Preschoolers' precision of the approximate number system predicts later school mathematics performance.

    Science.gov (United States)

    Mazzocco, Michèle M M; Feigenson, Lisa; Halberda, Justin

    2011-01-01

    The Approximate Number System (ANS) is a primitive mental system of nonverbal representations that supports an intuitive sense of number in human adults, children, infants, and other animal species. The numerical approximations produced by the ANS are characteristically imprecise and, in humans, this precision gradually improves from infancy to adulthood. Throughout development, wide ranging individual differences in ANS precision are evident within age groups. These individual differences have been linked to formal mathematics outcomes, based on concurrent, retrospective, or short-term longitudinal correlations observed during the school age years. However, it remains unknown whether this approximate number sense actually serves as a foundation for these school mathematics abilities. Here we show that ANS precision measured at preschool, prior to formal instruction in mathematics, selectively predicts performance on school mathematics at 6 years of age. In contrast, ANS precision does not predict non-numerical cognitive abilities. To our knowledge, these results provide the first evidence for early ANS precision, measured before the onset of formal education, predicting later mathematical abilities.

  2. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operation of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of these conventional methods, the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network, together with two improved Support Vector Regression (SVR) approaches based on swarm intelligence optimization: Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best fit and accuracy among the representative forecast models, but also deliver better predictive performance when dealing with abnormal taxi-out time states.
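
As a rough sketch of the swarm-intelligence optimization underlying the two SVR variants, the following implements a plain PSO loop. The SVR cross-validation objective is replaced by a simple quadratic test function, and all parameter values (inertia, acceleration weights, swarm size) are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in objective: in the paper this would be the SVR validation
# error as a function of hyperparameters (e.g. log C, log gamma);
# here a quadratic bowl with its optimum at (2.0, -1.0).
def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

n_particles, n_iter, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                    # per-particle best positions
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()  # global best position

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

The swarm converges toward the objective's minimum; swapping in a real cross-validation score over SVR hyperparameters follows the same loop.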

  3. Markers of preparatory attention predict visual short-term memory performance.

    Science.gov (United States)

    Murray, Alexandra M; Nobre, Anna C; Stokes, Mark G

    2011-05-01

Visual short-term memory (VSTM) is limited in capacity. Therefore, it is important to encode only visual information that is most likely to be relevant to behaviour. Here we asked which aspects of selective biasing of VSTM encoding predict subsequent memory-based performance. We measured EEG during a selective VSTM encoding task, in which we parametrically varied the memory load and the precision of recall required to compare a remembered item to a subsequent probe item. On half the trials, a spatial cue indicated that participants only needed to encode items from one hemifield. We observed a typical sequence of markers of anticipatory spatial attention: early attention directing negativity (EDAN), anterior attention directing negativity (ADAN), and late directing attention positivity (LDAP), as well as a marker of VSTM maintenance: contralateral delay activity (CDA). We found that individual differences in preparatory brain activity (EDAN/ADAN) predicted cue-related changes in recall accuracy, indexed by memory-probe discrimination sensitivity (d'). Importantly, our parametric manipulation of memory-probe similarity also allowed us to model the behavioural data for each participant, providing estimates for the quality of the memory representation and the probability that an item could be retrieved. We found that selective encoding primarily increased the probability of accurate memory recall, and that ERP markers of preparatory attention predicted the cue-related changes in recall probability. Copyright © 2011. Published by Elsevier Ltd.

  4. Occupational-Specific Strength Predicts Astronaut-Related Task Performance in a Weighted Suit.

    Science.gov (United States)

    Taylor, Andrew; Kotarsky, Christopher J; Bond, Colin W; Hackney, Kyle J

    2018-01-01

    Future space missions beyond low Earth orbit will require deconditioned astronauts to perform occupationally relevant tasks within a planetary spacesuit. The prediction of time-to-completion (TTC) of astronaut tasks will be critical for crew safety, autonomous operations, and mission success. This exploratory study determined if the addition of task-specific strength testing to current standard lower body testing would enhance the prediction of TTC in a 1-G test battery. Eight healthy participants completed NASA lower body strength tests, occupationally specific strength tests, and performed six task simulations (hand drilling, construction wrenching, incline walking, collecting weighted samples, and dragging an unresponsive crewmember to safety) in a 48-kg weighted suit. The TTC for each task was recorded and summed to obtain a total TTC for the test battery. Linear regression was used to predict total TTC with two models: 1) NASA lower body strength tests; and 2) NASA lower body strength tests + occupationally specific strength tests. Total TTC of the test battery ranged from 20.2-44.5 min. The lower body strength test alone accounted for 61% of the variability in total TTC. The addition of hand drilling and wrenching strength tests accounted for 99% of the variability in total TTC. Adding occupationally specific strength tests (hand drilling and wrenching) to standard lower body strength tests successfully predicted total TTC in a performance test battery within a weighted suit. Future research should couple these strength tests with higher fidelity task simulations to determine the utility and efficacy of task performance prediction.Taylor A, Kotarsky CJ, Bond CW, Hackney KJ. Occupational-specific strength predicts astronaut-related task performance in a weighted suit. Aerosp Med Hum Perform. 2018; 89(1):58-62.
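
The two-model comparison reported above is an ordinary nested linear regression. A sketch on invented numbers (not the study's measurements) shows the mechanics of comparing variance explained with and without the task-specific strength predictors.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical illustration: predict total time-to-completion (TTC)
# from a lower-body strength score alone, then from lower-body plus
# a task-specific (drilling/wrenching) strength score. All values
# are synthetic; only the sample size mirrors the study (n = 8).
n = 8
lower = rng.normal(100.0, 15.0, n)       # lower-body strength score
task = rng.normal(50.0, 8.0, n)          # task-specific strength score
ttc = 60.0 - 0.2 * lower - 0.3 * task + rng.normal(0.0, 0.5, n)

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - resid.var() / y.var()

r2_lower = r_squared(lower[:, None], ttc)                 # model 1
r2_both = r_squared(np.column_stack([lower, task]), ttc)  # model 2
```

Adding the task-specific predictor can only raise in-sample R², mirroring the study's jump from 61% to 99% of variance explained; with n = 8, such gains should of course be interpreted cautiously.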

  5. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine

    Science.gov (United States)

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (errors < 6%), and accounting for caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562

  6. Calibration between Undergraduate Students' Prediction of and Actual Performance: The Role of Gender and Performance Attributions

    Science.gov (United States)

    Gutierrez, Antonio P.; Price, Addison F.

    2017-01-01

    This study investigated changes in male and female students' prediction and postdiction calibration accuracy and bias scores, and the predictive effects of explanatory styles on these variables beyond gender. Seventy undergraduate students rated their confidence in performance before and after a 40-item exam. There was an improvement in students'…

  7. On predicting student performance using low-rank matrix factorization techniques

    DEFF Research Database (Denmark)

    Lorenzen, Stephan Sloth; Pham, Dang Ninh; Alstrup, Stephen

    2017-01-01

    Predicting the score of a student is one of the important problems in educational data mining. The scores given by an individual student reflect how a student understands and applies the knowledge conveyed in class. A reliable performance prediction enables teachers to identify weak students...... that require remedial support, generate adaptive hints, and improve the learning of students. This work focuses on predicting the score of students in the quiz system of the Clio Online learning platform, the largest Danish supplier of online learning materials, covering 90% of Danish elementary schools...... and the current version of the data set is very sparse, the very low-rank approximation can capture enough information. This means that the simple baseline approach achieves similar performance compared to other advanced methods. In future work, we will restrict the quiz data set, e.g. only including quizzes...
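
The low-rank idea can be illustrated with a small stochastic-gradient factorization of a synthetic student-by-quiz score matrix. The dimensions, rank, sparsity, and learning rates below are illustrative and far from the Clio Online data set described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy student x quiz score matrix generated from a true rank-2
# structure, with ~70% of entries hidden to mimic sparsity.
n_students, n_quizzes, k = 40, 30, 2
U_true = rng.normal(0.0, 1.0, (n_students, k))
V_true = rng.normal(0.0, 1.0, (n_quizzes, k))
scores = U_true @ V_true.T
mask = rng.random(scores.shape) < 0.3      # observed entries only

# Rank-k factorization fitted by SGD on the observed entries.
U = rng.normal(0.0, 0.1, (n_students, k))
V = rng.normal(0.0, 0.1, (n_quizzes, k))
lr, reg = 0.02, 0.01
obs = np.argwhere(mask)
for _ in range(300):                       # epochs
    rng.shuffle(obs)
    for i, j in obs:
        err = scores[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * U[i] - reg * V[j])

# Error on the entries the model never saw (the "prediction" task).
rmse_hidden = np.sqrt(np.mean((scores[~mask] - (U @ V.T)[~mask]) ** 2))
```

Because the synthetic matrix is exactly low-rank, the hidden-entry error falls well below the spread of the scores; on real, noisier quiz data the attainable rank and accuracy would differ.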

  8. Evaluating the Predictive Power of Multivariate Tensor-based Morphometry in Alzheimers Disease Progression via Convex Fused Sparse Group Lasso.

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-21

Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications in decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity as well as temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information, and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve predictive performance of ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  9. Evaluating the predictive power of multivariate tensor-based morphometry in Alzheimer's disease progression via convex fused sparse group Lasso

    Science.gov (United States)

    Tsao, Sinchai; Gajawelli, Niharika; Zhou, Jiayu; Shi, Jie; Ye, Jieping; Wang, Yalin; Lepore, Natasha

    2014-03-01

Prediction of Alzheimer's disease (AD) progression based on baseline measures allows us to understand disease progression and has implications in decisions concerning treatment strategy. To this end we combine a predictive multi-task machine learning method [1] with a novel MR-based multivariate morphometric surface map of the hippocampus [2] to predict future cognitive scores of patients. Previous work by Zhou et al. [1] has shown that a multi-task learning framework that performs prediction of all future time points (or tasks) simultaneously can be used to encode both sparsity as well as temporal smoothness. They showed that this can be used in predicting cognitive outcomes of Alzheimer's Disease Neuroimaging Initiative (ADNI) subjects based on FreeSurfer-based baseline MRI features, MMSE score, demographic information, and ApoE status. Whilst volumetric information may hold generalized information on brain status, we hypothesized that hippocampus-specific information may be more useful in predictive modeling of AD. To this end, we applied Shi et al.'s [2] recently developed multivariate tensor-based morphometry (mTBM) parametric surface analysis method to extract features from the hippocampal surface. We show that by combining the power of the multi-task framework with the sensitivity of mTBM features of the hippocampus surface, we are able to significantly improve predictive performance of ADAS cognitive scores 6, 12, 24, 36 and 48 months from baseline.

  10. Extending Theory-Based Quantitative Predictions to New Health Behaviors.

    Science.gov (United States)

    Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O

    2016-04-01

Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.

  11. Assessing Therapist Competence: Development of a Performance-Based Measure and Its Comparison With a Web-Based Measure.

    Science.gov (United States)

    Cooper, Zafra; Doll, Helen; Bailey-Straebler, Suzanne; Bohn, Kristin; de Vries, Dian; Murphy, Rebecca; O'Connor, Marianne E; Fairburn, Christopher G

    2017-10-31

Recent research interest in how best to train therapists to deliver psychological treatments has highlighted the need for rigorous, but scalable, means of measuring therapist competence. There are at least two components involved in assessing therapist competence: the assessment of their knowledge of the treatment concerned, including how and when to use its strategies and procedures, and an evaluation of their ability to apply such knowledge skillfully in practice. While the assessment of therapists' knowledge has the potential to be completed efficiently on the Web, the assessment of skill has generally involved a labor-intensive process carried out by clinicians, and as such, may not be suitable for assessing training outcome in certain circumstances. The aims of this study were to develop and evaluate a role-play-based measure of skill suitable for assessing training outcome and to compare its performance with a highly scalable Web-based measure of applied knowledge. Using enhanced cognitive behavioral therapy (CBT-E) for eating disorders as an exemplar, clinical scenarios for role-play assessment were developed and piloted together with a rating scheme for assessing trainee therapists' performance. These scenarios were evaluated by examining the performance of 93 therapists from different professional backgrounds and at different levels of training in implementing CBT-E. These therapists also completed a previously developed Web-based measure of applied knowledge, and the ability of the Web-based measure to efficiently predict competence on the role-play measure was investigated. The role-play measure assessed performance at implementing a range of CBT-E procedures. The majority of the therapists rated their performance as moderately or closely resembling their usual clinical performance. Trained raters were able to achieve good-to-excellent reliability for averaged competence, with intraclass correlation coefficients ranging from .653 to .909.
The measure was…

  12. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with only slightly higher encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
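
The background-reference idea can be sketched with a per-pixel median background model on synthetic frames. The BMAP method itself is considerably more sophisticated (block classification, long-term references, difference-domain coding), so this is only a toy illustration of why a modeled background makes foreground pixels stand out.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 8x8 "surveillance" frames: a static background with
# sensor noise, plus a small bright object moving along a diagonal.
bg_true = rng.uniform(50.0, 100.0, (8, 8))
frames = np.stack([bg_true + rng.normal(0.0, 2.0, (8, 8))
                   for _ in range(50)])
for i in range(50):
    frames[i, i % 6, i % 6] += 80.0      # moving foreground object

# Crude background model: per-pixel median over the frame history
# (a stand-in for the paper's background modelling stage).
model = np.median(frames, axis=0)

# Residual against the background model ("background difference"):
# small for background pixels, large where the object currently is.
# In frame 49 the object sits at (49 % 6, 49 % 6) = (1, 1).
residual = np.abs(frames[-1] - model)
fg_mask = residual > 20.0
```

Coding the residual instead of the raw frame concentrates the bits on the single foreground block, which is the intuition behind using the modeled background as a long-term reference.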

  13. Football league win prediction based on online and league table data

    Science.gov (United States)

    Par, Prateek; Gupt, Ankit Kumar; Singh, Samarth; Khare, Neelu; Bhattachrya, Sweta

    2017-11-01

As we proceed towards an internet-driven world, the impact of the internet on our day-to-day lives keeps increasing. It leaves a mark not only on the virtual world but also on the real one. Social media sites contain huge amounts of information; the challenge is to collect the relevant data and analyse it to form real-world predictions, and it can do far more than that. In this paper we study the relationship between Twitter data and conventional data analysis for predicting the winning team in the NFL (National Football League). The prediction is based on data collected on the ongoing league, which includes the performance of each player and their previous statistics. Alongside the data available online, we combine Twitter data extracted from tweets pertaining to specific teams and games in the NFL season, and use it together with statistical game data to build predictive models for the outcome of a game, i.e., which team will win or lose given the available statistical data. Specifically, tweets within 24 hours of a match are considered, with the main focus on the last hours of tweets, i.e., pre-match and post-match Twitter data. Through these experiments we use Twitter data to try to improve the performance of existing predictive models that rely only on game statistics.

  14. A Predictive-Control-Based Over-Modulation Method for Conventional Matrix Converters

    DEFF Research Database (Denmark)

    Zhang, Guanguan; Yang, Jian; Sun, Yao

    2018-01-01

    To increase the voltage transfer ratio of the matrix converter and improve the input/output current performance simultaneously, an over-modulation method based on predictive control is proposed in this paper, where the weighting factor is selected by an automatic adjusting mechanism, which is able...... to further enhance the system performance promptly. This method has advantages like the maximum voltage transfer ratio can reach 0.987 in the experiments; the total harmonic distortion of the input and output current are reduced, and the losses in the matrix converter are decreased. Moreover, the specific...

  15. Fast reconstruction and prediction of frozen flow turbulence based on structured Kalman filtering

    NARCIS (Netherlands)

    Fraanje, P.R.; Rice, J.; Verhaegen, M.; Doelman, N.J.

    2010-01-01

    Efficient and optimal prediction of frozen flow turbulence using the complete observation history of the wavefront sensor is an important issue in adaptive optics for large ground-based telescopes. At least for the sake of error budgeting and algorithm performance, the evaluation of an accurate

  16. Predictability of depression severity based on posterior alpha oscillations.

    Science.gov (United States)

    Jiang, H; Popov, T; Jylänki, P; Bi, K; Yao, Z; Lu, Q; Jensen, O; van Gerven, M A J

    2016-04-01

We aimed to integrate neural data and an advanced machine learning technique to predict individual major depressive disorder (MDD) patient severity. MEG data was acquired from 22 MDD patients and 22 healthy controls (HC) resting awake with eyes closed. Individual power spectra were calculated by a Fourier transform. Sources were reconstructed via a beamforming technique. Bayesian linear regression was applied to predict depression severity based on the spatial distribution of oscillatory power. In MDD patients, decreased theta (4-8 Hz) and alpha (8-14 Hz) power was observed in fronto-central and posterior areas respectively, whereas increased beta (14-30 Hz) power was observed in fronto-central regions. In particular, posterior alpha power was negatively related to depression severity. The Bayesian linear regression model showed significant depression severity prediction performance based on the spatial distribution of both alpha (r=0.68, p=0.0005) and beta power (r=0.56, p=0.007) respectively. Our findings point to a specific alteration of oscillatory brain activity in MDD patients during rest as characterized from MEG data in terms of spectral and spatial distribution. The proposed model yielded a quantitative and objective estimation for the depression severity, which in turn has a potential for diagnosis and monitoring of the recovery process. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
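
A minimal sketch of Bayesian linear regression of a severity score on per-region band power follows, using synthetic data with an invented effect pattern (the study's MEG source data is not reproduced here). The posterior mean under a Gaussian prior is a ridge-type solution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: 40 subjects, 10 source regions; the "severity" score
# depends negatively on the power of a few regions, loosely echoing
# the posterior-alpha finding. All numbers are invented.
n, d = 40, 10
power = rng.normal(0.0, 1.0, (n, d))
w_true = np.zeros(d)
w_true[:3] = [-1.5, -1.0, 0.8]
severity = power @ w_true + rng.normal(0.0, 0.5, n)

# Bayesian linear regression: Gaussian prior N(0, 1/alpha) on the
# weights, noise precision beta. Posterior is Gaussian with:
alpha, beta = 1.0, 4.0
A = alpha * np.eye(d) + beta * power.T @ power   # posterior precision
mean_w = beta * np.linalg.solve(A, power.T @ severity)
cov_w = np.linalg.inv(A)                         # posterior covariance

pred = power @ mean_w
r = np.corrcoef(pred, severity)[0, 1]            # in-sample correlation
```

The posterior covariance `cov_w` also yields predictive uncertainty per subject, which is what makes the Bayesian variant attractive for small clinical samples.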

  17. Influence of covariate distribution on the predictive performance of pharmacokinetic models in paediatric research

    Science.gov (United States)

    Piana, Chiara; Danhof, Meindert; Della Pasqua, Oscar

    2014-01-01

    Aims The accuracy of model-based predictions often reported in paediatric research has not been thoroughly characterized. The aim of this exercise is therefore to evaluate the role of covariate distributions when a pharmacokinetic model is used for simulation purposes. Methods Plasma concentrations of a hypothetical drug were simulated in a paediatric population using a pharmacokinetic model in which body weight was correlated with clearance and volume of distribution. Two subgroups of children were then selected from the overall population according to a typical study design, in which pre-specified body weight ranges (10–15 kg and 30–40 kg) were used as inclusion criteria. The simulated data sets were then analyzed using non-linear mixed effects modelling. Model performance was assessed by comparing the accuracy of AUC predictions obtained for each subgroup, based on the model derived from the overall population and by extrapolation of the model parameters across subgroups. Results Our findings show that systemic exposure as well as pharmacokinetic parameters cannot be accurately predicted from the pharmacokinetic model obtained from a population with a different covariate range from the one explored during model building. Predictions were accurate only when a model was used for prediction in a subgroup of the initial population. Conclusions In contrast to current practice, the use of pharmacokinetic modelling in children should be limited to interpolations within the range of values observed during model building. Furthermore, the covariate point estimate must be kept in the model even when predictions refer to a subset different from the original population. PMID:24433411

  18. Comparison of predictive performance of data mining algorithms in predicting body weight in Mengali rams of Pakistan

    Directory of Open Access Journals (Sweden)

    Senol Celik

Full Text Available ABSTRACT The present study aimed at comparing the predictive performance of some data mining algorithms (CART, CHAID, Exhaustive CHAID, MARS, MLP, and RBF) on biometrical data of Mengali rams. To compare the predictive capability of the algorithms, the biometrical data regarding body measurements (body length, withers height, and heart girth) and testicular measurements (testicular length, scrotal length, and scrotal circumference) of Mengali rams were evaluated for predicting live body weight by goodness-of-fit criteria. In addition, age was considered as a continuous independent variable. In this context, the MARS data mining algorithm was used for the first time to predict body weight in two forms, without (MARS_1) and with (MARS_2) interaction terms. The superiority order in predictive accuracy of the algorithms was found to be CART > CHAID ≈ Exhaustive CHAID > MARS_2 > MARS_1 > RBF > MLP. Moreover, all tested algorithms provided strong predictive accuracy for estimating body weight. However, MARS was the only algorithm that generated a prediction equation for body weight. It is therefore hoped that these results might make a valuable contribution to predicting body weight, to describing the relationship between body weight and body and testicular measurements, to revealing breed standards, and to the conservation of indigenous gene sources for Mengali sheep breeding, making more profitable and productive sheep production possible. Use of data mining algorithms is useful for revealing the relationship between body weight and testicular traits in describing breed standards of Mengali sheep.

  19. An Entropy-Based Kernel Learning Scheme toward Efficient Data Prediction in Cloud-Assisted Network Environments

    Directory of Open Access Journals (Sweden)

    Xiong Luo

    2016-07-01

Full Text Available With the recent emergence of wireless sensor networks (WSNs) in the cloud computing environment, it is now possible to monitor and gather physical information via many sensor nodes to meet the requirements of cloud services. Generally, those sensor nodes collect data and send it to the sink node, where end-users can query all the information and realize cloud applications. Currently, one of the main disadvantages of the sensor nodes is their limited physical performance: little memory for storage and a constrained power source. Therefore, in order to work around these limitations, it is necessary to develop an efficient data prediction method in WSNs. To serve this purpose, by reducing the redundant data transmission between sensor nodes and the sink node while maintaining acceptable errors, this article proposes an entropy-based learning scheme for data prediction through the use of the kernel least mean square (KLMS) algorithm. The proposed scheme, called E-KLMS, develops a mechanism to keep the predicted data synchronous on both sides. Specifically, the kernel-based method adjusts the coefficients adaptively in accordance with every input, achieving better performance with smaller prediction errors, while information entropy is employed to remove the data which may cause relatively large errors. E-KLMS can effectively solve the trade-off between prediction accuracy and computational effort while greatly simplifying the training structure compared with some other data prediction approaches. Moreover, the kernel-based method and the entropy technique together ensure the prediction effect by both improving accuracy and reducing errors. Experiments with real data sets have been carried out to validate the efficiency and effectiveness of the E-KLMS learning scheme, and the results show the advantages of our method in prediction accuracy and computational time.
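
The KLMS core of the scheme can be sketched as follows, with the entropy-based pruning of harmful samples omitted. The noisy sine series and all parameters (kernel width, learning rate) are illustrative stand-ins for real sensor data.

```python
import numpy as np

rng = np.random.default_rng(5)

def gauss_kernel(centers, u, sigma=0.5):
    """Gaussian kernel between each stored center and the input u."""
    return np.exp(-np.sum((centers - u) ** 2, axis=-1) / (2 * sigma ** 2))

# Embed a noisy sine wave: predict x[t] from the previous 3 samples,
# mimicking one sensor stream predicted at the sink node.
t = np.arange(400)
x = np.sin(2 * np.pi * t / 40) + rng.normal(0.0, 0.05, t.size)
X = np.stack([x[i:i + 3] for i in range(len(x) - 3)])
y = x[3:]

eta = 0.5                        # learning rate
centers, weights, errs = [], [], []
for u, d in zip(X, y):
    if centers:
        f = np.dot(weights, gauss_kernel(np.array(centers), u))
    else:
        f = 0.0                  # empty dictionary: predict zero
    e = d - f                    # instantaneous prediction error
    errs.append(abs(e))
    centers.append(u)            # plain KLMS grows the dictionary
    weights.append(eta * e)      # each step (E-KLMS would prune here)
```

The errors shrink as the dictionary covers the signal's trajectory; the entropy criterion in E-KLMS would discard the samples that contribute large errors instead of storing every one.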

  20. Performance samples on academic tasks : improving prediction of academic performance

    NARCIS (Netherlands)

    Tanilon, Jenny

    2011-01-01

    This thesis is about the development and validation of a performance-based test, labeled as Performance Samples on academic tasks in Education and Child Studies (PSEd). PSEd is designed to identify students who are most able to perform the academic tasks involved in an Education and Child Studies

  1. Do physiological measures predict selected CrossFit(®) benchmark performance?

    Science.gov (United States)

    Butcher, Scotty J; Neyedly, Tyler J; Horvey, Karla J; Benko, Chad R

    2015-01-01

CrossFit(®) is a new but extremely popular method of exercise training and competition that involves constantly varied functional movements performed at high intensity. Despite the popularity of this training method, the physiological determinants of CrossFit performance have not yet been reported. The purpose of this study was to determine whether physiological and/or muscle strength measures could predict performance on three common CrossFit "Workouts of the Day" (WODs). Fourteen CrossFit Open or Regional athletes completed, on separate days, the WODs "Grace" (30 clean and jerks for time), "Fran" (three rounds of thrusters and pull-ups for 21, 15, and nine repetitions), and "Cindy" (20 minutes of rounds of five pull-ups, ten push-ups, and 15 bodyweight squats), as well as the "CrossFit Total" (1 repetition max [1RM] back squat, overhead press, and deadlift), maximal oxygen consumption (VO2max), and Wingate anaerobic power/capacity testing. Performance of Grace and Fran was related to whole-body strength (CrossFit Total) (r=-0.88 and -0.65, respectively) and anaerobic threshold (r=-0.61 and -0.53, respectively); however, whole-body strength was the only variable to survive the prediction regression for both of these WODs (R2=0.77 and 0.42, respectively). There were no significant associations or predictors for Cindy. CrossFit benchmark WOD performance cannot be predicted by VO2max, Wingate power/capacity, or either respiratory compensation or anaerobic thresholds. Of the data measured, only whole-body strength can partially explain performance on Grace and Fran, although anaerobic threshold also exhibited association with performance. Along with their typical training, CrossFit athletes should likely ensure an adequate level of strength and aerobic endurance to optimize performance on at least some benchmark WODs.

  2. Do physiological measures predict selected CrossFit® benchmark performance?

    Science.gov (United States)

    Butcher, Scotty J; Neyedly, Tyler J; Horvey, Karla J; Benko, Chad R

    2015-01-01

    Purpose CrossFit® is a new but extremely popular method of exercise training and competition that involves constantly varied functional movements performed at high intensity. Despite the popularity of this training method, the physiological determinants of CrossFit performance have not yet been reported. The purpose of this study was to determine whether physiological and/or muscle strength measures could predict performance on three common CrossFit “Workouts of the Day” (WODs). Materials and methods Fourteen CrossFit Open or Regional athletes completed, on separate days, the WODs “Grace” (30 clean and jerks for time), “Fran” (three rounds of thrusters and pull-ups for 21, 15, and nine repetitions), and “Cindy” (20 minutes of rounds of five pull-ups, ten push-ups, and 15 bodyweight squats), as well as the “CrossFit Total” (1 repetition max [1RM] back squat, overhead press, and deadlift), maximal oxygen consumption (VO2max), and Wingate anaerobic power/capacity testing. Results Performance of Grace and Fran was related to whole-body strength (CrossFit Total) (r=−0.88 and −0.65, respectively) and anaerobic threshold (r=−0.61 and −0.53, respectively); however, whole-body strength was the only variable to survive the prediction regression for both of these WODs (R2=0.77 and 0.42, respectively). There were no significant associations or predictors for Cindy. Conclusion CrossFit benchmark WOD performance cannot be predicted by VO2max, Wingate power/capacity, or either respiratory compensation or anaerobic thresholds. Of the data measured, only whole-body strength can partially explain performance on Grace and Fran, although anaerobic threshold also exhibited association with performance. Along with their typical training, CrossFit athletes should likely ensure an adequate level of strength and aerobic endurance to optimize performance on at least some benchmark WODs. PMID:26261428

  3. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    Science.gov (United States)

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023
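The most-likely-state decoding described above is the standard Viterbi computation on an HMM. A minimal sketch, assuming invented two-state placeholder matrices rather than the paper's SVI-trained model:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence.

    obs: observation symbol indices; pi: initial state probabilities;
    A: state transition matrix; B: emission matrix (states x symbols).
    """
    n_states = len(pi)
    T = len(obs)
    logd = np.full((T, n_states), -np.inf)   # best log-prob of a path ending in each state
    back = np.zeros((T, n_states), dtype=int)
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logd[t - 1] + np.log(A[:, j])
            back[t, j] = np.argmax(scores)
            logd[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    # Backtrack from the best final state.
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a toy two-state ("stable"/"at-risk") model, `viterbi([1, 1, 1], ...)` recovers the at-risk state throughout when observations strongly favor it.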

  4. A sequence-based dynamic ensemble learning system for protein ligand-binding site prediction

    KAUST Repository

    Chen, Peng

    2015-12-03

    Background: Proteins have the fundamental ability to selectively bind to other molecules and perform specific functions through such interactions, such as protein-ligand binding. Accurate prediction of protein residues that physically bind to ligands is important for drug design and protein docking studies. Most of the successful protein-ligand binding predictions were based on known structures. However, structural information is largely unavailable in practice due to the huge gap between the number of known protein sequences and the number of experimentally solved structures.

  5. A sequence-based dynamic ensemble learning system for protein ligand-binding site prediction

    KAUST Repository

    Chen, Peng; Hu, ShanShan; Zhang, Jun; Gao, Xin; Li, Jinyan; Xia, Junfeng; Wang, Bing

    2015-01-01

    Background: Proteins have the fundamental ability to selectively bind to other molecules and perform specific functions through such interactions, such as protein-ligand binding. Accurate prediction of protein residues that physically bind to ligands is important for drug design and protein docking studies. Most of the successful protein-ligand binding predictions were based on known structures. However, structural information is largely unavailable in practice due to the huge gap between the number of known protein sequences and the number of experimentally solved structures.

  6. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    Science.gov (United States)

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
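A regularized logistic regression of the kind named above can be sketched with a plain L2-penalized gradient-descent fit; the data, learning rate, and penalty below are illustrative placeholders, not the paper's model:

```python
import numpy as np

def fit_logreg(X, y, lam=1.0, lr=0.1, iters=2000):
    """L2-regularized logistic regression via gradient descent.
    Returns the weight vector w (last entry is the bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        grad = Xb.T @ (p - y) / len(y)          # log-loss gradient
        grad[:-1] += lam * w[:-1] / len(y)      # penalize weights, not the bias
        w -= lr * grad
    return w

def predict(X, w):
    """Hard 0/1 predictions at the 0.5 probability threshold."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

On a separable toy problem (e.g., solve attempts vs. success), the fit recovers a decision boundary between the classes.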

  7. State-Space Modeling and Performance Analysis of Variable-Speed Wind Turbine Based on a Model Predictive Control Approach

    Directory of Open Access Journals (Sweden)

    H. Bassi

    2017-04-01

    Full Text Available Advancements in wind energy technologies have led wind turbines from fixed speed to variable speed operation. This paper introduces an innovative version of a variable-speed wind turbine based on a model predictive control (MPC) approach. The proposed approach provides maximum power point tracking (MPPT), whose main objective is to capture the maximum wind energy in spite of the variable nature of the wind’s speed. The proposed MPC approach also reduces the constraints of the two main functional parts of the wind turbine: the full load and partial load segments. The pitch angle for full load and the rotating force for the partial load have been fixed concurrently in order to balance power generation as well as to reduce the operations of the pitch angle. A mathematical analysis of the proposed system using the state-space approach is introduced. The simulation results using MATLAB/SIMULINK show that the performance of the wind turbine with the MPC approach is improved compared to the traditional PID controller in both low and high wind speeds.
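Receding-horizon control of the kind applied here can be illustrated on a deliberately simplified scalar linear plant; this generic sketch (system, horizon, input limit, and penalty are invented placeholders, not the turbine model) solves a regularized least-squares tracking problem at each step and applies only the first input:

```python
import numpy as np

def mpc_step(x, x_ref, a, b, horizon=10, u_max=1.0, rho=0.01):
    """One receding-horizon step for the scalar system x+ = a*x + b*u.
    Minimizes tracking error over the horizon plus a small input penalty,
    then applies only the first input (clipped to actuator limits)."""
    # Predictions are linear in the inputs: x_{k+1} = a^(k+1) x + sum_j a^(k-j) b u_j.
    G = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    free = np.array([a ** (k + 1) * x for k in range(horizon)])  # input-free response
    target = np.full(horizon, x_ref) - free
    # Regularized least squares: (G^T G + rho I) u = G^T target.
    u = np.linalg.solve(G.T @ G + rho * np.eye(horizon), G.T @ target)
    return float(np.clip(u[0], -u_max, u_max))
```

Iterating the closed loop drives the state to the reference; a full MPC adds state constraints and a disturbance model, which this sketch omits.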

  8. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.

    Science.gov (United States)

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques

    2016-10-01

    Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (prediction errors as low as 6%), and accounting for caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
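The multiplicative structure described, a dose-dependent caffeine factor applied to the caffeine-free performance estimate, can be sketched as follows; the absorption/elimination constants and amplitude are made-up placeholders, not the UMP's calibrated pharmacokinetic parameters:

```python
import numpy as np

# Illustrative only: ka, ke, and amp are invented placeholder coefficients,
# not the calibrated values from the UMP caffeine model.
def caffeine_factor(dose_mg, t_hours, ka=2.0, ke=0.2, amp=0.003):
    """Dose-dependent multiplicative factor (< 1 reduces predicted impairment).
    A first-order absorption/elimination term shapes the time course."""
    conc = dose_mg * (np.exp(-ke * t_hours) - np.exp(-ka * t_hours))
    return 1.0 / (1.0 + amp * conc)   # shrinks impairment while caffeine is active

def impaired_lapses(baseline_lapses, dose_mg, t_hours):
    """Multiply the caffeine-free impairment prediction by the factor."""
    return baseline_lapses * caffeine_factor(dose_mg, t_hours)
```

The factor returns to 1 as the dose is eliminated, so the caffeine-free prediction is recovered at long times.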

  9. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Full Text Available Capacity is an important design parameter for roundabouts, and it is a prerequisite for computing delay and queue length. Roundabout capacity has been studied for decades, and the empirical regression model and the gap-acceptance model are the two main methods to predict it. Based on gap-acceptance theory, by considering the effect of limited priority, especially the relationship between the limited priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compared the results of Raff’s method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the predicted capacities from the different models were compared with the observed capacity from field surveys, which verifies the performance of the proposed model.
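For reference, the classical full-priority gap-acceptance entry capacity that limited-priority models modify has a closed form under exponentially distributed circulating headways; the default critical gap and follow-up time below are illustrative values, not the paper's estimates:

```python
import math

def gap_acceptance_capacity(q_circ_vph, t_c=4.1, t_f=2.9):
    """Classical single-lane entry capacity estimate (veh/h).

    q_circ_vph: circulating flow (veh/h); t_c: critical gap (s);
    t_f: follow-up headway (s). Assumes exponential circulating headways
    and full priority for circulating traffic (no limited-priority effect).
    """
    q = q_circ_vph / 3600.0                      # circulating flow in veh/s
    if q == 0:
        return 3600.0 / t_f                      # saturation: one entry per follow-up headway
    return 3600.0 * q * math.exp(-q * t_c) / (1.0 - math.exp(-q * t_f))
```

Capacity falls monotonically as circulating flow grows, and at zero circulating flow it reduces to one entry per follow-up headway.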

  10. Dataset size and composition impact the reliability of performance benchmarks for peptide-MHC binding predictions

    DEFF Research Database (Denmark)

    Kim, Yohan; Sidney, John; Buus, Søren

    2014-01-01

    Background: It is important to accurately determine the performance of peptide: MHC binding predictions, as this enables users to compare and choose between different prediction methods and provides estimates of the expected error rate. Two common approaches to determine prediction performance...... are cross-validation, in which all available data are iteratively split into training and testing data, and the use of blind sets generated separately from the data used to construct the predictive method. In the present study, we have compared cross-validated prediction performances generated on our last...
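The cross-validation approach contrasted here, iteratively splitting all available data into training and testing sets, can be sketched with a simple fold generator (a generic sketch, not the benchmark's actual protocol):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Split indices 0..n-1 into k shuffled folds; each fold serves
    once as the test set while the rest form the training set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Every sample appears in exactly one test fold, which is what makes cross-validated performance estimates use all available data.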

  11. Remaining useful life prediction based on the Wiener process for an aviation axial piston pump

    Directory of Open Access Journals (Sweden)

    Xingjian Wang

    2016-06-01

    Full Text Available An aviation hydraulic axial piston pump’s degradation from comprehensive wear is a typical gradual failure model. Accurate wear prediction is difficult as random and uncertain characteristics must be factored into the estimation. The internal wear status of the axial piston pump is characterized by the return oil flow based on fault mechanism analysis of the main frictional pairs in the pump. The performance degradation model is described by the Wiener process to predict the remaining useful life (RUL) of the pump. Maximum likelihood estimation (MLE) is performed by utilizing the expectation maximization (EM) algorithm to estimate the initial parameters of the Wiener process, while recursive estimation is conducted utilizing the Kalman filter method to estimate the drift coefficient of the Wiener process. The RUL of the pump is then calculated according to the performance degradation model based on the Wiener process. Experimental results indicate that the return oil flow is a suitable characteristic for reflecting the internal wear status of the axial piston pump, and thus the Wiener process-based method may effectively predict the RUL of the pump.
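A Wiener degradation model X(t) = x0 + μt + σB(t) with positive drift admits a convenient closed form: the first-passage time to a failure threshold is inverse-Gaussian distributed with mean (threshold − x)/μ. A minimal sketch with invented parameter values, not the pump's identified coefficients:

```python
import numpy as np

def simulate_wiener(x0, mu, sigma, dt, steps, rng):
    """One degradation path of X(t) = x0 + mu*t + sigma*B(t),
    sampled at intervals of dt."""
    incs = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
    return x0 + np.cumsum(incs)

def mean_rul(x_now, threshold, mu):
    """Mean remaining useful life under the drift-dominated Wiener model:
    the first-passage time to the threshold is inverse-Gaussian with
    mean (threshold - x_now) / mu (valid for mu > 0)."""
    return (threshold - x_now) / mu
```

In the paper's setting the drift μ is itself re-estimated online (via the Kalman filter) as new return-oil-flow readings arrive, so the RUL estimate updates with each measurement.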

  12. Evoked EMG-based torque prediction under muscle fatigue in implanted neural stimulation

    Science.gov (United States)

    Hayashibe, Mitsuhiro; Zhang, Qin; Guiraud, David; Fattal, Charles

    2011-10-01

    In patients with complete spinal cord injury, fatigue occurs rapidly and there is no proprioceptive feedback regarding the current muscle condition. Therefore, it is essential to monitor the muscle state and assess the expected muscle response to improve the current FES system toward adaptive force/torque control in the presence of muscle fatigue. Our team implanted neural and epimysial electrodes in a complete paraplegic patient in 1999. We carried out a case study, in the specific case of implanted stimulation, in order to verify the corresponding torque prediction based on stimulus evoked EMG (eEMG) when muscle fatigue is occurring during electrical stimulation. Indeed, in implanted stimulation, the relationship between stimulation parameters and output torques is more stable than external stimulation in which the electrode location strongly affects the quality of the recruitment. Thus, the assumption that changes in the stimulation-torque relationship would be mainly due to muscle fatigue can be made reasonably. The eEMG was proved to be correlated to the generated torque during the continuous stimulation while the frequency of eEMG also decreased during fatigue. The median frequency showed a similar variation trend to the mean absolute value of eEMG. Torque prediction during fatigue-inducing tests was performed based on eEMG in model cross-validation where the model was identified using recruitment test data. The torque prediction, apart from the potentiation period, showed acceptable tracking performances that would enable us to perform adaptive closed-loop control through implanted neural stimulation in the future.

  13. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder

    Science.gov (United States)

    Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant’s treatment outcome may help during antidepressant’s selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. Consequently, a feature matrix was constructed involving time-frequency decomposition of EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed based on a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as the STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving delta and theta frequency bands may predict antidepressant’s treatment outcome for the MDD patients. PMID:28152063

  14. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.

    Science.gov (United States)

    Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant's treatment outcome may help during antidepressant's selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. Consequently, a feature matrix was constructed involving time-frequency decomposition of EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed based on a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as the STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving delta and theta frequency bands may predict antidepressant's treatment outcome for the MDD patients.
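The rank-based, ROC-driven feature selection step can be sketched generically; this uses the Mann-Whitney rank-sum identity for AUC and is a stand-in under stated assumptions, not the authors' exact criterion:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def rank_features(X, y, top=10):
    """Rank columns of X by class separation: AUC folded around 0.5
    so that discrimination in either direction counts."""
    scores = [abs(auc(X[:, j], y) - 0.5) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:top]
```

Keeping only the top-ranked columns gives the dimension-reduced matrix that a downstream classifier (here, logistic regression) is trained on.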

  15. Goal orientation and work role performance: predicting adaptive and proactive work role performance through self-leadership strategies.

    Science.gov (United States)

    Marques-Quinteiro, Pedro; Curral, Luís Alberto

    2012-01-01

    This article explores the relationship between goal orientation, self-leadership dimensions, and adaptive and proactive work role performances. The authors hypothesize that learning orientation, in contrast to performance orientation, positively predicts proactive and adaptive work role performances and that this relationship is mediated by self-leadership behavior-focused strategies. Self-leadership natural reward strategies and thought pattern strategies are expected to moderate this relationship. Workers (N = 108) from a software company participated in this study. As expected, learning orientation did predict adaptive and proactive work role performance. Moreover, in the relationship between learning orientation and proactive work role performance through self-leadership behavior-focused strategies, a moderated mediation effect was found for self-leadership natural reward and thought pattern strategies. Finally, the results and their implications are discussed and future research directions are proposed.

  16. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    Science.gov (United States)

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods are well researched for cases where sufficient run-to-failure condition monitoring data are available, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation through failure. Only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To resolve these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three steps: window size determination by the increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that it does not need to assume the degradation trajectory follows a certain distribution. The other is that it can adapt to variation of degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
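The rolling-prediction step can be illustrated with a deliberately simplified stand-in: a fixed-size sliding window with straight-line extrapolation, rather than the paper's dynamically sized window and neural network:

```python
import numpy as np

def rolling_forecast(series, window=5, horizon=1):
    """Sliding-window forecasts: fit a straight line to the most recent
    `window` points and extrapolate `horizon` steps ahead."""
    preds = []
    for end in range(window, len(series)):
        t = np.arange(window)
        slope, intercept = np.polyfit(t, series[end - window:end], 1)
        preds.append(intercept + slope * (window - 1 + horizon))
    return np.array(preds)
```

In the paper, the window size is chosen from the indicator's increasing rate and re-set at detected change points, which is what lets the predictor follow trajectory changes that a fixed window would smear out.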

  17. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI). In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences) and structure information (protein and RNA secondary structures). This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  18. A Meta-Path-Based Prediction Method for Human miRNA-Target Association

    Directory of Open Access Journals (Sweden)

    Jiawei Luo

    2016-01-01

    Full Text Available MicroRNAs (miRNAs) are short noncoding RNAs that play important roles in regulating gene expression, and perturbed miRNAs are often associated with development and tumorigenesis as they have effects on their target mRNA. Predicting potential miRNA-target associations from multiple types of genomic data is an important problem in bioinformatics research. However, most of the existing methods do not fully use the experimentally validated miRNA-mRNA interactions. Here, we developed RMLM and RMLMSe to predict the relationship between miRNAs and their targets. RMLM and RMLMSe are global approaches as they can reconstruct the missing associations for all the miRNA-target pairs simultaneously, and RMLMSe demonstrates that the integration of sequence information can improve the performance of RMLM. In RMLM, we use the RM measure to evaluate different relatedness between a miRNA and its target based on different meta-paths; logistic regression and the MLE method are employed to estimate the weight of different meta-paths. In RMLMSe, sequence information is utilized to improve the performance of RMLM. Here, we carry out fivefold cross-validation and pathway enrichment analysis to evaluate the performance of our methods. The fivefold experiments show that our methods have higher AUC scores compared with other methods, and the integration of sequence information can improve the performance of miRNA-target association prediction.

  19. Spectral analysis-based risk score enables early prediction of mortality and cerebral performance in patients undergoing therapeutic hypothermia for ventricular fibrillation and comatose status

    Science.gov (United States)

    Filgueiras-Rama, David; Calvo, Conrado J.; Salvador-Montañés, Óscar; Cádenas, Rosalía; Ruiz-Cantador, Jose; Armada, Eduardo; Rey, Juan Ramón; Merino, J.L.; Peinado, Rafael; Pérez-Castellano, Nicasio; Pérez-Villacastín, Julián; Quintanilla, Jorge G.; Jiménez, Santiago; Castells, Francisco; Chorro, Francisco J.; López-Sendón, J.L.; Berenfeld, Omer; Jalife, José; López de Sá, Esteban; Millet, José

    2017-01-01

    Background Early prognosis in comatose survivors after cardiac arrest due to ventricular fibrillation (VF) is unreliable, especially in patients undergoing mild hypothermia. We aimed at developing a reliable risk-score to enable early prediction of cerebral performance and survival. Methods Sixty-one out of 239 consecutive patients undergoing mild hypothermia after cardiac arrest, with eventual return of spontaneous circulation (ROSC), and comatose status on admission fulfilled the inclusion criteria. Background clinical variables, VF time and frequency domain fundamental variables were considered. The primary and secondary outcomes were a favorable neurological performance (FNP) during hospitalization and survival to hospital discharge, respectively. The predictive model was developed in a retrospective cohort (n=32; September 2006–September 2011, 48.5 ± 10.5 months of follow-up) and further validated in a prospective cohort (n = 29; October 2011–July 2013, 5 ± 1.8 months of follow-up). Results FNP was present in 16 (50.0%) and 21 patients (72.4%) in the retrospective and prospective cohorts, respectively. Seventeen (53.1%) and 21 patients (72.4%), respectively, survived to hospital discharge. Both outcomes were significantly associated (p < 0.001). Retrospective multivariate analysis provided a prediction model (sensitivity= 0.94, specificity = 1) that included spectral dominant frequency, derived power density and peak ratios between high and low frequency bands, and the number of shocks delivered before ROSC. Validation on the prospective cohort showed sensitivity = 0.88 and specificity = 0.91. A model-derived risk-score properly predicted 93% of FNP. Testing the model on follow-up showed a c-statistic ≥ 0.89. Conclusions A spectral analysis-based model reliably correlates time-dependent VF spectral changes with acute cerebral injury in comatose survivors undergoing mild hypothermia after cardiac arrest. PMID:25828128

  20. Explicit MPC design and performance-based tuning of an Adaptive Cruise Control Stop-&-Go

    NARCIS (Netherlands)

    Naus, G.J.L.; Ploeg, J.; Molengraft, M.J.G. van de; Steinbuch, M.

    2008-01-01

    This paper presents the synthesis, the implementation and the performance-based tuning of an Adaptive Cruise Control (ACC) Stop-&-Go (S&G) design. A Model Predictive Control (MPC) framework is adopted to design the controller. Performance of the controller is evaluated, distinguishing between

  1. Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.

    Science.gov (United States)

    Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H

    2017-07-01

    Extracranial robotic radiotherapy employs external markers and a correlation model to trace the tumor motion caused by respiration. The real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. All the existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, the pLRF-ELM performs prediction by modeling the higher-level features obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of directly modeling the raw respiratory motion. The developed method is evaluated using a dataset acquired from 31 patients for two horizons in line with the latencies of treatment systems like CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
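The backbone of any ELM, including the pLRF-ELM variant above, is a fixed random hidden layer followed by a closed-form ridge readout. A minimal plain-ELM regression sketch (layer size, ridge strength, and the test signal are illustrative, and the local-receptive-field mapping is omitted):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, ridge=1e-3, seed=0):
    """Extreme learning machine: random untrained hidden layer,
    closed-form ridge-regression readout."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random feature space
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map inputs through the fixed random layer, apply the readout."""
    return np.tanh(X @ W + b) @ beta
```

Because only the linear readout is solved for, training is a single linear solve, which is what makes ELM-family predictors fast enough for real-time latency compensation.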

  2. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    Science.gov (United States)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least square model and the autoregressive technique, etc. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network - general regression neural network (GRNN) model to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
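A GRNN is, in essence, a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson estimator), with the kernel width as its only tuning parameter. A minimal sketch on invented data, not the LOD series itself:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network prediction: each query is a
    Gaussian-kernel weighted average of all training targets."""
    # Squared distances between every query point and every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)
```

There is no iterative training at all, which is one reason GRNN-style predictors are attractive when the underlying series (here, LOD change) is non-linear but data are limited.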

  3. Comparison of classification methods for voxel-based prediction of acute ischemic stroke outcome following intra-arterial intervention

    Science.gov (United States)

    Winder, Anthony J.; Siemonsen, Susanne; Flottmann, Fabian; Fiehler, Jens; Forkert, Nils D.

    2017-03-01

    Voxel-based tissue outcome prediction in acute ischemic stroke patients is highly relevant for both clinical routine and research. Previous research has shown that features extracted from baseline multi-parametric MRI datasets have a high predictive value and can be used for the training of classifiers, which can generate tissue outcome predictions for both intravenous and conservative treatments. However, with the recent advent and popularization of intra-arterial thrombectomy treatment, novel research specifically addressing the utility of predictive classifiers for thrombectomy intervention is necessary for a holistic understanding of current stroke treatment options. The aim of this work was to develop three clinically viable tissue outcome prediction models using approximate nearest-neighbor, generalized linear model, and random decision forest approaches and to evaluate the accuracy of predicting tissue outcome after intra-arterial treatment. Therefore, the three machine learning models were trained, evaluated, and compared using datasets of 42 acute ischemic stroke patients treated with intra-arterial thrombectomy. Classifier training utilized eight voxel-based features extracted from baseline MRI datasets and five global features. Evaluation of classifier-based predictions was performed via comparison to the known tissue outcome, which was determined in follow-up imaging, using the Dice coefficient and leave-one-patient-out cross validation. The random decision forest prediction model led to the best tissue outcome predictions with a mean Dice coefficient of 0.37. The approximate nearest-neighbor and generalized linear model performed equally suboptimally with average Dice coefficients of 0.28 and 0.27, respectively, suggesting that both non-linearity and machine learning are desirable properties of a classifier well-suited to the intra-arterial tissue outcome prediction problem.
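The Dice coefficient used for evaluation above compares a predicted lesion mask with the follow-up ground-truth mask; a minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks (1 = lesion voxel):
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

Unlike voxel accuracy, Dice is insensitive to the large background of healthy tissue, which is why it is the standard overlap metric for lesion segmentation and outcome maps.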

  4. Stringent homology-based prediction of H. sapiens-M. tuberculosis H37Rv protein-protein interactions.

    Science.gov (United States)

    Zhou, Hufeng; Gao, Shangzhi; Nguyen, Nam Ninh; Fan, Mengyuan; Jin, Jingjing; Liu, Bing; Zhao, Liang; Xiong, Geng; Tan, Min; Li, Shijun; Wong, Limsoon

    2014-04-08

    H. sapiens-M. tuberculosis H37Rv protein-protein interaction (PPI) data are essential for understanding the infection mechanism of the formidable pathogen M. tuberculosis H37Rv. Computational prediction is an important strategy to fill the gap in experimental H. sapiens-M. tuberculosis H37Rv PPI data. Homology-based prediction is frequently used in predicting both intra-species and inter-species PPIs. However, some limitations are not properly resolved in several published works that predict eukaryote-prokaryote inter-species PPIs using intra-species template PPIs. We develop a stringent homology-based prediction approach by taking into account (i) differences between eukaryotic and prokaryotic proteins and (ii) differences between inter-species and intra-species PPI interfaces. We compare our stringent homology-based approach to a conventional homology-based approach for predicting host-pathogen PPIs, based on cellular compartment distribution analysis, disease gene list enrichment analysis, pathway enrichment analysis and functional category enrichment analysis. These analyses support the validity of our prediction result, and clearly show that our approach has better performance in predicting H. sapiens-M. tuberculosis H37Rv PPIs. Using our stringent homology-based approach, we have predicted a set of highly plausible H. sapiens-M. tuberculosis H37Rv PPIs that may be useful for many related studies. Based on our analysis of the H. sapiens-M. tuberculosis H37Rv PPI network predicted by our stringent homology-based approach, we have discovered several interesting properties which are reported here for the first time. We find that both host proteins and pathogen proteins involved in the host-pathogen PPIs tend to be hubs in their own intra-species PPI networks. Also, both host and pathogen proteins involved in host-pathogen PPIs tend to have longer primary sequences, tend to have more domains, tend to be more hydrophilic, etc. 
And the protein domains from both

  5. Predicting Arithmetical Achievement from Neuro-Psychological Performance: A Longitudinal Study.

    Science.gov (United States)

    Fayol, Michel; Barrouillet, Pierre; Marinthe, Catherine

    1998-01-01

    Assessed whether the performance of 5- and 6-year-olds on arithmetic tests can be predicted from their performance on neuropsychological tests. Participants completed neuropsychological, drawing, and arithmetic tests at 5 and 6 years of age. Performance at the older age was accurately predicted by the results of the first evaluation. (LBT)

  6. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    Science.gov (United States)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When local approach methods are used to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method, which revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. In its first phase, the method exploits label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network. In the second phase, it further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for the gene function prediction problem based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated with the GO.
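The hierarchy constraint described above can be illustrated concretely: in a GO-like DAG, a term's predicted score must not exceed that of any of its ancestors. This toy sketch (the graph, scores, and function names are invented for illustration, not the paper's method) caps each node's score at the minimum of its parents' adjusted scores:

```python
# Toy sketch of enforcing the hierarchy (true-path) constraint on a
# GO-like DAG: a node's score is capped by the minimum of its parents'
# adjusted scores, so predictions stay consistent from root to leaves.

dag_parents = {          # child -> list of parents
    "root": [],
    "A": ["root"],
    "B": ["root"],
    "C": ["A", "B"],     # C has two parents: a DAG, not a tree
}

def enforce_hierarchy(scores: dict, parents: dict) -> dict:
    """Cap each node's score at min(parent scores), resolving parents first."""
    fixed = {}
    def resolve(node):
        if node in fixed:
            return fixed[node]
        cap = min((resolve(p) for p in parents[node]), default=1.0)
        fixed[node] = min(scores[node], cap)
        return fixed[node]
    for n in parents:
        resolve(n)
    return fixed

raw = {"root": 1.0, "A": 0.4, "B": 0.9, "C": 0.7}
print(enforce_hierarchy(raw, dag_parents))  # C is capped at 0.4 by parent A
```

The nodes interaction method additionally uses a Bayesian network over the label dependencies before this consistency step; the sketch shows only the constraint itself.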

  7. Application of Machine Learning Algorithms for the Query Performance Prediction

    Directory of Open Access Journals (Sweden)

    MILICEVIC, M.

    2015-08-01

    Full Text Available This paper analyzes the relationship between the system load/throughput and the query response time in a real online transaction processing (OLTP) system environment. Although OLTP systems are characterized by short transactions, which normally entail high availability and consistently short response times, the need for operational reporting may jeopardize these objectives. We suggest a new approach to performance prediction for concurrent database workloads, based on a system state vector consisting of 36 attributes. No assumption is made about the importance of particular attributes; instead, machine learning methods are used to determine which attributes best describe the behavior of the particular database server and how to model that system. During the learning phase, the system's profile is created using multiple reference queries, which are selected to represent frequent business processes. The possibility of accurate response time prediction may be a foundation for automated decision-making for database (DB) query scheduling. Possible applications of the proposed method include adaptive resource allocation, quality of service (QoS) management and real-time dynamic query scheduling (e.g. estimation of the optimal moment for a complex query execution).

  8. What predicts performance during clinical psychology training?

    OpenAIRE

    Scior, Katrina; Bradley, Caroline E; Potts, Henry W W; Woolf, Katherine; de C Williams, Amanda C

    2013-01-01

    Objectives While the question of who is likely to be selected for clinical psychology training has been studied, evidence on performance during training is scant. This study explored data from seven consecutive intakes of the UK's largest clinical psychology training course, aiming to identify what factors predict better or poorer outcomes. Design Longitudinal cross-sectional study using prospective and retrospective data. Method Characteristics at application were analysed in relation to a r...

  9. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that an ANN's point prediction lacks reliability, since the uncertainty of the predictions is not quantified, and this limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
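The two interval-quality objectives reported above, coverage and average width, are simple to compute from an ensemble of forecasts. A minimal NumPy sketch with invented numbers (not the study's data):

```python
import numpy as np

def interval_metrics(ensemble: np.ndarray, observed: np.ndarray):
    """Coverage (% of observations inside the ensemble envelope) and
    mean interval width: the two quantities traded off in stage 2."""
    lower = ensemble.min(axis=0)   # per-time-step envelope
    upper = ensemble.max(axis=0)
    coverage = 100.0 * np.mean((observed >= lower) & (observed <= upper))
    width = float(np.mean(upper - lower))
    return coverage, width

# toy ensemble of 3 flow forecasts (m3/s) over 4 time steps
ensemble = np.array([[10., 20., 30., 40.],
                     [12., 22., 28., 38.],
                     [ 9., 18., 33., 41.]])
observed = np.array([11., 25., 29., 40.])
cov, w = interval_metrics(ensemble, observed)
print(cov, w)  # 75.0 3.75 — the observation 25.0 falls outside [18, 22]
```

Widening the ensemble raises coverage but also width, which is why the study optimizes both objectives jointly.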

  10. Online physician ratings fail to predict actual performance on measures of quality, value, and peer review.

    Science.gov (United States)

    Daskivich, Timothy J; Houman, Justin; Fuller, Garth; Black, Jeanne T; Kim, Hyung L; Spiegel, Brennan

    2018-04-01

    Patients use online consumer ratings to identify high-performing physicians, but it is unclear if ratings are valid measures of clinical performance. We sought to determine whether online ratings of specialist physicians from 5 platforms predict quality of care, value of care, and peer-assessed physician performance. We conducted an observational study of 78 physicians representing 8 medical and surgical specialties. We assessed the association of consumer ratings with specialty-specific performance scores (metrics including adherence to Choosing Wisely measures, 30-day readmissions, length of stay, and adjusted cost of care), primary care physician peer-review scores, and administrator peer-review scores. Across ratings platforms, multivariable models showed no significant association between mean consumer ratings and specialty-specific performance scores (β-coefficient range, -0.04, 0.04), primary care physician scores (β-coefficient range, -0.01, 0.3), and administrator scores (β-coefficient range, -0.2, 0.1). There was no association between ratings and score subdomains addressing quality or value-based care. Among physicians in the lowest quartile of specialty-specific performance scores, only 5%-32% had consumer ratings in the lowest quartile across platforms. Ratings were consistent across platforms; a physician's score on one platform significantly predicted his/her score on another in 5 of 10 comparisons. Online ratings of specialist physicians do not predict objective measures of quality of care or peer assessment of clinical performance. Scores are consistent across platforms, suggesting that they jointly measure a latent construct that is unrelated to performance. Online consumer ratings should not be used in isolation to select physicians, given their poor association with clinical performance.

  11. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
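The scaling argument above can be made concrete with a toy primal training loop. This sketch uses a perceptron update rather than LIBLINEAR's actual solver, and the data are invented; the point is that linear-model training costs O(n·d) per pass over the data, which is why it scales to millions of compounds while a kernel SVM does not:

```python
def train_perceptron(X, y, epochs=50):
    """Toy primal training of a linear classifier (perceptron rule), a
    stand-in for LIBLINEAR-style linear training: each pass is O(n*d)."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * sum(wj * xj for wj, xj in zip(w, xi)) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                mistakes += 1
        if mistakes == 0:          # converged on separable data
            break
    return w

# toy "descriptor" vectors with +/-1 activity labels
X = [[2.0, 1.0], [1.0, -1.0], [-2.0, 1.0], [-1.0, -1.0]]
y = [1, 1, -1, -1]
w = train_perceptron(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else -1 for xi in X]
print(w, preds)  # → [2.0, 1.0] [1, 1, -1, -1]
```

A kernel method must instead evaluate kernels between pairs of training points, which grows roughly quadratically with dataset size, matching the paper's finding that non-linear kernels were infeasible at 1.2 million structures.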

  12. The clustering-based case-based reasoning for imbalanced business failure prediction: a hybrid approach through integrating unsupervised process with supervised process

    Science.gov (United States)

    Li, Hui; Yu, Jun-Ling; Yu, Le-An; Sun, Jie

    2014-05-01

    Case-based reasoning (CBR) is one of the main forecasting methods in business forecasting; it performs well in prediction and can give explanations for its results. In business failure prediction (BFP), the number of failed enterprises is relatively small compared with the number of non-failed ones, yet the loss is huge when an enterprise fails. It is therefore necessary to develop methods, trained on imbalanced samples, that forecast well for this small proportion of failed enterprises while remaining accurate in terms of total accuracy. Commonly used methods constructed on the assumption of balanced samples do not perform well in predicting minority samples on imbalanced samples consisting of the minority/failed enterprises and the majority/non-failed ones. This article develops a new method called clustering-based CBR (CBCBR), which integrates clustering analysis, an unsupervised process, with CBR, a supervised process, to enhance the efficiency of retrieving information from both minority and majority cases in CBR. In CBCBR, case classes are first generated through hierarchical clustering of the stored experienced cases, and class centres are calculated by integrating the information of the cases in the same clustered class. When predicting the label of a target case, its nearest clustered case class is first retrieved by ranking the similarities between the target case and each clustered case class centre. Then, the nearest neighbours of the target case in the determined clustered case class are retrieved. Finally, the labels of the nearest experienced cases are used in prediction. In an empirical experiment with two imbalanced samples from China, the performance of CBCBR was compared with classical CBR, a support vector machine, a logistic regression and a multivariate discriminant analysis. The results show that, compared with the other four methods, CBCBR performed significantly better in terms of sensitivity for identifying the
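The two-step retrieval described above (nearest class centre, then nearest neighbours within that class) can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the clusters are given directly rather than produced by hierarchical clustering, and the data are invented:

```python
import numpy as np

def cbcbr_retrieve(case_classes, labels, target, k=2):
    """CBCBR-style two-step retrieval sketch: pick the clustered case
    class whose centre is nearest to the target, then take the k nearest
    cases inside that class and vote on their labels."""
    # step 1: nearest class centre
    centres = [c.mean(axis=0) for c in case_classes]
    ci = int(np.argmin([np.linalg.norm(target - m) for m in centres]))
    # step 2: k nearest neighbours within the chosen class
    d = np.linalg.norm(case_classes[ci] - target, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [labels[ci][i] for i in nearest]
    return max(set(votes), key=votes.count)

# toy financial-ratio cases: a small "failed" class and a "healthy" class
failed  = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15]])
healthy = np.array([[0.90, 0.80], [0.80, 0.90]])
classes = [failed, healthy]
labels  = [["failed"] * 3, ["healthy"] * 2]
print(cbcbr_retrieve(classes, labels, np.array([0.18, 0.12])))  # → failed
```

Restricting the neighbour search to one clustered class is what keeps the minority (failed) cases from being swamped by majority cases during retrieval.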

  13. Performance in grade 12 mathematics and science predicts student nurses' performance in first year science modules at a university in the Western Cape.

    Science.gov (United States)

    Mthimunye, Katlego D T; Daniels, Felicity M

    2017-10-26

    The demand for highly qualified and skilled nurses is increasing in South Africa as well as around the world. Having a background in science can create a significant advantage for students wishing to enrol for an undergraduate nursing qualification, because nursing as a profession is grounded in scientific evidence. The aim of this study was to investigate the predictive validity of grade 12 mathematics and science for the academic performance of first-year student nurses in science modules. A quantitative research method using a cross-sectional predictive design was employed. The participants included first-year Bachelor of Nursing students enrolled at a university in the Western Cape, South Africa. Descriptive and inferential statistics were performed to analyse the data using the IBM Statistical Package for the Social Sciences, version 24. Descriptive analysis of all variables was performed, as well as Spearman's rank correlation test to describe the relationships among the study variables. Standard multiple linear regression analysis was performed to determine the predictive validity of grade 12 mathematics and science for the academic performance of first-year student nurses in science modules. The results of this study showed that grade 12 physical science is not a significant predictor (p > 0.062) of performance in first-year science modules. The multiple linear regression revealed that grade 12 mathematics and life science grades explained 37.1% to 38.1% (R2 = 0.381 and adj R2 = 0.371) of the variation in the first-year science grade distributions. Based on these results it is evident that performance in the grade 12 mathematics (β = 2.997) and life science (β = 3.175) subjects is a significant predictor (p < 0.001) of performance in first-year science modules for student nurses at the university identified for this study.
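The R² figures above quantify how much of the variance in first-year science grades the regression explains. A minimal least-squares sketch with invented grade data (not the study's dataset) shows the computation:

```python
import numpy as np

def ols_r2(X, y):
    """Fit y = Xb + e by ordinary least squares and return (R^2, beta):
    R^2 is the share of variance in y explained by the predictors."""
    X1 = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot, beta

# toy data: [maths %, life-science %] -> first-year science %
X = np.array([[60., 55.], [70., 65.], [80., 75.], [65., 70.], [75., 60.]])
y = np.array([58., 68., 79., 66., 69.])
r2, beta = ols_r2(X, y)
print(round(r2, 3))  # share of grade variance explained by the two subjects
```

An R² of 0.381, as reported, means roughly 38% of the variation in first-year science grades is accounted for by the two school subjects; the rest reflects other factors.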

  14. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Directory of Open Access Journals (Sweden)

    Alexander Domnich

    Full Text Available Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-squares estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.

  15. Plasmonic Light Trapping in Thin-Film Solar Cells: Impact of Modeling on Performance Prediction

    Directory of Open Access Journals (Sweden)

    Alberto Micco

    2015-06-01

    Full Text Available We present a comparative study of numerical models used to predict the absorption enhancement in thin-film solar cells due to the presence of structured back-reflectors that excite, at specific wavelengths, hybrid plasmonic-photonic resonances. To evaluate the effectiveness of the analyzed models, they have been applied in a case study: starting from a U-shaped textured glass, thin-film µc-Si:H solar cells have been successfully fabricated. The fabricated cells, with different intrinsic layer thicknesses, have been morphologically, optically and electrically characterized, and the experimental results have been compared with the numerical predictions. We have found that, in contrast to basic models based on the underlying schematics of the cell, numerical models that take into account the real morphology of the fabricated device are able to effectively predict the cells' performance in terms of both optical absorption and short-circuit current.

  16. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

    Full text: A fusion hybrid, i.e. a low-power tokamak reactor with a small fusion power output, is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and the testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small-fusion-power design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design use the empirical Mixed Bohm/gyro-Bohm (B/gB) core transport model, while the pedestal temperature model is based on magnetic and flow shear stabilization pedestal width scaling (δ ∝ ρζ²). The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization is carried out by varying parameters such as the plasma current and auxiliary heating power, which results in some improvement of the plasma performance.

  17. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    International Nuclear Information System (INIS)

    Shiraishi, Satomi; Moore, Kevin L.

    2016-01-01

    Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to the planning target volume (PTV) and organs-at-risk (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict the dose matrix, and the predictive accuracy was evaluated using the dose difference between the clinical plan and the prediction, δD = D_clin − D_pred. The mean (⟨δD_r⟩), standard deviation (σ_δD_r), and their interquartile range (IQR) for the training plans were evaluated at 2–3 mm intervals from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models trained using all plans in the cohorts were created for each treatment site; these models predict approximately the average quality of OAR sparing. By emphasizing during training a subset of plans that exhibited better-than-average OAR sparing, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified for which the model predicted that further sparing of the OARs was achievable. Replans were performed to test whether the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels, irrespective of organ delineation, ranged from −1% to 0% with a maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The average prediction error was less

  18. Computerized heat balance models to predict performance of operating nuclear power plants

    International Nuclear Information System (INIS)

    Breeding, C.L.; Carter, J.C.; Schaefer, R.C.

    1983-01-01

    The use of computerized heat balance models has greatly enhanced the decision making ability of TVA's Division of Nuclear Power. These models are utilized to predict the effects of various operating modes and to analyze changes in plant performance resulting from turbine cycle equipment modifications with greater speed and accuracy than was possible before. Computer models have been successfully used to optimize plant output by predicting the effects of abnormal condenser circulating water conditions. They were utilized to predict the degradation in performance resulting from installation of a baffle plate assembly to replace damaged low-pressure blading, thereby providing timely information allowing an optimal economic judgement as to when to replace the blading. Future use will be for routine performance test analysis. This paper presents the benefits of utility use of computerized heat balance models

  19. Predicting clinical symptoms of attention deficit hyperactivity disorder based on temporal patterns between and within intrinsic connectivity networks.

    Science.gov (United States)

    Wang, Xun-Heng; Jiao, Yun; Li, Lihua

    2017-10-24

    Attention deficit hyperactivity disorder (ADHD) is a common brain disorder with high prevalence in school-age children. Previously developed machine learning-based methods have discriminated patients with ADHD from normal controls by providing label information of the disease for individuals. Inattention and impulsivity are the two most significant clinical symptoms of ADHD. However, predicting clinical symptoms (i.e., inattention and impulsivity) is a challenging task based on neuroimaging data. The goal of this study is twofold: to build predictive models for clinical symptoms of ADHD based on resting-state fMRI and to mine brain networks for predictive patterns of inattention and impulsivity. To achieve this goal, a cohort of 74 boys with ADHD and a cohort of 69 age-matched normal controls were recruited from the ADHD-200 Consortium. Both structural and resting-state fMRI images were obtained for each participant. Temporal patterns between and within intrinsic connectivity networks (ICNs) were applied as raw features in the predictive models. Specifically, sample entropy was taken as an intra-ICN feature, and phase synchronization (PS) was used as an inter-ICN feature. The predictive models were based on the least absolute shrinkage and selection operator (LASSO) algorithm. The performance of the predictive model for inattention is r = 0.79 (p < 10⁻⁸), and the performance of the predictive model for impulsivity is r = 0.48 (p < 10⁻⁸). The ICN-related predictive patterns may provide valuable information for investigating the brain network mechanisms of ADHD. In summary, the predictive models for clinical symptoms could be beneficial for personalizing ADHD medications. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
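LASSO's appeal for this kind of symptom-score regression is that its L1 penalty drives the weights of uninformative features to zero, leaving a sparse, interpretable set of network features. A toy coordinate descent sketch with soft-thresholding, on synthetic data (this is a generic LASSO illustration, not the study's pipeline or features):

```python
import numpy as np

def lasso_cd(X, y, alpha=0.1, iters=200):
    """Toy LASSO via coordinate descent with soft-thresholding.
    Assumes columns of X are roughly standardized."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]        # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

# synthetic features: only the first one truly drives the "symptom" score
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(100)
w = lasso_cd(X, y)
print(np.round(w, 2))  # first weight near 2, the rest shrunk toward 0
```

In the study's setting, the surviving non-zero weights correspond to the intra- and inter-ICN features most predictive of inattention or impulsivity scores.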

  20. Ski jump takeoff performance predictions for a mixed-flow, remote-lift STOVL aircraft

    Science.gov (United States)

    Birckelbaw, Lourdes G.

    1992-01-01

    A ski jump model was developed to predict ski jump takeoff performance for a short takeoff and vertical landing (STOVL) aircraft. The objective was to verify the model with results from a piloted simulation of a mixed-flow, remote-lift STOVL aircraft. The prediction model is discussed, and the predicted results are compared with the piloted simulation results. The ski jump model can be utilized for basic research on other thrust-vectoring STOVL aircraft performing a ski jump takeoff.

  1. The Search Performance Evaluation and Prediction in Exploratory Search

    OpenAIRE

    LIU, FEI

    2016-01-01

    The exploratory search for complex search tasks requires an effective search behavior model to evaluate and predict user search performance. Few studies have investigated the relationship between user search behavior and search performance in exploratory search. This research adopts a mixed approach combining search system development, user search experiment, search query log analysis, and multivariate regression analysis to resolve the knowledge gap. Through this study, it is shown that expl...

  2. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes FRAPCON-3 and FRAPTRAN were examined to determine whether the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that were used to develop the model. In addition, a brief literature search was performed to determine whether data more recent than the original model development have become available for model comparison.

  3. Performance Assessment of Turbulence Models for the Prediction of the Reactor Internal Flow in the Scale-down APR+

    International Nuclear Information System (INIS)

    Lee, Gonghee; Bang, Youngseok; Woo, Swengwoong; Kim, Dohyeong; Kang, Minku

    2013-01-01

    The types of errors in CFD simulation can be divided into two main categories: numerical errors and model errors. The turbulence model is one of the important sources of model error. In this study, in order to assess the prediction performance of Reynolds-averaged Navier-Stokes (RANS)-based two-equation turbulence models for the analysis of the flow distribution inside a 1/5 scale-down APR+, simulations were conducted with the commercial CFD software ANSYS CFX V. 14. Both the standard k-ε model and the SST model predicted a similar flow pattern inside the reactor, so it was concluded that the prediction performance of the two turbulence models was nearly the same. Complex thermal-hydraulic characteristics exist inside the reactor because the reactor internals consist of the fuel assemblies, control rod assemblies, and the internal structures. Either flow distribution tests on scale-down reactor models or computational fluid dynamics (CFD) simulations have been conducted to understand these complex thermal-hydraulic features inside the reactor.

  4. Predicting the outcomes of performance error indicators on accreditation status in the nuclear power industry

    International Nuclear Information System (INIS)

    Wilson, P.A.

    1986-01-01

    The null hypothesis for this study suggested that there was no significant difference between accredited and non-accredited programs on the following types of performance error indicators: (1) number of significant event reports per unit, (2) number of forced outages per unit, (3) number of unplanned automatic scrams per unit, and (4) amount of equivalent availability per unit. A sample of 90 nuclear power plants was selected for this study. Data were summarized from two databases maintained by the Institute of Nuclear Power Operations. The results of this study did not support the research hypothesis: there was no significant difference between the accredited and non-accredited programs on any of the four performance error indicators. The primary conclusions of this study include the following: (1) The four selected performance error indicators cannot be used individually or collectively to predict accreditation status in the nuclear power industry. (2) Annual performance error indicator ratings cannot be used to determine the effects of performance-based training on plant performance. (3) The four selected performance error indicators cannot be used to measure the effect of operator job performance on plant effectiveness.

  5. Prediction of SFL Interruption Performance from the Results of Arc Simulation during High-Current Phase

    Science.gov (United States)

    Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea

    2015-09-01

    The design and development procedures of SF6 gas circuit breakers are still largely based on trial and error through testing, although the development costs rise every year. Computation cannot yet replace testing satisfactorily because not all of the real processes are taken into account. However, knowledge of the arc behavior and prediction of the thermal flow inside interrupters are more readily obtained by numerical simulation than by experiment, owing to the difficulty of obtaining physical quantities experimentally and the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process: the high-current period, the pre-current-zero period, and the current-zero period. Throughout this work, the pressure rise and the ramp of the pressure inside the chamber before current zero, as well as the post-arc current after current zero, should be good criteria for predicting the short-line fault interruption performance of interrupters.

  6. Investigations of internal turbulent flows in a low-head tubular pump and its performance predictions

    International Nuclear Information System (INIS)

    Tang, X L; Chen, X S; Wang, F J; Yang, W; Wu, Y L

    2012-01-01

    Based on the RANS equations, standard k−ε turbulence model and SIMPLE algorithm, the internal turbulent flows in a low-head tubular pump were simulated by using the FLUENT software. Based on the predicted flow fields, the external performance curves including the head-discharge, efficiency-discharge and power-discharge curves were further obtained. The calculated results indicate that the internal flow pattern is smooth at the best efficiency point (BEP). When it works under off-design operating conditions, the flow pattern inside the diffuser and the discharge passage is disordered, and at the same time, the hydraulic losses mainly come from the secondary flows. At large flow rates, the minimum static pressure occurs near the inlet of the blade pressure surfaces due to the negative attack angle. At small flow rates, the minimum value occurs near the inlet of the suction surfaces. At the BEP, the lowest static pressure appears in the region behind the suction surface inlet. The newly-designed model is validated by the comparisons between its predicted external performance and the experimental data of the JGM-3 model. This research provides some important references for the optimization of a low-head tubular pump.

  7. Predicting Story Goodness Performance from Cognitive Measures Following Traumatic Brain Injury

    Science.gov (United States)

    Le, Karen; Coelho, Carl; Mozeiko, Jennifer; Krueger, Frank; Grafman, Jordan

    2012-01-01

    Purpose: This study examined the prediction of performance on measures of the Story Goodness Index (SGI; Le, Coelho, Mozeiko, & Grafman, 2011) from executive function (EF) and memory measures following traumatic brain injury (TBI). It was hypothesized that EF and memory measures would significantly predict SGI outcomes. Method: One hundred…

  8. A systematic review of breast cancer incidence risk prediction models with meta-analysis of their performance.

    Science.gov (United States)

    Meads, Catherine; Ahmed, Ikhlaaq; Riley, Richard D

    2012-04-01

    The 'Gail 2' model showed an average C statistic of 0.63 (95% CI 0.59-0.67), and the expected/observed ratio of events varied considerably across studies (95% prediction interval for E/O ratio when the model was applied in practice was 0.75-1.19). There is a need for models with better predictive performance but, given the large amount of work already conducted, further improvement of existing models based on conventional risk factors is perhaps unlikely. Research to identify new risk factors with large additional predictive ability is therefore needed, alongside clearer reporting and continual validation of new models as they develop.

  9. PREDICTING THERMAL PERFORMANCE OF ROOFING SYSTEMS IN SURABAYA

    Directory of Open Access Journals (Sweden)

    MINTOROGO Danny Santoso

    2015-07-01

    Full Text Available Traditional roofing systems in developing countries like Indonesia are still dominated by roofs pitched at 30°, 45°, and steeper angles; the most widely used roofing cover materials are traditional clay roof tiles, followed by modern concrete roof tiles and ceramic roof tiles. In the 1990s, shop houses were predominantly built with flat concrete roofs. Green roofs and roof ponds are rarely built to meet sustainable environmental issues. Various roof systems were tested in Surabaya to observe their thermal performance. Mathematical models from three references were also applied in order to compare with the tested roofs. Calculated with the equations of Kabre et al., the 30° pitched concrete-roof-tile, 30° clay-roof-tile, and 45° pitched concrete-roof-tile roofs admit the highest heat flux into the room, in that order. In contrast, the bare-soil concrete roof and the roof pond system admit the least heat flux into the room. Based on predicted calculations without insulation and a cross-ventilated attic space, the roof pond and the bare-soil concrete roof (greenery roof) are the most appropriate roof systems for Surabaya's climate; the least recommended are the 30° or 45° pitched roofing systems with concrete roof tiles.

  10. Decision tree-based learning to predict patient controlled analgesia consumption and readjustment

    Directory of Open Access Journals (Sweden)

    Hu Yuh-Jyh

    2012-11-01

    Full Text Available Abstract Background Appropriate postoperative pain management contributes to earlier mobilization, shorter hospitalization, and reduced cost. The undertreatment of pain may impede short-term recovery and have a detrimental long-term effect on health. This study focuses on Patient Controlled Analgesia (PCA), which is a delivery system for pain medication. This study proposes and demonstrates how to use machine learning and data mining techniques to predict analgesic requirements and PCA readjustment. Methods The sample in this study included 1099 patients. Every patient was described by 280 attributes, including the class attribute. In addition to commonly studied demographic and physiological factors, this study emphasizes attributes related to PCA. We used decision tree-based learning algorithms to predict analgesic consumption and PCA control readjustment based on the first few hours of PCA medications. We also developed a nearest neighbor-based data cleaning method to alleviate the class-imbalance problem in PCA setting readjustment prediction. Results The prediction accuracies of total analgesic consumption (continuous dose and PCA dose) and PCA analgesic requirement (PCA dose only) by an ensemble of decision trees were 80.9% and 73.1%, respectively. Decision tree-based learning outperformed Artificial Neural Network, Support Vector Machine, Random Forest, Rotation Forest, and Naïve Bayesian classifiers in analgesic consumption prediction. The proposed data cleaning method improved the performance of every learning method in this study of PCA setting readjustment prediction. Comparative analysis identified the informative attributes from the data mining models and compared them with the correlates of analgesic requirement reported in previous works. Conclusion This study presents a real-world application of data mining to anesthesiology. Unlike previous research, this study considers a wider variety of predictive factors, including PCA
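    The nearest-neighbor data-cleaning idea for class imbalance can be sketched as follows. This is a hypothetical Tomek-link-style stand-in, not the paper's actual cleaning algorithm: majority-class samples whose nearest neighbor belongs to the minority class are treated as borderline and dropped before training.

```python
from math import dist

def nn_clean(X, y, majority=0):
    """Drop majority-class samples whose nearest neighbour belongs to the
    minority class (a Tomek-link-style heuristic, illustrative only)."""
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        # index of the nearest neighbour, excluding the sample itself
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: dist(xi, X[k]))
        if yi == majority and y[j] != majority:
            continue  # borderline majority sample: remove it
        keep.append(i)
    return [X[i] for i in keep], [y[i] for i in keep]

# toy data: the last majority point sits inside the minority cluster
X = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.95, 1.0), (0.9, 0.9)]
y = [0, 0, 1, 1, 0]
Xc, yc = nn_clean(X, y)  # the intruding (0.9, 0.9) sample is removed
```

    A cleaned training set like this can then be fed to any decision-tree learner; the quadratic nearest-neighbor scan is fine for small samples but would need a spatial index at the paper's scale of ~1099 patients with 280 attributes.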

  11. PROSPER: an integrated feature-based tool for predicting protease substrate cleavage sites.

    Directory of Open Access Journals (Sweden)

    Jiangning Song

    Full Text Available The ability to catalytically cleave protein substrates after synthesis is fundamental for all forms of life. Accordingly, site-specific proteolysis is one of the most important post-translational modifications. The key to understanding the physiological role of a protease is to identify its natural substrate(s). Knowledge of the substrate specificity of a protease can dramatically improve our ability to predict its target protein substrates, but this information must be utilized in an effective manner in order to efficiently identify protein substrates by in silico approaches. To address this problem, we present PROSPER, an integrated feature-based server for in silico identification of protease substrates and their cleavage sites for twenty-four different proteases. PROSPER utilizes established specificity information for these proteases (derived from the MEROPS database) with a machine learning approach to predict protease cleavage sites by using different, but complementary, sequence and structure characteristics. Features used by PROSPER include local amino acid sequence profile, predicted secondary structure, solvent accessibility and predicted native disorder. Thus, for proteases with known amino acid specificity, PROSPER provides a convenient, pre-prepared tool for use in identifying protein substrates for the enzymes. Systematic prediction analysis for the twenty-four proteases thus far included in the database revealed that the features we have included in the tool strongly improve performance in terms of cleavage site prediction, as evidenced by their contribution to performance improvement in terms of identifying known cleavage sites in substrates for these enzymes. In comparison with two state-of-the-art prediction tools, PoPS and SitePrediction, PROSPER achieves greater accuracy and coverage. 
To our knowledge, PROSPER is the first comprehensive server capable of predicting cleavage sites of multiple proteases within a single substrate

  12. Predictive tool of energy performance of cold storage in agrifood industries: The Portuguese case study

    International Nuclear Information System (INIS)

    Nunes, José; Neves, Diogo; Gaspar, Pedro D.; Silva, Pedro D.; Andrade, Luís P.

    2014-01-01

    Highlights: • A predictive tool for assessment of the energy performance in agrifood industries that use cold storage is developed. • The correlations used by the predictive tool result from the greatest number of data sets collected to date in Portugal. • Strong relationships between raw material, energy consumption and volume of cold stores were established. • Case studies were analyzed that demonstrate the applicability of the tool. • The tool results are useful in deciding on practical measures for the improvement of energy efficiency. - Abstract: Food processing and conservation represent decisive factors for the sustainability of the planet given the significant growth of the world population in the last decades. Therefore, the cooling process during the manufacture and/or storage of food products has been subject of study and improvement in order to ensure the food supply with good quality and safety. A predictive tool for assessment of the energy performance in agrifood industries that use cold storage is developed in order to contribute to the improvement of the energy efficiency of this industry. The predictive tool is based on a set of characteristic correlated parameters: amount of raw material annually processed, annual energy consumption and volume of cold rooms. Case studies of application of the predictive tool consider industries in the meat sector, specifically slaughterhouses. The results obtained help in deciding on practical measures for improvement of the energy efficiency in this industry
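    A correlation-based tool of this kind reduces, at its simplest, to fitting a regression between annual raw material processed and annual energy consumption, then reading off the expected consumption for a new plant. The sketch below uses an ordinary least-squares line with entirely hypothetical plant figures, not the Portuguese survey data:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# hypothetical plants: tonnes of meat processed vs MWh consumed per year
tonnes = [500, 1000, 2000, 4000]
mwh = [260, 510, 1010, 2010]
a, b = fit_line(tonnes, mwh)
predicted = a * 3000 + b  # expected annual energy use for a 3000 t/year plant
```

    A plant whose measured consumption sits well above the fitted line would then be a candidate for efficiency measures; the real tool correlates three parameters (raw material, energy, cold-room volume) rather than two.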

  13. SRMDAP: SimRank and Density-Based Clustering Recommender Model for miRNA-Disease Association Prediction

    Directory of Open Access Journals (Sweden)

    Xiaoying Li

    2018-01-01

    Full Text Available Aberrant expression of microRNAs (miRNAs) can be applied for the diagnosis, prognosis, and treatment of human diseases. Identifying the relationship between miRNA and human disease is important to further investigate the pathogenesis of human diseases. However, experimental identification of the associations between diseases and miRNAs is time-consuming and expensive. Computational methods are efficient approaches to determine the potential associations between diseases and miRNAs. This paper presents a new computational method based on the SimRank and density-based clustering recommender model for miRNA-disease association prediction (SRMDAP). The AUC of 0.8838 based on leave-one-out cross-validation and case studies suggested the excellent performance of the SRMDAP in predicting miRNA-disease associations. SRMDAP could also predict diseases without any related miRNAs and miRNAs without any related diseases.
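    The SimRank half of such a model scores two nodes as similar when their neighbors are similar, computed by fixed-point iteration. A minimal sketch on a toy miRNA/disease graph (the graph and decay constant here are illustrative, not from the paper):

```python
def simrank(neighbors, C=0.8, iters=10):
    """Basic synchronous SimRank iteration on an undirected graph given
    as an adjacency dict: s(a,b) = C/(|N(a)||N(b)|) * sum s(u,v)."""
    nodes = list(neighbors)
    sim = {a: {b: 1.0 if a == b else 0.0 for b in nodes} for a in nodes}
    for _ in range(iters):
        new = {a: {} for a in nodes}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[a][b] = 1.0
                elif neighbors[a] and neighbors[b]:
                    s = sum(sim[u][v] for u in neighbors[a] for v in neighbors[b])
                    new[a][b] = C * s / (len(neighbors[a]) * len(neighbors[b]))
                else:
                    new[a][b] = 0.0
        sim = new
    return sim

# toy graph: miRNAs m1, m2 both link to disease d1; m2 also links to d2
g = {"m1": ["d1"], "m2": ["d1", "d2"], "d1": ["m1", "m2"], "d2": ["m2"]}
sim = simrank(g)  # sim["m1"]["m2"] converges towards 2/3 here
```

    In a recommender setting, high SimRank scores between a disease and miRNAs not yet linked to it become candidate associations; the density-based clustering step in SRMDAP then refines these candidates.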

  14. Screw Remaining Life Prediction Based on Quantum Genetic Algorithm and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Xiaochen Zhang

    2017-01-01

    Full Text Available To predict the remaining life of a ball screw, a screw remaining life prediction method based on quantum genetic algorithm (QGA) and support vector machine (SVM) is proposed. A screw accelerated test bench is introduced. Accelerometers are installed to monitor the performance degradation of the ball screw. Combined with wavelet packet decomposition and isometric mapping (Isomap), the sensitive feature vectors are obtained and stored in a database. Meanwhile, the sensitive feature vectors are randomly chosen from the database and constitute training samples and testing samples. Then the optimal kernel function parameter and penalty factor of the SVM are searched with the method of QGA. Finally, the training samples are used to train the optimized SVM while testing samples are adopted to test the prediction accuracy of the trained SVM, so that the screw remaining life prediction model can be obtained. The experiment results show that the screw remaining life prediction model could effectively predict screw remaining life.
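    The hyperparameter search step can be illustrated with a plain (classical) genetic algorithm standing in for the quantum GA, optimizing two real-valued parameters such as the SVM's log10(C) and log10(gamma). The fitness function below is a synthetic surrogate for cross-validation accuracy, peaking at a known point; everything here is an assumed sketch, not the paper's QGA:

```python
import random

def ga_search(fitness, bounds, pop=20, gens=30, seed=0):
    """Elitist real-valued genetic algorithm: keep the top half, breed
    children as averaged parents plus Gaussian mutation."""
    rnd = random.Random(seed)
    P = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rnd.sample(elite, 2)
            child = [(x + y) / 2 + rnd.gauss(0, 0.1) for x, y in zip(a, b)]
            child = [min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)]
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# surrogate for SVM cross-validation accuracy over (log10 C, log10 gamma),
# peaking at log10(C) = 2, log10(gamma) = -3
fit = lambda p: -((p[0] - 2) ** 2 + (p[1] + 3) ** 2)
best = ga_search(fit, [(-2, 4), (-5, 1)])
```

    In the real pipeline, `fit` would train an SVM on the training feature vectors at the candidate (C, gamma) and return held-out accuracy; the QGA differs mainly in encoding candidates as quantum-bit probability amplitudes.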

  15. Predictive factors for masticatory performance in Duchenne muscular dystrophy

    NARCIS (Netherlands)

    Bruggen, H.W. van; Engel-Hoek, L. van den; Steenks, M.H.; Bronkhorst, E.M.; Creugers, N.H.; Groot, I.J.M. de; Kalaykova, S.

    2014-01-01

    Patients with Duchenne muscular dystrophy (DMD) report masticatory and swallowing problems. Such problems may cause complications such as choking, and feeling of food sticking in the throat. We investigated whether masticatory performance in DMD is objectively impaired, and explored predictive

  16. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  17. Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.

    Science.gov (United States)

    Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O

    2017-08-01

    To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.

  18. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.

    Science.gov (United States)

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M

    2017-05-01

    Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  19. Predicting infant cortical surface development using a 4D varifold-based learning framework and local topography-based shape morphing.

    Science.gov (United States)

    Rekik, Islem; Li, Gang; Lin, Weili; Shen, Dinggang

    2016-02-01

    Longitudinal neuroimaging analysis methods have remarkably advanced our understanding of early postnatal brain development. However, learning predictive models to trace forth the evolution trajectories of both normal and abnormal cortical shapes remains broadly absent. To fill this critical gap, we pioneered the first prediction model for longitudinal developing cortical surfaces in infants using a spatiotemporal current-based learning framework solely from the baseline cortical surface. In this paper, we detail this prediction model and even further improve its performance by introducing two key variants. First, we use the varifold metric to overcome the limitations of the current metric for surface registration that was used in our preliminary study. We also extend the conventional varifold-based surface registration model for pairwise registration to a spatiotemporal surface regression model. Second, we propose a morphing process of the baseline surface using its topographic attributes such as normal direction and principal curvature sign. Specifically, our method learns from longitudinal data both the geometric (vertices positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we use the proposed varifold-based shape regression model to estimate geodesic cortical shape evolution trajectories for each training subject. We then build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we select the best learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints, based on similarity metrics between this baseline surface and the learnt baseline population average surface atlas. We used a leave-one-out cross validation method to predict the inner cortical surface shape at 3, 6, 9 and 12 months of age from the baseline cortical surface shape at birth. Our

  20. PEDLA: predicting enhancers with a deep learning-based algorithmic framework.

    Science.gov (United States)

    Liu, Feng; Li, Hao; Ren, Chao; Bo, Xiaochen; Shu, Wenjie

    2016-06-22

    Transcriptional enhancers are non-coding segments of DNA that play a central role in the spatiotemporal regulation of gene expression programs. However, systematically and precisely predicting enhancers remain a major challenge. Although existing methods have achieved some success in enhancer prediction, they still suffer from many issues. We developed a deep learning-based algorithmic framework named PEDLA (https://github.com/wenjiegroup/PEDLA), which can directly learn an enhancer predictor from massively heterogeneous data and generalize in ways that are mostly consistent across various cell types/tissues. We first trained PEDLA with 1,114-dimensional heterogeneous features in H1 cells, and demonstrated that PEDLA framework integrates diverse heterogeneous features and gives state-of-the-art performance relative to five existing methods for enhancer prediction. We further extended PEDLA to iteratively learn from 22 training cell types/tissues. Our results showed that PEDLA manifested superior performance consistency in both training and independent test sets. On average, PEDLA achieved 95.0% accuracy and a 96.8% geometric mean (GM) of sensitivity and specificity across 22 training cell types/tissues, as well as 95.7% accuracy and a 96.8% GM across 20 independent test cell types/tissues. Together, our work illustrates the power of harnessing state-of-the-art deep learning techniques to consistently identify regulatory elements at a genome-wide scale from massively heterogeneous data across diverse cell types/tissues.

  1. Predicting Drug-Target Interactions Based on Small Positive Samples.

    Science.gov (United States)

    Hu, Pengwei; Chan, Keith C C; Hu, Yanxing

    2018-01-01

    A basic task in drug discovery is to find new medication in the form of candidate compounds that act on a target protein. In other words, a drug has to interact with a target, and such drug-target interaction (DTI) is not expected to be random. Significant and interesting patterns are expected to be hidden in them. If these patterns can be discovered, new drugs are expected to be more easily discoverable. Currently, a number of computational methods have been proposed to predict DTIs based on their similarity. However, such an approach does not allow biochemical features to be directly considered. As a result, some methods have been proposed to try to discover patterns in physicochemical interactions. Since the number of potential negative DTIs is very high, both in absolute terms and in comparison to that of the known ones, these methods are rather computationally expensive and they can only rely on subsets, rather than the full set, of negative DTIs for training and validation. As there is always a relatively high chance for negative DTIs to be falsely identified, and as only a partial subset of such DTIs is considered, existing approaches can be further improved to better predict DTIs. In this paper, we present a novel approach, called ODT (one class drug target interaction prediction), for such purpose. One main task of ODT is to discover association patterns between interacting drugs and proteins from the chemical structure of the former and the protein sequence network of the latter. ODT does so in two phases. First, the DTI-network is transformed to a representation by structural properties. Second, it applies a one-class classification algorithm to build a prediction model based only on known positive interactions. We compared the best AUROC scores of the ODT with several state-of-art approaches on Gold standard data. The prediction accuracy of the ODT is superior in comparison with all the other methods at GPCRs dataset and Ion channels dataset. Performance
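    The appeal of one-class classification here is that it needs no negative examples at all. A minimal illustration of the idea, assuming a centroid-plus-radius rule as a toy stand-in for whatever one-class algorithm ODT actually uses:

```python
from math import dist

class CentroidOneClass:
    """Minimal one-class classifier: learn a centroid and a radius from
    positive examples only; anything farther than the radius is rejected.
    Illustrative stand-in, not the paper's method."""
    def fit(self, X, quantile=1.0):
        n = len(X)
        self.c = tuple(sum(col) / n for col in zip(*X))
        radii = sorted(dist(x, self.c) for x in X)
        self.r = radii[int((n - 1) * quantile)]  # quantile < 1 trims outliers
        return self
    def predict(self, x):
        return 1 if dist(x, self.c) <= self.r else 0

# positives = feature vectors of known interacting drug-target pairs (toy)
pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
clf = CentroidOneClass().fit(pos)
inside, outside = clf.predict((0.5, 0.5)), clf.predict((5.0, 5.0))
```

    Because the decision boundary is estimated from positives alone, no unverified "negative" DTIs ever enter training, which is exactly the problem with falsely identified negatives that the abstract describes.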

  2. Harvested Energy Prediction Schemes for Wireless Sensor Networks: Performance Evaluation and Enhancements

    Directory of Open Access Journals (Sweden)

    Muhammad

    2017-01-01

    Full Text Available We review harvested energy prediction schemes to be used in wireless sensor networks and explore the relative merits of landmark solutions. We propose enhancements to the well-known Profile-Energy (Pro-Energy) model, the so-called Improved Profile-Energy (IPro-Energy), and compare its performance with the Accurate Solar Irradiance Prediction Model (ASIM), Pro-Energy, and Weather Conditioned Moving Average (WCMA). The performance metrics considered are the prediction accuracy and the execution time, which measure the implementation complexity. In addition, the effectiveness of the considered models, when integrated in an energy management scheme, is also investigated in terms of the achieved throughput and the energy consumption. Both solar irradiance and wind power datasets are used for the evaluation study. Our results indicate that the proposed IPro-Energy scheme outperforms the other candidate models in terms of the prediction accuracy achieved by up to 78% for short term predictions and 50% for medium term prediction horizons. For long term predictions, its prediction accuracy is comparable to the Pro-Energy model but outperforms the other models by up to 64%. In addition, the IPro scheme is able to achieve the highest throughput when integrated in the developed energy management scheme. Finally, the ASIM scheme reports the smallest implementation complexity.
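    The common core of profile-based harvested-energy predictors is blending the most recent observation with a stored historical profile for the upcoming time slot. A simplified one-step forecast in that spirit (the blending rule, weight, and data are assumptions for illustration, not the exact Pro-Energy or IPro-Energy formulas):

```python
def profile_predict(profile, observed, alpha=0.5):
    """One-step harvested-energy forecast: blend the latest observation
    with the stored profile value for the next slot.
    alpha weights current conditions against the historical profile."""
    t = len(observed)  # index of the next slot to forecast
    return alpha * observed[-1] + (1 - alpha) * profile[t]

# hypothetical solar-harvest profile (Wh per slot) and today's readings
profile = [0, 5, 20, 40, 50, 40, 20, 5]
today = [0, 6, 24, 46]            # today is sunnier than the profile
pred = profile_predict(profile, today)  # forecast for slot 4
```

    Real schemes such as Pro-Energy keep several stored profiles and pick the one most similar to the current day, and WCMA conditions the moving average on weather state; both refinements slot into the same blending structure.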

  3. Predicting performance in a first engineering calculus course: implications for interventions

    Science.gov (United States)

    Hieb, Jeffrey L.; Lyle, Keith B.; Ralston, Patricia A. S.; Chariker, Julia

    2015-01-01

    At the University of Louisville, a large, urban institution in the south-east United States, undergraduate engineering students take their mathematics courses from the school of engineering. In the fall of their freshman year, engineering students take Engineering Analysis I, a calculus-based engineering analysis course. After the first two weeks of the semester, many students end up leaving Engineering Analysis I and moving to a mathematics intervention course. In an effort to retain more students in Engineering Analysis I, the department collaborated with university academic support services to create a summer intervention programme. Students were targeted for the summer programme based on their score on an algebra readiness exam (ARE). In a previous study, the ARE scores were found to be a significant predictor of retention and performance in Engineering Analysis I. This study continues that work, analysing data from students who entered the engineering school in the fall of 2012. The predictive validity of the ARE was verified, and a hierarchical linear regression model was created using math American College Testing (ACT) scores, ARE scores, summer intervention participation, and several metacognitive and motivational factors as measured by subscales of the Motivated Strategies for Learning Questionnaire. In the regression model, ARE score explained an additional 5.1% of the variation in exam performance in Engineering Analysis I beyond math ACT score. Students took the ARE before and after the summer interventions and scores were significantly higher following the intervention. However, intervention participants nonetheless had lower exam scores in Engineering Analysis I. The following factors related to motivation and learning strategies were found to significantly predict exam scores in Engineering Analysis I: time and study environment management, internal goal orientation, and test anxiety. The adjusted R2 for the full model was 0.42, meaning that the
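    The hierarchical regression reported above rests on comparing R² before and after adding a predictor block: the increment (here, 5.1% for ARE beyond ACT) is the extra variance explained. A sketch of that computation on synthetic data (the score distributions and coefficients are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
act = rng.normal(25, 4, n)               # hypothetical math ACT scores
are = 0.5 * act + rng.normal(0, 2, n)    # algebra readiness exam, correlated
exam = 2 * act + 3 * are + rng.normal(0, 10, n)  # course exam outcome

def r2(predictors, y):
    """Training R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

step1 = r2([act], exam)        # step 1: ACT alone
step2 = r2([act, are], exam)   # step 2: ACT + ARE
delta = step2 - step1          # variance explained beyond ACT
```

    In the study's full model, further blocks (intervention participation, motivation and learning-strategy subscales) would be added the same way, each contributing its own ΔR² up to the reported adjusted R² of 0.42.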

  4. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding.

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.
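    The baseline such work compares against, regular genomic selection, is commonly implemented as ridge regression of phenotype on genome-wide marker genotypes (the rrBLUP formulation). A self-contained sketch on simulated 0/1/2 genotype codes; the marker counts, effect sizes, and regularization value are arbitrary illustrations:

```python
import numpy as np

def ridge_markers(M, y, lam=1.0):
    """Ridge regression of phenotype y on marker matrix M:
    solve (M'M + lam*I) b = M'y for the marker effects b."""
    p = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(100, 50)).astype(float)  # 100 lines, 50 SNPs
true_b = np.zeros(50)
true_b[:5] = [1.0, -0.8, 0.6, 0.5, -0.4]  # five causal SNPs, rest neutral
y = M @ true_b + rng.normal(0, 0.5, 100)  # phenotype with noise

b_hat = ridge_markers(M, y)
corr = np.corrcoef(M @ b_hat, y)[0, 1]  # in-sample predictive correlation
```

    A FAST-marker strategy, as the review argues, would restrict or re-weight the columns of M toward function-associated SNPs rather than spreading small effects across all markers as ridge does.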

  5. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
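    The bootstrap machinery described above, resampling the fitting data, refitting the model, and reading a prediction band off the resulting parameter histogram, can be shown with a deliberately tiny stand-in model. Here a one-parameter dose-response slope replaces the salivary-function model; the data points are invented:

```python
import random

def fit_slope(data):
    """Least-squares slope through the origin: outcome = k * dose."""
    num = sum(d * y for d, y in data)
    den = sum(d * d for d, _ in data)
    return num / den

random.seed(0)
data = [(10, 0.21), (20, 0.39), (30, 0.62), (40, 0.83), (50, 0.99)]

# bootstrap: resample (dose, outcome) pairs with replacement and refit
boots = sorted(fit_slope([random.choice(data) for _ in data])
               for _ in range(2000))

new_dose = 35
point = fit_slope(data) * new_dose            # plan-specific prediction
lo = boots[50] * new_dose                     # ~2.5th percentile
hi = boots[1949] * new_dose                   # ~97.5th percentile
```

    In the paper's full procedure the refit model is also perturbed with residual noise drawn from the original fit before each outcome is recorded, so the histogram reflects both parameter uncertainty and residual scatter.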

  6. A Mobile Health Application to Predict Postpartum Depression Based on Machine Learning.

    Science.gov (United States)

    Jiménez-Serrano, Santiago; Tortajada, Salvador; García-Gómez, Juan Miguel

    2015-07-01

    Postpartum depression (PPD) is a disorder that often goes undiagnosed. The development of a screening program requires considerable and careful effort, where evidence-based decisions have to be taken in order to obtain an effective test with a high level of sensitivity and an acceptable specificity that is quick to perform, easy to interpret, culturally sensitive, and cost-effective. The purpose of this article is twofold: first, to develop classification models for detecting the risk of PPD during the first week after childbirth, thus enabling early intervention; and second, to develop a mobile health (m-health) application (app) for the Android® (Google, Mountain View, CA) platform based on the model with best performance, for both mothers who have just given birth and clinicians who want to monitor their patient's test. A set of predictive models for estimating the risk of PPD was trained using machine learning techniques and data about postpartum women collected from seven Spanish hospitals. An internal evaluation was carried out using a hold-out strategy. A simple flowchart and architecture were followed in designing the graphical user interface of the m-health app. Naive Bayes showed the best balance between sensitivity and specificity as a predictive model for PPD during the first week after delivery. It was integrated into the clinical decision support system for Android mobile apps. This approach can enable the early prediction and detection of PPD because it fulfills the conditions of an effective screening test with a high level of sensitivity and specificity that is quick to perform, easy to interpret, culturally sensitive, and cost-effective.
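    A naive Bayes screener over binary questionnaire answers is small enough to sketch in full. The features and training rows below are hypothetical, not the study's Spanish hospital data; the structure (per-class priors, per-feature Bernoulli probabilities with Laplace smoothing, log-sum scoring) is the standard formulation:

```python
from math import log

def train_nb(X, y, smooth=1.0):
    """Bernoulli naive Bayes on binary answers, with Laplace smoothing."""
    model = {}
    for c in set(y):
        rows = [x for x, yi in zip(X, y) if yi == c]
        prior = len(rows) / len(y)
        probs = [(sum(col) + smooth) / (len(rows) + 2 * smooth)
                 for col in zip(*rows)]
        model[c] = (log(prior), probs)
    return model

def predict_nb(model, x):
    def score(c):
        lp, probs = model[c]
        return lp + sum(log(p if xi else 1 - p) for xi, p in zip(x, probs))
    return max(model, key=score)

# toy features: [low_social_support, prior_depression, sleep_problems]
X = [(1, 1, 1), (1, 0, 1), (0, 1, 1), (0, 0, 0), (0, 0, 1), (1, 0, 0)]
y = [1, 1, 1, 0, 0, 0]  # 1 = developed PPD, 0 = did not
model = train_nb(X, y)
risk = predict_nb(model, (1, 1, 0))  # classifies this profile as at-risk
```

    Its tiny model (one prior plus a handful of probabilities per class) is part of why naive Bayes suits an on-device m-health app: scoring is a few additions of logarithms per class.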

  7. Prediction of the performance of an ion chamber amplifier under γ radiation

    International Nuclear Information System (INIS)

    Agarwal, Vivek; Sundarsingh, V.P.; Ramachandran, V.

    2005-01-01

The ion chamber amplifier (ICA) plays a major role in the proper functioning of a nuclear reactor, as it monitors radiation from the reactor by measuring the ionic activity inside the ion chamber. The signal conditioning circuitry of the ICA detects and conditions the weak ionic currents coming from the ion chamber dome. Degradation in the performance of the semiconductor devices used in this part of the ICA can lead to inaccurate monitoring of the reactor operation, resulting in a possible catastrophe due to malfunction. Further, the response of the ICA under irradiation also depends upon the strength of the input signal (ionic) current it is required to handle. The active devices used in the ICA under study are operational amplifiers (Op-Amps) such as DN8500A and OPA111, instrumentation amplifier INA101, transistor 2N2920A and a voltage reference device, AD584. Since these devices may be sensitive to radiation, one must know their radiation behaviour so that the performance of the ICA can be predicted. This paper examines the performance of the ICA by characterising the radiation profiles of its vital components, viz. the Op-Amps, instrumentation amplifiers, transistors, etc., by monitoring their parametric changes on-line, i.e. when the source is on and the devices are biased. The simulation runs involve the simulation of the entire ICA circuitry using the changed values of the vital parameters such as input bias current and input offset voltage. The main advantage of this method is that it obviates irradiating the whole ICA circuit to study its irradiation performance, and simulates an environment of radiation leakage around the ICA. Based on this study, results are presented to predict the performance of the ICA

  8. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
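The model-free update the abstract describes — an action value adjusted by a reward prediction error — can be written in a few lines. This is a toy two-action bandit illustrating the quantity, not the study's task or its EEG analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
Q = np.zeros(2)            # learned action values
alpha = 0.1                # learning rate
true_p = [0.2, 0.8]        # hypothetical reward probabilities per action

for _ in range(2000):
    a = rng.integers(0, 2)                  # random exploration
    r = float(rng.random() < true_p[a])     # binary reward outcome
    delta = r - Q[a]                        # reward prediction error
    Q[a] += alpha * delta                   # expectancy-violation update
```

After enough trials the values approach the true reward probabilities; a model-based learner would instead derive those values from an explicit model of the task's contingencies and would generate prediction errors against that model's estimates.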

  9. Rainfall Prediction of Indian Peninsula: Comparison of Time Series Based Approach and Predictor Based Approach using Machine Learning Techniques

    Science.gov (United States)

    Dash, Y.; Mishra, S. K.; Panigrahi, B. K.

    2017-12-01

Prediction of the northeast/post-monsoon rainfall that occurs during October, November and December (OND) over the Indian peninsula is a challenging task due to the dynamic nature of the uncertain, chaotic climate. It is imperative to elucidate this issue by examining the performance of different machine learning (ML) approaches. The prime objective of this research is to compare between a) statistical prediction using historical rainfall observations and global atmosphere-ocean predictors like Sea Surface Temperature (SST) and Sea Level Pressure (SLP) and b) empirical prediction based on a time series analysis of past rainfall data without using any other predictors. Initially, ML techniques were applied to SST and SLP data (1948-2014) obtained from the NCEP/NCAR reanalysis monthly means provided by the NOAA ESRL PSD. Later, this study investigated the applicability of ML methods using the OND rainfall time series for 1948-2014 and forecasted up to 2018. The predicted values of the aforementioned methods were verified using observed time series data collected from the Indian Institute of Tropical Meteorology, and the results revealed good performance of the ML algorithms with minimal error scores. Thus, it is found that both statistical and empirical methods are useful for long range climatic projections.

  10. Linear Model-Based Predictive Control of the LHC 1.8 K Cryogenic Loop

    CERN Document Server

    Blanco-Viñuela, E; De Prada-Moraga, C

    1999-01-01

    The LHC accelerator will employ 1800 superconducting magnets (for guidance and focusing of the particle beams) in a pressurized superfluid helium bath at 1.9 K. This temperature is a severely constrained control parameter in order to avoid the transition from the superconducting to the normal state. Cryogenic processes are difficult to regulate due to their highly non-linear physical parameters (heat capacity, thermal conductance, etc.) and undesirable peculiarities like non self-regulating process, inverse response and variable dead time. To reduce the requirements on either temperature sensor or cryogenic system performance, various control strategies have been investigated on a reduced-scale LHC prototype built at CERN (String Test). Model Based Predictive Control (MBPC) is a regulation algorithm based on the explicit use of a process model to forecast the plant output over a certain prediction horizon. This predicted controlled variable is used in an on-line optimization procedure that minimizes an approp...
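The receding-horizon idea behind MBPC can be illustrated on a toy scalar plant. A model x⁺ = a·x + b·u forecasts the output over a horizon, an on-line optimization picks the input sequence minimizing tracking error plus a small input penalty, and only the first input is applied before re-optimizing. All numbers are illustrative; this is not the String Test cryogenic plant model.

```python
import numpy as np

a, b = 0.9, 0.5                     # assumed plant parameters
x, x_ref, N, lam = 5.0, 1.0, 10, 0.01   # state, setpoint, horizon, penalty

traj = []
for _ in range(30):
    # Prediction matrices: x_{k+1} = a**(k+1)*x + sum_j a**(k-j)*b*u_j
    F = np.array([a ** (k + 1) for k in range(N)])
    G = np.array([[a ** (k - j) * b if j <= k else 0.0 for j in range(N)]
                  for k in range(N)])
    # Minimize ||F*x + G*u - x_ref||^2 + lam*||u||^2 as stacked least squares
    A = np.vstack([G, np.sqrt(lam) * np.eye(N)])
    rhs = np.concatenate([x_ref - F * x, np.zeros(N)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    x = a * x + b * u[0]            # receding horizon: apply first input only
    traj.append(x)
```

The explicit forecast is what lets MPC handle the dead times and inverse responses mentioned above: the optimizer "sees" the delayed effect of each input before committing to it.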

  11. Calorimeter prediction based on multiple exponentials

    International Nuclear Information System (INIS)

    Smith, M.K.; Bracken, D.S.

    2002-01-01

    Calorimetry allows very precise measurements of nuclear material to be carried out, but it also requires relatively long measurement times to do so. The ability to accurately predict the equilibrium response of a calorimeter would significantly reduce the amount of time required for calorimetric assays. An algorithm has been developed that is effective at predicting the equilibrium response. This multi-exponential prediction algorithm is based on an iterative technique using commercial fitting routines that fit a constant plus a variable number of exponential terms to calorimeter data. Details of the implementation and the results of trials on a large number of calorimeter data sets will be presented
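The constant-plus-exponential idea can be sketched with a single exponential term: fit c + A·exp(−t/τ) to the early part of a (synthetic) calorimeter trace, and read off the fitted constant c as the predicted equilibrium response. The real algorithm iterates over a variable number of exponential terms; the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, c, A, tau):
    # Constant plus one exponential term; c is the equilibrium response
    return c + A * np.exp(-t / tau)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 60)                                # hours, synthetic
y = model(t, 10.0, -4.0, 2.0) + rng.normal(0, 0.02, t.size)  # noisy response

popt, _ = curve_fit(model, t, y, p0=[8.0, -3.0, 1.0])
equilibrium = popt[0]   # predicted equilibrium, available before settling
```

The payoff is the one stated in the abstract: the fitted constant is available after a fraction of the settling time, so the assay need not wait for true equilibrium.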

  12. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture

    Science.gov (United States)

    Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang

    2016-01-01

    The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to the self-charging and the capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as the support vector machines (SVM) or the Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It can process multimodality by fitting different segments of trajectories with different GPR models separately, such that the tiny differences among these segments can be revealed. The method is demonstrated to be effective for prediction by the excellent predictive result of the experiments on the two commercial and chargeable Type 1850 Lithium-ion batteries, provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, GPM can yield the predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176
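A single-GPR sketch shows the predictive-interval property the abstract highlights; the paper's GPM goes further by fitting several such GPR experts to different trajectory segments. The capacity data and kernel settings below are illustrative, not the NASA dataset.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic capacity-fade trajectory (normalized capacity vs. cycle number)
rng = np.random.default_rng(4)
cycles = np.arange(0.0, 100.0, 2.0)
capacity = 1.0 - 0.003 * cycles + rng.normal(0, 0.005, cycles.size)

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=50.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
).fit(cycles[:, None], capacity)

# Extrapolate beyond the observed cycles with a 95% predictive interval
mean, std = gpr.predict(np.array([[120.0]]), return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```

The interval `[lower, upper]` is what a point predictor such as an SVM cannot provide directly, and it is the basis for the reliability claim made for the GPM.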

  13. Nonlinear Model Predictive Control Based on a Self-Organizing Recurrent Neural Network.

    Science.gov (United States)

    Han, Hong-Gui; Zhang, Lu; Hou, Ying; Qiao, Jun-Fei

    2016-02-01

    A nonlinear model predictive control (NMPC) scheme is developed in this paper based on a self-organizing recurrent radial basis function (SR-RBF) neural network, whose structure and parameters are adjusted concurrently in the training process. The proposed SR-RBF neural network is represented in a general nonlinear form for predicting the future dynamic behaviors of nonlinear systems. To improve the modeling accuracy, a spiking-based growing and pruning algorithm and an adaptive learning algorithm are developed to tune the structure and parameters of the SR-RBF neural network, respectively. Meanwhile, for the control problem, an improved gradient method is utilized for the solution of the optimization problem in NMPC. The stability of the resulting control system is proved based on the Lyapunov stability theory. Finally, the proposed SR-RBF neural network-based NMPC (SR-RBF-NMPC) is used to control the dissolved oxygen (DO) concentration in a wastewater treatment process (WWTP). Comparisons with other existing methods demonstrate that the SR-RBF-NMPC can achieve a considerably better model fitting for WWTP and a better control performance for DO concentration.

  14. Occupant feedback based model predictive control for thermal comfort and energy optimization: A chamber experimental evaluation

    International Nuclear Information System (INIS)

    Chen, Xiao; Wang, Qian; Srebric, Jelena

    2016-01-01

Highlights: • This study evaluates an occupant-feedback driven Model Predictive Controller (MPC). • The MPC adjusts indoor temperature based on a dynamic thermal sensation (DTS) model. • A chamber model for predicting chamber air temperature is developed and validated. • Experiments show that MPC using DTS performs better than using Predicted Mean Vote. - Abstract: In current centralized building climate control, occupants do not have much opportunity to intervene in the automated control system. This study explores the benefit of using thermal comfort feedback from occupants in a model predictive control (MPC) design based on a novel dynamic thermal sensation (DTS) model. This DTS-model-based MPC was evaluated in chamber experiments. A hierarchical structure for thermal control was adopted in the chamber experiments. At the high level, an MPC controller calculates the optimal supply air temperature of the chamber heating, ventilation, and air conditioning (HVAC) system, using the feedback of occupants’ votes on thermal sensation. At the low level, the actual supply air temperature is controlled by the chiller/heater using PI control to achieve the optimal set point. This DTS-based MPC was also compared to an MPC designed based on the Predicted Mean Vote (PMV) model for thermal sensation. The experiment results demonstrated that the DTS-based MPC using occupant feedback allows significant energy saving while maintaining occupant thermal comfort compared to the PMV-based MPC.

  15. A cycle simulation model for predicting the performance of a diesel engine fuelled by diesel and biodiesel blends

    International Nuclear Information System (INIS)

    Gogoi, T.K.; Baruah, D.C.

    2010-01-01

Among the alternative fuels, biodiesel and its blends are considered suitable and the most promising fuels for diesel engines. The properties of biodiesel are similar to those of diesel. Many researchers have experimentally evaluated the performance characteristics of conventional diesel engines fuelled by biodiesel and its blends. However, experiments require enormous effort, money and time. Hence, a cycle simulation model incorporating a thermodynamics-based single zone combustion model is developed to predict the performance of a diesel engine. The effect of engine speed and compression ratio on brake power and brake thermal efficiency is analysed through the model. The fuels considered for the analysis are diesel and 20%, 40% and 60% blends of diesel with biodiesel derived from Karanja oil (Pongamia glabra). The model predicts similar performance with diesel and the 20% and 40% blends. However, with the 60% blend, it reveals better performance in terms of brake power and brake thermal efficiency.

  16. Scientific reporting is suboptimal for aspects that characterize genetic risk prediction studies: a review of published articles based on the Genetic RIsk Prediction Studies statement.

    Science.gov (United States)

    Iglesias, Adriana I; Mihaescu, Raluca; Ioannidis, John P A; Khoury, Muin J; Little, Julian; van Duijn, Cornelia M; Janssens, A Cecile J W

    2014-05-01

Our main objective was to raise awareness of the areas that need improvement in the reporting of genetic risk prediction articles for future publications, based on the Genetic RIsk Prediction Studies (GRIPS) statement. We evaluated studies that developed or validated a prediction model based on multiple DNA variants, using empirical data, and were published in 2010. A data extraction form based on the 25 items of the GRIPS statement was created and piloted. Forty-two studies met our inclusion criteria. Overall, more than half of the evaluated items (34 of 62) were reported in at least 85% of included articles. Seventy-seven percent of the articles were identified as genetic risk prediction studies through title assessment, but only 31% used the keywords recommended by GRIPS in the title or abstract. Seventy-four percent mentioned which allele was the risk variant. Overall, only 10% of the articles reported all essential items needed to perform external validation of the risk model. Completeness of reporting in genetic risk prediction studies is adequate for general elements of study design but is suboptimal for several aspects that characterize genetic risk prediction studies, such as description of the model construction. Improvements in the transparency of reporting of these aspects would facilitate the identification, replication, and application of genetic risk prediction models. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Performance prediction of a proton exchange membrane fuel cell using the ANFIS model

    Energy Technology Data Exchange (ETDEWEB)

    Vural, Yasemin; Ingham, Derek B.; Pourkashanian, Mohamed [Centre for Computational Fluid Dynamics, University of Leeds, Houldsworth Building, LS2 9JT Leeds (United Kingdom)

    2009-11-15

    In this study, the performance (current-voltage curve) prediction of a Proton Exchange Membrane Fuel Cell (PEMFC) is performed for different operational conditions using an Adaptive Neuro-Fuzzy Inference System (ANFIS). First, ANFIS is trained with a set of input and output data. The trained model is then tested with an independent set of experimental data. The trained and tested model is then used to predict the performance curve of the PEMFC under various operational conditions. The model shows very good agreement with the experimental data and this indicates that ANFIS is capable of predicting fuel cell performance (in terms of cell voltage) with a high accuracy in an easy, rapid and cost effective way for the case presented. Finally, the capabilities and the limitations of the model for the application in fuel cells have been discussed. (author)

  18. Estimating Driving Performance Based on EEG Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Jung Tzyy-Ping

    2005-01-01

Full Text Available The growing number of traffic accidents in recent years has become a serious concern to society. Accidents caused by the driver's drowsiness behind the steering wheel have a high fatality rate because of the marked decline in the driver's abilities of perception, recognition, and vehicle control while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain their maximum performance. This paper proposes an EEG-based drowsiness estimation system that combines electroencephalogram (EEG) log subband power spectra, correlation analysis, principal component analysis, and linear regression models to indirectly estimate the driver's drowsiness level in a virtual-reality-based driving simulator. Our results demonstrate that it is feasible to quantitatively and accurately estimate driving performance, expressed as the deviation between the center of the vehicle and the center of the cruising lane, in a realistic driving simulator.
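The estimation pipeline named above — log subband power features, dimensionality reduction by PCA, then linear regression onto the driving error — can be sketched as follows. All data are synthetic: a latent "drowsiness" variable drives both the EEG-like features and the lane deviation, standing in for the simulator recordings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_epochs, n_features = 200, 30                  # e.g., channel-band log powers
drowsiness = rng.normal(0, 1, n_epochs)         # latent alertness state
loadings = rng.normal(0, 1, n_features)
log_power = (np.outer(drowsiness, loadings)
             + rng.normal(0, 0.5, (n_epochs, n_features)))   # EEG features
deviation = 0.8 * drowsiness + rng.normal(0, 0.2, n_epochs)  # lane deviation

Z = PCA(n_components=5).fit_transform(log_power)    # dimension reduction
reg = LinearRegression().fit(Z, deviation)          # linear estimator
r2 = reg.score(Z, deviation)
```

PCA here plays the role described in the abstract: it compresses the correlated band-power channels into a few components before the linear model maps them to driving performance.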

  19. Performance-Based Funding Brief

    Science.gov (United States)

    Washington Higher Education Coordinating Board, 2011

    2011-01-01

    A number of states have made progress in implementing performance-based funding (PFB) and accountability. This policy brief summarizes main features of performance-based funding systems in three states: Tennessee, Ohio, and Indiana. The brief also identifies key issues that states considering performance-based funding must address, as well as…

  20. Rutting Prediction in Asphalt Pavement Based on Viscoelastic Theory

    Directory of Open Access Journals (Sweden)

    Nahi Mohammed Hadi

    2016-01-01

Full Text Available Rutting is one of the most disruptive failures of asphalt roads because of the disturbance it causes to drivers. Predicting asphalt pavement rutting is an essential tool that leads to better asphalt mixture design. This work describes a method of predicting the behaviour of various asphalt pavement mixes and linking these predictions to accelerated performance testing. The objective of this study is to develop a finite element model based on viscoplastic theory for simulating the laboratory testing of asphalt mixes in the Hamburg Wheel Rut Tester (HWRT) for rutting. The creep parameters C1, C2 and C3 are developed from the triaxial repeated-load creep test at 50°C and at a frequency of 1 Hz, and the modulus of elasticity and Poisson's ratio are determined at the same temperature. A viscoelastic (creep) model is adopted using an FE simulator (ANSYS) in order to calculate the rutting for various mixes under a uniform loading pressure of 500 kPa. An eight-node element with three degrees of freedom (UX, UY, and UZ) is used for the simulation. The creep model developed for the HWRT tester was verified by comparing the predicted rut depths with the measured ones and by comparing the rut depth with an ABAQUS result from the literature. Reasonable agreement can be obtained between the predicted rut depths and the measured ones. Moreover, it is found that creep model parameters C1 and C3 have a strong relationship with rutting. It is clear that the parameter C1 influences rutting more strongly than the parameter C3. Finally, it can be concluded that a creep model based on the finite element method can be used as an effective tool to analyse rutting of asphalt pavements.
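A three-parameter creep law of the kind used in such FE rutting models is commonly written in time-hardening form, creep strain rate = C1·σ^C2·t^C3. The sketch below integrates that rate over one hour of loading; the coefficient values are assumed for illustration only (the paper's fitted values are not given in this record), and only the loading pressure of 500 kPa comes from the abstract.

```python
import numpy as np

C1, C2, C3 = 1e-8, 1.2, -0.6            # hypothetical creep parameters
sigma = 500.0                           # kPa, uniform loading pressure
t = np.linspace(1e-3, 3600.0, 10000)    # s, one hour of loading

# Time-hardening creep strain rate: decays with time since C3 < 0
rate = C1 * sigma ** C2 * t ** C3

# Accumulated creep strain by trapezoidal integration of the rate
creep_strain = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))
```

The decaying rate (C3 < 0) captures why C1, which scales the whole curve, dominates predicted rutting relative to C3, which only shapes the decay.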

  1. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle.

    Science.gov (United States)

    Cerezo, Rebeca; Esteban, María; Sánchez-Santillán, Miguel; Núñez, José C

    2017-01-01

Introduction: Research about student performance has traditionally considered academic procrastination as a behavior that has negative effects on academic achievement. Although there is much evidence for this in class-based environments, there is a lack of research on Computer-Based Learning Environments (CBLEs). Therefore, the purpose of this study is to evaluate student behavior in a blended learning program, and specifically procrastination behavior in relation to performance, through Data Mining techniques. Materials and Methods: A sample of 140 undergraduate students participated in a blended learning experience implemented in a Moodle (Modular Object Oriented Developmental Learning Environment) Management System. Relevant interaction variables were selected for the study, taking into account student achievement and analyzing data by means of association rules, a mining technique. The association rules were derived and filtered through two selection criteria: (1) rules must have an accuracy over 0.8, and (2) they must be present in both sub-samples. Results: The findings of our study highlight the influence of time management in online learning environments, particularly on academic achievement, as there is an association between procrastination variables and student performance. Conclusion: The negative impact of procrastination on learning outcomes has been observed again, but in virtual learning environments, where practical implications, prevention, and intervention are different from class-based learning. These aspects are discussed to help resolve student difficulties at various ages.

  2. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle

    Science.gov (United States)

    Cerezo, Rebeca; Esteban, María; Sánchez-Santillán, Miguel; Núñez, José C.

    2017-01-01

    Introduction: Research about student performance has traditionally considered academic procrastination as a behavior that has negative effects on academic achievement. Although there is much evidence for this in class-based environments, there is a lack of research on Computer-Based Learning Environments (CBLEs). Therefore, the purpose of this study is to evaluate student behavior in a blended learning program and specifically procrastination behavior in relation to performance through Data Mining techniques. Materials and Methods: A sample of 140 undergraduate students participated in a blended learning experience implemented in a Moodle (Modular Object Oriented Developmental Learning Environment) Management System. Relevant interaction variables were selected for the study, taking into account student achievement and analyzing data by means of association rules, a mining technique. The association rules were arrived at and filtered through two selection criteria: 1, rules must have an accuracy over 0.8 and 2, they must be present in both sub-samples. Results: The findings of our study highlight the influence of time management in online learning environments, particularly on academic achievement, as there is an association between procrastination variables and student performance. Conclusion: Negative impact of procrastination in learning outcomes has been observed again but in virtual learning environments where practical implications, prevention of, and intervention in, are different from class-based learning. These aspects are discussed to help resolve student difficulties at various ages. PMID:28883801

  3. Procrastinating Behavior in Computer-Based Learning Environments to Predict Performance: A Case Study in Moodle

    Directory of Open Access Journals (Sweden)

    Rebeca Cerezo

    2017-08-01

Full Text Available Introduction: Research about student performance has traditionally considered academic procrastination as a behavior that has negative effects on academic achievement. Although there is much evidence for this in class-based environments, there is a lack of research on Computer-Based Learning Environments (CBLEs). Therefore, the purpose of this study is to evaluate student behavior in a blended learning program, and specifically procrastination behavior in relation to performance, through Data Mining techniques. Materials and Methods: A sample of 140 undergraduate students participated in a blended learning experience implemented in a Moodle (Modular Object Oriented Developmental Learning Environment) Management System. Relevant interaction variables were selected for the study, taking into account student achievement and analyzing data by means of association rules, a mining technique. The association rules were derived and filtered through two selection criteria: (1) rules must have an accuracy over 0.8, and (2) they must be present in both sub-samples. Results: The findings of our study highlight the influence of time management in online learning environments, particularly on academic achievement, as there is an association between procrastination variables and student performance. Conclusion: The negative impact of procrastination on learning outcomes has been observed again, but in virtual learning environments, where practical implications, prevention, and intervention are different from class-based learning. These aspects are discussed to help resolve student difficulties at various ages.
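The rule-filtering criterion used in the three records above — keep a rule only if its accuracy (confidence) exceeds 0.8 in both sub-samples — can be sketched directly. The boolean data below are synthetic, not the study's Moodle interaction logs, and the rule "procrastinates → low achievement" is a hypothetical example of the kind of rule mined.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 140                                   # same sample size as the study
procrastinates = rng.random(n) < 0.4
# Hypothetical association: procrastinators mostly underperform
low_achievement = np.where(procrastinates,
                           rng.random(n) < 0.9,
                           rng.random(n) < 0.2)

def confidence(antecedent, consequent):
    # Accuracy (confidence) of the rule antecedent -> consequent
    return (antecedent & consequent).sum() / antecedent.sum()

# Selection criteria: confidence > 0.8 in both sub-samples
half = n // 2
keep_rule = all(confidence(procrastinates[s], low_achievement[s]) > 0.8
                for s in (slice(0, half), slice(half, None)))
```

Requiring the rule to hold in both sub-samples is a simple stability check against rules that fit one half of the cohort by chance.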

  4. Advanced Emergency Braking Control Based on a Nonlinear Model Predictive Algorithm for Intelligent Vehicles

    Directory of Open Access Journals (Sweden)

    Ronghui Zhang

    2017-05-01

Full Text Available Focusing on safety and comfort, with the overall aim of comprehensively improving a vision-based intelligent vehicle, a novel Advanced Emergency Braking System (AEBS) is proposed based on a nonlinear model predictive algorithm. Considering the nonlinearities of vehicle dynamics, a vision-based longitudinal vehicle dynamics model is established. On account of the nonlinear coupling characteristics of the driver, surroundings, and vehicle itself, a hierarchical control structure is proposed to decouple and coordinate the system. To avoid or reduce the collision risk between the intelligent vehicle and collision objects, a coordinated cost function of tracking safety, comfort, and fuel economy is formulated. Based on the terminal constraints of stable tracking, a multi-objective optimization controller is proposed using the theory of nonlinear model predictive control. To quickly and precisely track the control target in a finite time, an electronic brake controller for the AEBS is designed based on the Nonsingular Fast Terminal Sliding Mode (NFTSM) control theory. To validate the performance and advantages of the proposed algorithm, simulations are implemented. According to the simulation results, the proposed algorithm has better integrated performance in reducing the collision risk and improving the driving comfort and fuel economy of the smart car compared with the existing single AEBS.

  5. Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Shiraishi, Satomi; Moore, Kevin L., E-mail: kevinmoore@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California 92093 (United States)

    2016-01-15

Purpose: To demonstrate knowledge-based 3D dose prediction for external beam radiotherapy. Methods: Using previously treated plans as training data, an artificial neural network (ANN) was trained to predict a dose matrix based on patient-specific geometric and planning parameters, such as the closest distance (r) to the planning target volume (PTV) and organs-at-risk (OARs). Twenty-three prostate and 43 stereotactic radiosurgery/radiotherapy (SRS/SRT) cases with at least one nearby OAR were studied. All were planned with volumetric-modulated arc therapy to prescription doses of 81 Gy for prostate and 12–30 Gy for SRS. Using these clinically approved plans, ANNs were trained to predict the dose matrix, and the predictive accuracy was evaluated using the dose difference between the clinical plan and the prediction, δD = D_clin − D_pred. The mean (⟨δD_r⟩), standard deviation (σ_δD_r), and their interquartile range (IQR) for the training plans were evaluated at a 2–3 mm interval from the PTV boundary (r_PTV) to assess prediction bias and precision. Initially, unfiltered models, trained using all plans in the cohorts, were created for each treatment site. These models predict approximately the average quality of OAR sparing. By emphasizing during training a subset of plans that exhibited superior-to-average OAR sparing, refined models were created to predict high-quality rectum sparing for prostate and brainstem sparing for SRS. Using the refined model, potentially suboptimal plans were identified where the model predicted that further sparing of the OARs was achievable. Replans were performed to test whether the OAR sparing could be improved as predicted by the model. Results: The refined models demonstrated highly accurate dose distribution prediction. For prostate cases, the average prediction bias for all voxels irrespective of organ delineation ranged from −1% to 0% with a maximum IQR of 3% over r_PTV ∈ [−6, 30] mm. The

  6. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • Fractal extrapolation-interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function is designed that is composed of the interpolation error and constraint items on the vertical scaling factors of the fractal interpolation iterated function system. The chaos optimization algorithm is used to optimize this function and resolve the optimal vertical scaling factors. According to the self-similarity characteristic and scale invariance, fractal extrapolation-interpolation prediction can be performed by extending the fractal characteristic from an internal interval to an external interval. Simulation results show that the fractal interpolation prediction method obtains better prediction results than other methods for wind speed series with fractal characteristics, and the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series
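The rescaled range (R/S) analysis mentioned above, used to decide whether a series has a Hurst exponent close to 1 and is thus a candidate for fractal interpolation, can be sketched as follows. The wind-speed stand-in below is a synthetic strongly persistent series, so its estimated Hurst exponent should come out near 1.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    # Rescaled-range estimate of the Hurst exponent: slope of
    # log(mean R/S) versus log(window size)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())     # cumulative deviation from mean
            r = z.max() - z.min()           # range of the deviation
            s = w.std()                     # standard deviation of the window
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(7)
# Strongly persistent (trending) synthetic series -> H close to 1
series = np.cumsum(rng.normal(0.5, 0.1, 2000))
H = hurst_rs(series, [16, 32, 64, 128, 256])
```

A series with H near 0.5 (uncorrelated increments) would not benefit from the method; the paper's criterion targets series like this one, with H near 1.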

  7. Fast integration-based prediction bands for ordinary differential equation models.

    Science.gov (United States)

    Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel

    2016-04-15

    To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties of the model's parameters and in turn to uncertainties of predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org (contact: helge.hass@fdm.uni-freiburg.de). Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved.

  8. Location Prediction-Based Data Dissemination Using Swarm Intelligence in Opportunistic Cognitive Networks

    Directory of Open Access Journals (Sweden)

    Jie Li

    2014-01-01

    Full Text Available Swarm intelligence is widely used in the application of communication networks. In this paper we adopt a biologically inspired strategy to investigate the data dissemination problem in the opportunistic cognitive networks (OCNs. We model the system as a centralized and distributed hybrid system including a location prediction server and a pervasive environment deploying the large-scale human-centric devices. To exploit such environment, data gathering and dissemination are fundamentally based on the contact opportunities. To tackle the lack of contemporaneous end-to-end connectivity in opportunistic networks, we apply ant colony optimization as a cognitive heuristic technology to formulate a self-adaptive dissemination-based routing scheme in opportunistic cognitive networks. This routing strategy has attempted to find the most appropriate nodes conveying messages to the destination node based on the location prediction information and intimacy between nodes, which uses the online unsupervised learning on geographical locations and the biologically inspired algorithm on the relationship of nodes to estimate the delivery probability. Extensive simulation is carried out on the real-world traces to evaluate the accuracy of the location prediction and the proposed scheme in terms of transmission cost, delivery ratio, average hops, and delivery latency, which achieves better routing performances compared to the typical routing schemes in OCNs.
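The relay-selection idea in this record, ranking candidate nodes by a delivery probability estimated from location prediction and node intimacy, can be sketched in a few lines. This is a toy illustration, not the paper's ant colony optimization scheme; the weighting `alpha` and the function names are assumptions.

```python
def delivery_score(location_prob, intimacy, alpha=0.6):
    """Combine the two estimates used to rank candidate relays (alpha is illustrative)."""
    return alpha * location_prob + (1 - alpha) * intimacy

def pick_relay(candidates):
    """candidates maps node -> (location_prob, intimacy); return the best relay."""
    return max(candidates, key=lambda n: delivery_score(*candidates[n]))
```

Depending on the weighting, a node with strong location evidence can win over one with higher intimacy alone, which matches the record's use of both signals.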

  9. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    Science.gov (United States)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and users should be able to predict their RUL. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of the Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process is based on fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process is based on real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM has a strong ability and good performance in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.
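The seven-state health classification and RUL estimation described above can be illustrated with a minimal sketch: bucket a scalar health indicator into seven classes and extrapolate it linearly to a failure threshold. The thresholds and the linear extrapolation are assumptions for demonstration; the paper's SFAM/Weibull pipeline is far richer.

```python
def health_class(indicator, failure_level=7.0):
    """Map an indicator in [0, failure_level) to one of 7 classes (0 = healthy)."""
    cls = int(indicator / failure_level * 7)
    return min(cls, 6)

def estimate_rul(t, x, failure_level=7.0):
    """Linearly extrapolate from the last two (time, indicator) samples to failure."""
    slope = (x[-1] - x[-2]) / (t[-1] - t[-2])
    if slope <= 0:
        return float("inf")  # no observed degradation trend
    return (failure_level - x[-1]) / slope
```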

  10. Analytical prediction of thermal performance of hypervapotron and its application to ITER

    International Nuclear Information System (INIS)

    Baxi, C.B.; Falter, H.

    1992-09-01

    A hypervapotron (HV) is a water cooled device made of high thermal conductivity material such as copper. A surface heat flux of up to 30 MW/m² has been achieved in copper hypervapotrons cooled by water at a velocity of 10 m/s and at a pressure of six bar. Hypervapotrons have been used in the past as beam dumps at the Joint European Torus (JET), and it is planned to use them for divertor cooling during the Mark II upgrade of JET. Although a large amount of experimental data has been collected on these devices, an analytical performance prediction had not been done before due to the complexity of the heat transfer mechanisms. A method to analytically predict the thermal performance of the hypervapotron is described. The method uses a combination of a number of thermal hydraulic correlations and a finite element analysis. The analytical prediction shows excellent agreement with experimental results over a wide range of velocities, pressures, subcoolings, and geometries. The method was also used to predict the performance of a hypervapotron made of beryllium. Merits of the use of hypervapotrons for the International Thermonuclear Experimental Reactor (ITER) and the Tokamak Physics Experiment (TPX) are discussed.

  11. MPC-based energy management with adaptive Markov-chain prediction for a dual-mode hybrid electric vehicle

    Institute of Scientific and Technical Information of China (English)

    XIANG ChangLe; DING Feng; WANG WeiDa; HE Wei; QI YunLong

    2017-01-01

    The energy management strategy is an important part of a hybrid electrical vehicle design. It is used to improve fuel economy and to sustain a proper battery state of charge by controlling the power components while satisfying various constraints and driving demands. However, achieving an optimal control performance is challenging due to the nonlinearities of the hybrid powertrain, the time-varying constraints, and the dilemma in which controller complexity and real-time capability are generally conflicting objectives. In this paper, a real-time capable cascaded control strategy is proposed for a dual-mode hybrid electric vehicle that considers nonlinearities of the system and complies with all time-varying constraints. The strategy consists of a supervisory controller based on non-linear model predictive control (MPC) with a long sampling time interval and a coordinating controller based on linear model predictive control with a short sampling time interval to deal with different dynamics of the system. Additionally, a novel data-based methodology using adaptive Markov chains to predict future load demand is introduced. The predicted future information is used to improve controller performance. The proposed strategy is implemented on a real test-bed and experimental trials using unknown driving cycles are conducted. The results demonstrate the validity of the proposed approach and show that fuel economy is significantly improved compared with other methods.

  12. MPC-based energy management with adaptive Markov-chain prediction for a dual-mode hybrid electric vehicle

    Institute of Scientific and Technical Information of China (English)

    XIANG ChangLe; DING Feng; WANG WeiDa; HE Wei; QI YunLong

    2017-01-01

    The energy management strategy is an important part of a hybrid electrical vehicle design. It is used to improve fuel economy and to sustain a proper battery state of charge by controlling the power components while satisfying various constraints and driving demands. However, achieving an optimal control performance is challenging due to the nonlinearities of the hybrid powertrain, the time-varying constraints, and the dilemma in which controller complexity and real-time capability are generally conflicting objectives. In this paper, a real-time capable cascaded control strategy is proposed for a dual-mode hybrid electric vehicle that considers nonlinearities of the system and complies with all time-varying constraints. The strategy consists of a supervisory controller based on non-linear model predictive control (MPC) with a long sampling time interval and a coordinating controller based on linear model predictive control with a short sampling time interval to deal with different dynamics of the system. Additionally, a novel data-based methodology using adaptive Markov chains to predict future load demand is introduced. The predicted future information is used to improve controller performance. The proposed strategy is implemented on a real test-bed and experimental trials using unknown driving cycles are conducted. The results demonstrate the validity of the proposed approach and show that fuel economy is significantly improved compared with other methods.
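The adaptive Markov-chain load prediction described in this record can be sketched as an online transition-count model over quantized load levels: each observed transition updates the counts, and the next level is predicted as the most frequent successor of the current one. The quantization step and tie-breaking are illustrative assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

class MarkovLoadPredictor:
    """Minimal adaptive Markov chain over quantized load levels."""

    def __init__(self, step=10.0):
        self.step = step
        self.counts = defaultdict(lambda: defaultdict(int))  # level -> successor counts
        self.prev = None

    def _level(self, load):
        return int(load // self.step)

    def observe(self, load):
        lvl = self._level(load)
        if self.prev is not None:
            self.counts[self.prev][lvl] += 1  # adapt transition statistics online
        self.prev = lvl

    def predict_next(self):
        succ = self.counts.get(self.prev)
        if not succ:
            return self.prev  # no statistics yet: fall back to persistence
        return max(succ, key=succ.get)
```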

  13. Modeling and simulation of adaptive Neuro-fuzzy based intelligent system for predictive stabilization in structured overlay networks

    Directory of Open Access Journals (Sweden)

    Ramanpreet Kaur

    2017-02-01

    Full Text Available Intelligent prediction of neighboring node dynamism (the k well-defined neighbors specified by the DHT protocol) is helpful to improve the resilience and can reduce the overhead associated with topology maintenance of structured overlay networks. The dynamic behavior of overlay nodes depends on many factors, such as the underlying user’s online behavior, geographical position, time of the day, and day of the week, as reported in many applications. We can exploit these characteristics for efficient maintenance of structured overlay networks by implementing an intelligent predictive framework for setting stabilization parameters appropriately. Considering the fact that human-driven behavior usually goes beyond intermittent availability patterns, we use a hybrid Neuro-fuzzy based predictor to enhance the accuracy of the predictions. In this paper, we discuss our predictive stabilization approach, implement Neuro-fuzzy based prediction in a MATLAB simulation, and apply this predictive stabilization model in a Chord-based overlay network using OverSim as a simulation tool. The MATLAB simulation results show that the behavior of neighboring nodes is predictable to a large extent, as indicated by the very small RMSE. The OverSim-based simulation results also show significant improvements in the performance of the Chord-based overlay network in terms of lookup success ratio, lookup hop count and maintenance overhead as compared to the periodic stabilization approach.

  14. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    Science.gov (United States)

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  15. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time series data (TSD) is a challenging research problem in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANN maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique are incorporated in the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preserving the data trend.
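The moving-average (MA) filter pre-processing step mentioned in the record smooths the series before modeling. A minimal trailing-window version, with a growing window for the first few points so the output keeps the input's length, might look like this (the window size and edge handling are assumptions):

```python
def moving_average(series, window=3):
    """Trailing moving-average filter; early points use a growing window."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```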

  16. Assessing the model transferability for prediction of transcription factor binding sites based on chromatin accessibility.

    Science.gov (United States)

    Liu, Sheng; Zibetti, Cristina; Wan, Jun; Wang, Guohua; Blackshaw, Seth; Qian, Jiang

    2017-07-27

    Computational prediction of transcription factor (TF) binding sites in different cell types is challenging. Recent technology development allows us to determine the genome-wide chromatin accessibility in various cellular and developmental contexts. The chromatin accessibility profiles provide useful information in prediction of TF binding events in various physiological conditions. Furthermore, ChIP-Seq analysis was used to determine genome-wide binding sites for a range of different TFs in multiple cell types. Integration of these two types of genomic information can improve the prediction of TF binding events. We assessed to what extent a model built upon other TFs and/or other cell types could be used to predict the binding sites of TFs of interest. A random forest model was built using a set of cell type-independent features such as specific sequences recognized by the TFs and evolutionary conservation, as well as cell type-specific features derived from chromatin accessibility data. Our analysis suggested that the models learned from other TFs and/or cell lines performed almost as well as the model learned from the target TF in the cell type of interest. Interestingly, models based on multiple TFs performed better than single-TF models. Finally, we proposed a universal model, BPAC, which was generated using ChIP-Seq data from multiple TFs in various cell types. Integrating chromatin accessibility information with sequence information improves prediction of TF binding. The prediction of TF binding is transferable across TFs and/or cell lines, suggesting there is a set of universal "rules". A computational tool was developed to predict TF binding sites based on the universal "rules".

  17. Demographic factors and hospital size predict patient satisfaction variance--implications for hospital value-based purchasing.

    Science.gov (United States)

    McFarland, Daniel C; Ornstein, Katherine A; Holcombe, Randall F

    2015-08-01

    Hospital Value-Based Purchasing (HVBP) incentivizes quality performance-based healthcare by linking payments directly to patient satisfaction scores obtained from Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Lower HCAHPS scores appear to cluster in heterogeneous population-dense areas and could bias Centers for Medicare & Medicaid Services (CMS) reimbursement. Assess nonrandom variation in patient satisfaction as determined by HCAHPS. Multivariate regression modeling was performed for individual dimensions of HCAHPS and aggregate scores. Standardized partial regression coefficients assessed strengths of predictors. Weighted Individual (hospital) Patient Satisfaction Adjusted Score (WIPSAS) utilized 4 highly predictive variables, and hospitals were reranked accordingly. A total of 3907 HVBP-participating hospitals. There were 934,800 patient surveys by the most conservative estimate. A total of 3144 county demographics (US Census) and HCAHPS surveys. Hospital size and primary language (non-English speaking) most strongly predicted unfavorable HCAHPS scores, whereas education and white ethnicity most strongly predicted favorable HCAHPS scores. The average adjusted patient satisfaction scores calculated by WIPSAS approximated the national average of HCAHPS scores. However, WIPSAS changed hospital rankings by variable amounts depending on the strength of the predictive variables in the hospitals' locations. Structural and demographic characteristics that predict lower scores were accounted for by WIPSAS that also improved rankings of many safety-net hospitals and academic medical centers in diverse areas. Demographic and structural factors (eg, hospital beds) predict patient satisfaction scores even after CMS adjustments. CMS should consider WIPSAS or a similar adjustment to account for the severity of patient satisfaction inequities that hospitals could strive to correct. © 2015 Society of Hospital Medicine.

  18. Demographic Factors and Hospital Size Predict Patient Satisfaction Variance- Implications for Hospital Value-Based Purchasing

    Science.gov (United States)

    McFarland, Daniel C.; Ornstein, Katherine; Holcombe, Randall F.

    2016-01-01

    Background Hospital Value-Based Purchasing (HVBP) incentivizes quality performance based healthcare by linking payments directly to patient satisfaction scores obtained from Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Lower HCAHPS scores appear to cluster in heterogeneous population dense areas and could bias CMS reimbursement. Objective Assess nonrandom variation in patient satisfaction as determined by HCAHPS. Design Multivariate regression modeling was performed for individual dimensions of HCAHPS and aggregate scores. Standardized partial regression coefficients assessed strengths of predictors. Weighted Individual (hospital) Patient Satisfaction Adjusted Score (WIPSAS) utilized four highly predictive variables and hospitals were re-ranked accordingly. Setting 3,907 HVBP-participating hospitals. Patients 934,800 patient surveys, by most conservative estimate. Measurements 3,144 county demographics (U.S. Census), and HCAHPS. Results Hospital size and primary language (‘non-English speaking’) most strongly predicted unfavorable HCAHPS scores while education and white ethnicity most strongly predicted favorable HCAHPS scores. The average adjusted patient satisfaction scores calculated by WIPSAS approximated the national average of HCAHPS scores. However, WIPSAS changed hospital rankings by variable amounts depending on the strength of the predictive variables in the hospitals’ locations. Structural and demographic characteristics that predict lower scores were accounted for by WIPSAS that also improved rankings of many safety-net hospitals and academic medical centers in diverse areas. Conclusions Demographic and structural factors (e.g., hospital beds) predict patient satisfaction scores even after CMS adjustments. CMS should consider WIPSAS or a similar adjustment to account for the severity of patient satisfaction inequities that hospitals could strive to correct. PMID:25940305

  19. Prediction of small spark ignited engine performance using producer gas as fuel

    Directory of Open Access Journals (Sweden)

    N. Homdoung

    2015-03-01

    Full Text Available Producer gas from biomass gasification is expected to contribute to a greater energy mix in the future. Therefore, the effect of producer gas on engine performance is of great interest. Evaluating engine performance experimentally can be difficult and costly; ideally, it may be predicted mathematically. This work applied mathematical models to evaluate the performance of a small producer gas engine. The engine was a spark ignition, single cylinder unit with a compression ratio (CR) of 14:1. Simulation was carried out at full load and varying engine speeds. It was found that the simple mathematical model could predict the performance of the gas engine, giving good agreement with experimental results; the differences were within ±7%.

  20. Artificial Fish Swarm Algorithm-Based Particle Filter for Li-Ion Battery Life Prediction

    Directory of Open Access Journals (Sweden)

    Ye Tian

    2014-01-01

    Full Text Available An intelligent online prognostic approach is proposed for predicting the remaining useful life (RUL) of lithium-ion (Li-ion) batteries based on artificial fish swarm algorithm (AFSA) and particle filter (PF), which is an integrated approach combining model-based method with data-driven method. The parameters, used in the empirical model which is based on the capacity fade trends of Li-ion batteries, are identified dependent on the tracking ability of PF. AFSA-PF aims to improve the performance of the basic PF. By driving the prior particles to the domain with high likelihood, AFSA-PF allows global optimization, prevents particle degeneracy, thereby improving particle distribution and increasing prediction accuracy and algorithm convergence. Data provided by NASA are used to verify this approach and compare it with basic PF and regularized PF. AFSA-PF is shown to be more accurate and precise.
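Particle degeneracy, which AFSA-PF is designed to mitigate, is classically handled in a basic particle filter by a resampling step. The following is the standard systematic-resampling algorithm, shown for context; it is not the paper's AFSA variant (the injectable `rng` parameter is an assumption for testability):

```python
import random

def systematic_resample(particles, weights, rng=random.random):
    """Draw n particles with probability proportional to their weights,
    using a single random offset and evenly spaced pointers."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = rng() * step          # single random offset in [0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while cum < u:        # advance to the particle covering pointer u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out
```

AFSA-PF replaces this kind of blind duplication with fish-swarm moves that pull low-weight particles toward high-likelihood regions before resampling.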

  1. Fault diagnosis and performance evaluation for high current LIA based on radial basis function neural network

    International Nuclear Information System (INIS)

    Yang Xinglin; Wang Huacen; Chen Nan; Dai Wenhua; Li Jin

    2006-01-01

    High current linear induction accelerator (LIA) is a complicated experimental physics device whose performance is difficult to evaluate and predict. This paper presents a method which combines wavelet packet transform and radial basis function (RBF) neural networks to build a fault diagnosis and performance evaluation system in order to improve the reliability of high current LIA. The signal characteristic vectors, extracted from the energy parameters of the wavelet packet transform, represent the temporal and steady-state features of the pulsed power signal well and reduce the data dimensions effectively. The fault diagnosis system for the accelerating cell and the trend classification system for the beam current, both based on RBF networks, can perform fault diagnosis and evaluation, and provide predictive information for precise maintenance of high current LIA. (authors)
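Energy-based feature extraction of the kind described above can be illustrated with a one-level Haar split: the signal is divided into approximation and detail subbands and each subband's energy becomes a feature. The actual work uses deeper wavelet packet decompositions; this single-level Haar version is a simplified stand-in.

```python
from math import sqrt

def haar_energies(x):
    """One-level Haar split of an even-length signal; return
    (approximation_energy, detail_energy) as a 2-element feature vector."""
    approx = [(x[i] + x[i + 1]) / sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / sqrt(2) for i in range(0, len(x) - 1, 2)]
    return sum(a * a for a in approx), sum(d * d for d in detail)
```

Because the Haar transform is orthogonal, the two subband energies sum to the signal energy, so the feature pair compresses the signal without discarding its energy distribution.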

  2. A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data

    Directory of Open Access Journals (Sweden)

    WANG Fuhong

    2016-12-01

    Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. This method builds a quadratic model or a linear model combined with periodic terms to fit the time series of clock offset rates, and computes the trend coefficients of the model by best estimation. The clock offset precisely estimated at the initial prediction epoch is directly adopted to calculate the constant coefficient of the model. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as modeling data sets to perform experiments for different types of GPS satellite clocks. The results show that the clock prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional prediction method based on fitting original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method are improved by about 15.7%, 23.7%, 27.4% and 34.4% respectively.
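The core idea, fit a trend model to clock offset *rates*, anchor the constant term with the precisely estimated offset at the prediction epoch, then integrate, can be sketched with a plain linear rate model (no periodic terms, for brevity; the paper's models are richer):

```python
def fit_linear(ts, rates):
    """Ordinary least squares for rate(t) = a + b*t; assumes >= 2 distinct epochs."""
    n = len(ts)
    mt = sum(ts) / n
    mr = sum(rates) / n
    b = (sum((t - mt) * (r - mr) for t, r in zip(ts, rates))
         / sum((t - mt) ** 2 for t in ts))
    a = mr - b * mt
    return a, b

def predict_offset(offset0, a, b, t):
    """Integrate the rate model from 0 to t, anchored at the known offset."""
    return offset0 + a * t + 0.5 * b * t * t
```

Anchoring with `offset0` is what distinguishes this from fitting the offsets directly: the constant term is taken from the precise estimate rather than from the regression.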

  3. A storm-based CSLE incorporating the modified SCS-CN method for soil loss prediction on the Chinese Loess Plateau

    Science.gov (United States)

    Shi, Wenhai; Huang, Mingbin

    2017-04-01

    The Chinese Loess Plateau is one of the most erodible areas in the world. In order to reduce soil and water losses, suitable conservation practices need to be designed. For this purpose, there is an increasing demand for an appropriate model that can accurately predict storm-based surface runoff and soil losses on the Loess Plateau. The Chinese Soil Loss Equation (CSLE) has been widely used in this region to assess soil losses from different land use types. However, the CSLE was intended only to predict the mean annual gross soil loss. In this study, a storm-based CSLE was proposed that introduced a new rainfall-runoff erosivity factor. A dataset was compiled that comprised measurements of soil losses during individual storms from three runoff-erosion plots in each of three different watersheds in the gully region of the Plateau for 3-7 years in three different time periods (1956-1959; 1973-1980; 2010-2013). The accuracy of the soil loss predictions made by the new storm-based CSLE was determined using the data for the six plots in two of the watersheds measured during 165 storm-runoff events. The performance of the storm-based CSLE was further compared with the performance of the storm-based Revised Universal Soil Loss Equation (RUSLE) for the same six plots. During the calibration (83 storms) and validation (82 storms) of the storm-based CSLE, the model efficiency, E, was 87.7% and 88.9%, respectively, while the root mean square error (RMSE) was 2.7 and 2.3 t ha-1, indicating a high degree of accuracy. Furthermore, the storm-based CSLE performed better than the storm-based RUSLE (E: 75.8% and 70.3%; RMSE: 3.8 and 3.7 t ha-1, for the calibration and validation storms, respectively). The storm-based CSLE was then used to predict the soil losses from the three experimental plots in the third watershed. For these predictions, the model parameter values, previously determined by the calibration based on the data from the initial six plots, were used.
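The model efficiency E reported in this record is the Nash-Sutcliffe efficiency, which compares squared prediction error against the variance of the observations around their mean (1 is a perfect fit, 0 means no better than predicting the mean):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance-about-mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - num / den
```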

  4. A Comparative Study to Predict Student’s Performance Using Educational Data Mining Techniques

    Science.gov (United States)

    Uswatun Khasanah, Annisa; Harwati

    2017-06-01

    Student performance prediction is essential for a university to prevent student failure. The number of student dropouts is one parameter that can be used to measure student performance, and it is one important point that must be evaluated in Indonesian university accreditation. Data mining has been widely used to predict student performance, and data mining applied in this field is usually called Educational Data Mining. This study conducted Feature Selection to select the attributes with the highest influence on student performance in the Department of Industrial Engineering, Universitas Islam Indonesia. Then, two popular classification algorithms, Bayesian Network and Decision Tree, were implemented and compared to determine the best prediction result. The outcome showed that student attendance and GPA in the first semester were in the top rank of all Feature Selection methods, and that Bayesian Network outperformed Decision Tree since it had a higher accuracy rate.
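Filter-style feature selection of the kind used in educational data mining can be illustrated by ranking a discrete feature by its information gain against the pass/fail label. The study does not name its exact selection method, so this is a generic sketch of the idea:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Entropy reduction obtained by splitting the labels on a discrete feature."""
    gain = entropy(labels)
    n = len(labels)
    for v in set(feature):
        sub = [lab for f, lab in zip(feature, labels) if f == v]
        gain -= len(sub) / n * entropy(sub)
    return gain
```

A feature that perfectly separates pass from fail recovers the full label entropy; an irrelevant feature yields a gain of zero, which is how attendance and first-semester GPA would rise to the top rank.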

  5. Optimization of arterial age prediction models based in pulse wave

    Energy Technology Data Exchange (ETDEWEB)

    Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Aplied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)

    2007-11-15

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection onto the main factors of a Principal Components Analysis. The model performance was tested using the .632E bootstrap error estimator. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.
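The .632 bootstrap error estimator used in this record blends the (optimistic) resubstitution error with the (pessimistic) out-of-bag error at fixed weights:

```python
def bootstrap_632(err_resubstitution, err_out_of_bag):
    """E.632 = 0.368 * training (resubstitution) error + 0.632 * out-of-bag error."""
    return 0.368 * err_resubstitution + 0.632 * err_out_of_bag
```

The 0.632 weight is the asymptotic probability that a given sample appears in a bootstrap resample, which is why the out-of-bag term dominates the estimate.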

  6. Optimization of arterial age prediction models based in pulse wave

    International Nuclear Information System (INIS)

    Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M

    2007-01-01

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection onto the main factors of a Principal Components Analysis. The model performance was tested using the .632E bootstrap error estimator. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  7. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
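A least-square (LS) predictor of the kind this record designs per region can be shown in a toy two-neighbour form: fit weights for the left and top neighbours from previously coded pixels via the normal equations, then predict the current pixel. Region segmentation, larger neighbourhoods, and numerical safeguards are omitted; this is an illustration of the principle, not the paper's predictor.

```python
def ls_predict(samples, left, top):
    """samples: (left, top, actual) triples from the causal neighbourhood.
    Solves the 2x2 normal equations and predicts w1*left + w2*top."""
    s11 = sum(l * l for l, t, _ in samples)
    s12 = sum(l * t for l, t, _ in samples)
    s22 = sum(t * t for l, t, _ in samples)
    b1 = sum(l * a for l, t, a in samples)
    b2 = sum(t * a for l, t, a in samples)
    det = s11 * s22 - s12 * s12   # assumed nonzero for this sketch
    w1 = (b1 * s22 - b2 * s12) / det
    w2 = (s11 * b2 - s12 * b1) / det
    return w1 * left + w2 * top
```

Only the prediction residual is then entropy-coded, which is where the compression gain over fixed predictors such as JPEG-LS's median predictor comes from.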

  8. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    Science.gov (United States)

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

To explore the performance of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model in predicting schistosomiasis infection rates in a population, the ARIMA model, the NARNN model and the hybrid ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China, and the fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the ARIMA-NARNN model yielded the lowest mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates, which might have great application value for the prevention and control of schistosomiasis.
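In hybrid models of this kind, the ARIMA component typically captures the linear structure and the NARNN is trained on the ARIMA residuals, the final forecast being the sum of the two parts. The three comparison criteria can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def forecast_errors(actual, predicted):
    """MSE, MAE and MAPE of a forecast, the three criteria used to
    compare ARIMA, NARNN and hybrid ARIMA-NARNN models."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = np.mean(err ** 2)               # mean square error
    mae = np.mean(np.abs(err))            # mean absolute error
    mape = np.mean(np.abs(err / actual))  # mean absolute percentage error
    return mse, mae, mape
```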

  9. GWAS-based machine learning approach to predict duloxetine response in major depressive disorder.

    Science.gov (United States)

    Maciukiewicz, Malgorzata; Marshe, Victoria S; Hauschild, Anne-Christin; Foster, Jane A; Rotzinger, Susan; Kennedy, James L; Kennedy, Sidney H; Müller, Daniel J; Geraci, Joseph

    2018-04-01

Major depressive disorder (MDD) is one of the most prevalent psychiatric disorders and is commonly treated with antidepressant drugs. However, large variability is observed in response to antidepressants, and machine learning (ML) models may be useful to predict treatment outcomes. A sample of 186 MDD patients who received treatment with duloxetine for up to 8 weeks was categorized into "responders" (MADRS change >50% from baseline) and "remitters" (MADRS score ≤10 at endpoint). The initial dataset (N = 186) was randomly divided into training and test sets in a nested 5-fold cross-validation, with 80% used for training and 20% making up five independent test sets. We performed genome-wide logistic regression to identify potentially significant variants related to duloxetine response/remission and extracted the most promising predictors using LASSO regression. Subsequently, classification-regression trees (CRT) and support vector machines (SVM) were applied to construct models, using ten-fold cross-validation. For response, none of the models performed significantly better than chance (accuracy p > .1). For remission, SVM achieved moderate performance (accuracy = 0.52, sensitivity = 0.58, specificity = 0.46), while CRT reached 0.51 on all three measures. The best-performing SVM fold was characterized by an accuracy of 0.66 (p = .071), a sensitivity of 0.70 and a specificity of 0.61. In this study, the potential of using GWAS data to predict duloxetine outcomes was examined with ML models. The models showed promising sensitivity, but specificity remained moderate at best. The inclusion of additional non-genetic variables to create integrated models may improve prediction. Copyright © 2017. Published by Elsevier Ltd.
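The reported accuracy, sensitivity and specificity all derive from the binary confusion matrix; a small self-contained sketch:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives) and specificity
    (recall on negatives) for binary labels coded 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```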

  10. Corrosion models for predictions of performance of high-level radioactive-waste containers

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, J.C.; McCright, R.D. [Lawrence Livermore National Lab., CA (United States); Gdowski, G.E. [KMI Energy Services, Livermore, CA (United States)

    1991-11-01

    The present plan for disposal of high-level radioactive waste in the US is to seal it in containers before emplacement in a geologic repository. A proposed site at Yucca Mountain, Nevada, is being evaluated for its suitability as a geologic repository. The containers will probably be made of either an austenitic or a copper-based alloy. Models of alloy degradation are being used to predict the long-term performance of the containers under repository conditions. The models are of uniform oxidation and corrosion, localized corrosion, and stress corrosion cracking, and are applicable to worst-case scenarios of container degradation. This paper reviews several of the models.

  11. Incremental Validity of Personality Measures in Predicting Underwater Performance and Adaptation.

    Science.gov (United States)

    Colodro, Joaquín; Garcés-de-Los-Fayos, Enrique J; López-García, Juan J; Colodro-Conde, Lucía

    2015-03-17

Intelligence and personality traits are currently considered effective predictors of human behavior and job performance, but few studies have examined their relevance in the underwater environment. Data from a sample of military personnel taking scuba diving courses were analyzed with regression techniques, testing the contribution of individual differences and ascertaining the incremental validity of personality in an environment with extreme psychophysical demands. The results confirmed the incremental validity of personality traits (ΔR² = .20, f² = .25) over the predictive contribution of general mental ability (ΔR² = .07, f² = .08) in divers' performance. Moreover, personality (R_L² = .34) also showed higher validity than general mental ability (R_L² = .09) for predicting underwater adaptation. The ROC curve indicated 86% of the maximum possible discrimination power for the prediction of underwater adaptation (AUC = .86), supporting personality traits as predictors of an effective response to the changing circumstances of military scuba diving. The findings may also improve understanding of the behavioral effects and psychophysiological complications of diving, and can guide psychological intervention and prevention of risk in this extreme environment.
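Incremental validity here is the gain in explained variance when personality scores enter a model that already contains general mental ability; ΔR² and Cohen's f² can be computed as in this sketch (ordinary least squares on simulated data standing in for the study's variables):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def incremental_validity(X_base, X_added, y):
    """Delta R^2 and Cohen's f^2 for adding predictors to a base model."""
    r2_base = r_squared(X_base, y)
    r2_full = r_squared(np.column_stack([X_base, X_added]), y)
    delta = r2_full - r2_base
    f2 = delta / (1 - r2_full)    # Cohen's f^2 effect size
    return delta, f2
```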

  12. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning), and discusses the pros and cons of several different types of decision tree-based regression methods. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p), which are often used in physiologically based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. The decision tree-based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data, and from flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
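The mean fold error used to compare the Vss models is commonly defined as the geometric mean fold deviation between predicted and observed values; a sketch under that assumption:

```python
import numpy as np

def mean_fold_error(observed, predicted):
    """Geometric mean fold error: 10 ** mean(|log10(predicted/observed)|).
    1.0 is perfect; 2.33 means predictions deviate 2.33-fold on average."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    return float(10 ** np.mean(np.abs(np.log10(pred / obs))))
```

Because over- and under-prediction enter symmetrically on the log scale, a 2x over-prediction and a 2x under-prediction both count as 2-fold errors.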

  13. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Zhang, Lifan; Dunn, William; Chau, Duen Horng

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.

  14. Trait impulsivity predicts D-KEFS tower test performance in university students.

    Science.gov (United States)

    Lyvers, Michael; Basch, Vanessa; Duff, Helen; Edwards, Mark S

    2015-01-01

The present study examined a widely used self-report index of trait impulsiveness in relation to performance on a well-known neuropsychological executive function test in 70 university undergraduate students (50 women, 20 men) aged 18 to 24 years. Participants completed the Barratt Impulsiveness Scale (BIS-11) and the Frontal Systems Behavior Scale (FrSBe), after which they performed the Tower Test of the Delis-Kaplan Executive Function System. Hierarchical linear regression showed that after controlling for gender, current alcohol consumption, age at onset of weekly alcohol use, and FrSBe scores, BIS-11 significantly predicted Tower Test Achievement scores (β = -.44), indicating that higher trait impulsiveness is associated with poorer executive cognitive performance even in a sample likely to be characterized by relatively high general cognitive functioning (i.e., university students). The results also support the role of inhibition as a key aspect of executive task performance. Elevated scores on the BIS-11 and FrSBe are known to be linked to risky drinking in young adults, as confirmed in this sample; however, only the BIS-11 predicted Tower Test performance.

  15. Do physiological measures predict selected CrossFit® benchmark performance?

    Directory of Open Access Journals (Sweden)

    Butcher SJ

    2015-07-01

Scotty J Butcher,1,2 Tyler J Neyedly,3 Karla J Horvey,1 Chad R Benko2,4 (1Physical Therapy, University of Saskatchewan; 2BOSS Strength Institute; 3Physiology, University of Saskatchewan; 4Synergy Strength and Conditioning, Saskatoon, SK, Canada). Purpose: CrossFit® is a new but extremely popular method of exercise training and competition that involves constantly varied functional movements performed at high intensity. Despite the popularity of this training method, the physiological determinants of CrossFit performance have not yet been reported. The purpose of this study was to determine whether physiological and/or muscle strength measures could predict performance on three common CrossFit "Workouts of the Day" (WODs). Materials and methods: Fourteen CrossFit Open or Regional athletes completed, on separate days, the WODs "Grace" (30 clean and jerks for time), "Fran" (three rounds of thrusters and pull-ups for 21, 15, and nine repetitions), and "Cindy" (20 minutes of rounds of five pull-ups, ten push-ups, and 15 bodyweight squats), as well as the "CrossFit Total" (1 repetition max [1RM] back squat, overhead press, and deadlift), maximal oxygen consumption (VO2max), and Wingate anaerobic power/capacity testing. Results: Performance of Grace and Fran was related to whole-body strength (CrossFit Total; r=-0.88 and -0.65, respectively) and anaerobic threshold (r=-0.61 and -0.53, respectively); however, whole-body strength was the only variable to survive the prediction regression for both of these WODs (R2=0.77 and 0.42, respectively). There were no significant associations or predictors for Cindy. Conclusion: CrossFit benchmark WOD performance cannot be predicted by VO2max, Wingate power/capacity, or either respiratory compensation or anaerobic thresholds. Of the data measured, only whole-body strength can partially explain performance on Grace and Fran, although anaerobic threshold also exhibited association with performance.

  16. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Directory of Open Access Journals (Sweden)

    Yong-Bi Fu

    2017-07-01

Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  17. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875
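The simulation argument, that a small set of markers tagging the causal loci predicts better than an equal number of random markers, can be illustrated with a toy genomic-prediction sketch (sample sizes, marker counts and the ridge predictor are all illustrative assumptions, not the authors' simulation design):

```python
import numpy as np

rng = np.random.default_rng(42)
n_ind, n_snp, n_qtl = 200, 500, 20

# Simulate biallelic genotypes (0/1/2) and a trait controlled by 20 QTL.
geno = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
qtl = rng.choice(n_snp, n_qtl, replace=False)
pheno = geno[:, qtl] @ rng.normal(0.0, 1.0, n_qtl) + rng.normal(0.0, 1.0, n_ind)

def ridge_accuracy(markers):
    """Train ridge regression on the first 100 individuals and return
    the correlation between predicted and observed trait in the rest."""
    X = markers - markers.mean(axis=0)
    y = pheno - pheno.mean()
    Xtr, Xte, ytr, yte = X[:100], X[100:], y[:100], y[100:]
    beta = np.linalg.solve(Xtr.T @ Xtr + np.eye(X.shape[1]), Xtr.T @ ytr)
    return float(np.corrcoef(Xte @ beta, yte)[0, 1])

acc_informative = ridge_accuracy(geno[:, qtl])  # markers tagging causal loci
acc_random = ridge_accuracy(geno[:, rng.choice(n_snp, n_qtl, replace=False)])
```

With these settings the informative set recovers most of the genetic signal, while the random set captures only loci that happen to overlap the QTL.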

  18. Retrospective lifetime dietary patterns predict cognitive performance in community-dwelling older Australians.

    Science.gov (United States)

    Hosking, Diane E; Nettelbeck, Ted; Wilson, Carlene; Danthiir, Vanessa

    2014-07-28

    Dietary intake is a modifiable exposure that may have an impact on cognitive outcomes in older age. The long-term aetiology of cognitive decline and dementia, however, suggests that the relevance of dietary intake extends across the lifetime. In the present study, we tested whether retrospective dietary patterns from the life periods of childhood, early adulthood, adulthood and middle age predicted cognitive performance in a cognitively healthy sample of 352 older Australian adults >65 years. Participants completed the Lifetime Diet Questionnaire and a battery of cognitive tests designed to comprehensively assess multiple cognitive domains. In separate regression models, lifetime dietary patterns were the predictors of cognitive factor scores representing ten constructs derived by confirmatory factor analysis of the cognitive test battery. All regression models were progressively adjusted for the potential confounders of current diet, age, sex, years of education, English as native language, smoking history, income level, apoE ɛ4 status, physical activity, other past dietary patterns and health-related variables. In the adjusted models, lifetime dietary patterns predicted cognitive performance in this sample of older adults. In models additionally adjusted for intake from the other life periods and mechanistic health-related variables, dietary patterns from the childhood period alone reached significance. Higher consumption of the 'coffee and high-sugar, high-fat extras' pattern predicted poorer performance on simple/choice reaction time, working memory, retrieval fluency, short-term memory and reasoning. The 'vegetable and non-processed' pattern negatively predicted simple/choice reaction time, and the 'traditional Australian' pattern positively predicted perceptual speed and retrieval fluency. Identifying early-life dietary antecedents of older-age cognitive performance contributes to formulating strategies for delaying or preventing cognitive decline.

  19. Spherical and cylindrical cavity expansion models based prediction of penetration depths of concrete targets.

    Directory of Open Access Journals (Sweden)

    Xiaochao Jin

The cavity expansion theory is the most widely used approach to predicting the depth of penetration of concrete targets. The main purpose of this work is to clarify the differences between the spherical and cylindrical cavity expansion models and their scope of application in predicting the penetration depths of concrete targets. The factors that influence the dynamic cavity expansion process of concrete materials were first examined. Based on numerical results, the relationship between expansion pressure and velocity was established. Then the parameters in Forrestal's formula were fitted to provide a convenient and effective prediction of the penetration depth. Results showed that both the spherical and cylindrical cavity expansion models can accurately predict the depth of penetration when the initial velocity is lower than 800 m/s. However, the prediction accuracy decreases as the initial velocity and the projectile diameter increase. Based on these results, it can be concluded that when the initial velocity is higher than the critical velocity, the cylindrical cavity expansion model predicts the penetration depth better than the spherical cavity expansion model, while below the critical velocity the opposite holds. This work provides a basic principle for selecting the spherical or cylindrical cavity expansion model to predict the penetration depth of concrete targets.
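In cavity-expansion penetration models the resisting stress on the nose is often approximated as sigma = A + B*v^2, with A a target-strength term and B an inertial term; integrating the deceleration equation then gives a closed-form depth. A sketch under that assumed stress form (parameter values illustrative, not taken from the paper):

```python
import math

def penetration_depth(m, a, A, B, v0):
    """Depth of penetration for a rigid projectile of mass m (kg) and
    shank radius a (m), decelerated by a cavity-expansion resisting
    stress sigma = A + B*v**2 (Pa) acting on the nose:
        m * v * dv/ds = -pi * a**2 * (A + B * v**2)
    which integrates from impact velocity v0 to rest as
        P = m / (2 * pi * a**2 * B) * ln(1 + B * v0**2 / A).
    """
    return m / (2 * math.pi * a ** 2 * B) * math.log(1 + B * v0 ** 2 / A)
```

The logarithmic form shows why accuracy degrades at high velocity: the strength term A matters less and less as the inertial term B*v0^2 dominates.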

  20. Predicting memory performance in normal ageing using different measures of hippocampal size

    International Nuclear Information System (INIS)

    Lye, T.C.; Creasey, H.; Kril, J.J.; Grayson, D.A.; Piguet, O.; Bennett, H.P.; Ridley, L.J.; Broe, G.A.

    2006-01-01

    A number of different methods have been employed to correct hippocampal volumes for individual variation in head size. Researchers have previously used qualitative visual inspection to gauge hippocampal atrophy. The purpose of this study was to determine the best measure(s) of hippocampal size for predicting memory functioning in 102 community-dwelling individuals over 80 years of age. Hippocampal size was estimated using magnetic resonance imaging (MRI) volumetry and qualitative visual assessment. Right and left hippocampal volumes were adjusted by three different estimates of head size: total intracranial volume (TICV), whole-brain volume including ventricles (WB+V) and a more refined measure of whole-brain volume with ventricles extracted (WB). We compared the relative efficacy of these three volumetric adjustment methods and visual ratings of hippocampal size in predicting memory performance using linear regression. All four measures of hippocampal size were significant predictors of memory performance. TICV-adjusted volumes performed most poorly in accounting for variance in memory scores. Hippocampal volumes adjusted by either measure of whole-brain volume performed equally well, although qualitative visual ratings of the hippocampus were at least as effective as the volumetric measures in predicting memory performance in community-dwelling individuals in the ninth or tenth decade of life. (orig.)